arXiv:2302.02099
Huyuan Chen, Konstantinos T. Gkikas, Phuoc-Tai Nguyen
Published: 2023-02-04
Link: http://arxiv.org/abs/2302.02099v1
# Semilinear elliptic equations involving fractional Hardy operators

###### Abstract.

Our aim in this article is to study semilinear elliptic equations involving a fractional Hardy operator, an absorption term and a Radon source in a weighted distributional sense. We show various scenarios, produced by the combined effect of the fractional Hardy potential, the growth of the absorption term and the concentration of the measure, in which existence and uniqueness results hold.

Key words: _Semilinear elliptic problem; Fractional Hardy operators; Fractional Hardy potential; Radon measures._

Mathematics Subject Classification: _35R11; 35J70; 35B40._

###### Contents

* 1 Introduction
* 1.1 Review of the literature
* 1.2 Framework, notion of solutions and main results
* 2 Preliminary
* 2.1 Basic study of Poisson problem
* 2.2 Local behavior at the origin
* 3 The semilinear problem with interior measure data
* 3.1 Estimates in the weighted weak Lebesgue space
* 3.2 Existence and uniqueness
* 4 Semilinear problem with Dirac measures concentrated at the origin
* 5 Semilinear problem with measures supported on the whole domain
## 1. Introduction

In this paper, we study semilinear elliptic equations driven by the fractional Hardy operator \(\mathcal{L}^{s}_{\mu}:=(-\Delta)^{s}+\frac{\mu}{|x|^{2s}}\), where \(s\in(0,1)\), \(\mu\in\mathbb{R}\) and \((-\Delta)^{s}\) denotes the fractional Laplacian, in a bounded domain \(\Omega\subset\mathbb{R}^{N}\) containing the origin. We consider the following semilinear elliptic equation \[\begin{cases}\mathcal{L}^{s}_{\mu}u+g(u)=\tilde{\nu}&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{N}\setminus\Omega,\end{cases} \tag{1.1}\] where \(g:\mathbb{R}\to\mathbb{R}\) is a nondecreasing continuous function such that \(g(0)=0\) and \(\tilde{\nu}\) is a bounded measure on \(\Omega\).

### Review of the literature

When \(s=1\), the operator \(\mathcal{L}^{s}_{\mu}\) becomes the local Hardy operator \(\mathcal{L}^{1}_{\mu}:=-\Delta+\frac{\mu}{|x|^{2}}\) and problem (1.1) reduces to the following semilinear elliptic equation \[\begin{cases}\mathcal{L}^{1}_{\mu}u+g(u)=\tilde{\nu}&\text{in }\Omega,\\ u=0&\text{on }\partial\Omega.\end{cases} \tag{1.2}\] In the free-potential case, namely \(\mu=0\), fundamental contributions were due to Brezis [6] and Benilan and Brezis [2]. When \(N\geq 3\), it was proved that if \(g:\mathbb{R}\to\mathbb{R}\) satisfies the _integral subcritical assumption_ \[\int_{1}^{+\infty}(g(s)-g(-s))s^{-1-\frac{N}{N-2}}ds<+\infty\] then problem (1.2) admits a unique weak solution. When \(N=2\), Vazquez [34] imposed a condition expressed in terms of exponential growth of \(g\) under which there exists a unique weak solution of (1.2) with \(\mu=0\).
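For instance, for the model power nonlinearity \(g(t)=|t|^{p-1}t\) with \(p>1\) (a standard computation, included here only for illustration), one has \(g(s)-g(-s)=2s^{p}\) for \(s\geq 1\), hence \[\int_{1}^{+\infty}(g(s)-g(-s))s^{-1-\frac{N}{N-2}}ds=2\int_{1}^{+\infty}s^{p-1-\frac{N}{N-2}}ds<+\infty\quad\Longleftrightarrow\quad p<\frac{N}{N-2},\] so the integral subcritical assumption holds precisely in the range \(p<\frac{N}{N-2}\); this is consistent with the capacitary characterization of Baras and Pierre recalled below for \(p\geq\frac{N}{N-2}\).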
Later on, Baras and Pierre [1] studied (1.2) with \(g(u)=|u|^{p-1}u\) for \(p>1\) and they discovered that for \(p\geq\frac{N}{N-2}\), the problem is solvable if and only if \(\tilde{\nu}\) is absolutely continuous with respect to the Bessel capacity \(c_{2,p^{\prime}}\) with \(p^{\prime}=\frac{p}{p-1}\). Since then, significant developments on problem (1.2) in different directions have been established; see e.g. [7, 28, 36]. In the case \(\mu\neq 0\), problem (1.2) was studied in [23, 14, 4] in connection with the Hardy inequality [8, 35]. Thanks to a new notion of weak solutions of \(\mathcal{L}^{1}_{\mu}u=0\) combined with a dual formulation of the equation introduced in [10], the authors of the paper [11] investigated problem (1.2) and proved an existence and uniqueness result provided that \(g\) satisfies a subcritical integral assumption and the weak \(\Delta_{2}\)-condition. Moreover, when \(g(u)=|u|^{p-1}u\) with \(p>1\), they gave necessary and sufficient conditions for the existence of a weak solution to (1.2). For semilinear elliptic equations with more general potentials, we refer to [22]. When \(s\in(0,1)\) and \(\mu=0\), problem (1.1) becomes the fractional semilinear elliptic equation \[\begin{cases}(-\Delta)^{s}u+g(u)=\tilde{\nu}&\text{in}\,\Omega,\\ u=0&\text{in}\,\mathbb{R}^{N}\setminus\Omega,\end{cases}\] which has been studied in [12] for Radon measures \(\tilde{\nu}\). The readers are referred to [27] for the case \(g=0\) and to [32] for boundary measures. Now we return to problem (1.1), which is driven by the fractional Hardy operator \(\mathcal{L}^{s}_{\mu}\). This operator arises in physical models related to the relativistic Schrödinger operator with Coulomb potential (see [21, 19]), in the study of Hardy inequalities and Hardy-Lieb-Thirring inequalities (see, e.g., [18, 17, 33]), and is closely related to the fractional Hardy inequality \[\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|\varphi(x)-\varphi(y)|^{2}}{|x-y|^{N+2s}}dydx+\mu_{0}\int_{\mathbb{R}^{N}}\frac{|\varphi(x)|^{2}}{|x|^{2s}}dx\geq 0,\quad\forall\varphi\in C_{0}^{\infty}(\mathbb{R}^{N}), \tag{1.3}\] where the best constant in (1.3) is explicitly determined by (see, e.g., [18]) \[\mu_{0}=-2^{2s}\frac{\Gamma^{2}(\frac{N+2s}{4})}{\Gamma^{2}(\frac{N-2s}{4})}.\] Related inequalities can be found in [3, 18, 17]. Note that when \(\mu\geq\mu_{0}\), the fractional Hardy operator \(\mathcal{L}_{\mu}^{s}\) is positive definite. The potential \(\mu|x|^{-2s}\) is of the same homogeneity \(-2s\) as \((-\Delta)^{s}\), hence it cannot be understood as a lower order perturbation of \((-\Delta)^{s}\). Moreover, in non-relativistic quantum mechanics, this potential marks the borderline between regular potentials (for which ordinary stationary states exist) and singular potentials (for which the energy is not bounded from below); therefore it may give rise to anomalous phenomena (see [20]). Further properties of \(\mathcal{L}_{\mu}^{s}\) were obtained in [5, 25, 29]. Recent years have witnessed a growing interest in the study of elliptic equations with fractional Hardy potentials, evidenced by [16, 30, 21, 31, 37, 13, 9]. Let us recall relevant results on the linear equation with a fractional Hardy potential.
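Before recalling these results, we record an elementary consistency check on the constant \(\mu_{0}\) (this observation is added for illustration and is not part of the original argument): for \(N\geq 3\), formally setting \(s=1\) and using \(\Gamma(z+1)=z\Gamma(z)\) with \(z=\frac{N-2}{4}\) gives \[-2^{2}\frac{\Gamma^{2}(\frac{N+2}{4})}{\Gamma^{2}(\frac{N-2}{4})}=-4\left(\frac{N-2}{4}\right)^{2}=-\frac{(N-2)^{2}}{4},\] which is precisely the optimal constant in the classical Hardy inequality associated with the local operator \(\mathcal{L}^{1}_{\mu}\).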
It was shown in [13] that for \(\mu\geq\mu_{0}\) the equation \[\mathcal{L}_{\mu}^{s}u=0\quad\text{in}\ \,\mathbb{R}^{N}\setminus\{0\}\] has two distinct radial solutions \[\Phi_{s,\mu}(x)=\begin{cases}|x|^{\tau_{-}(s,\mu)}&\text{if}\ \mu>\mu_{0}\\ |x|^{-\frac{N-2s}{2}}|\ln|x||&\text{if}\ \mu=\mu_{0}\end{cases}\quad\text{and}\ \,\Gamma_{s,\mu}(x)=|x|^{\tau_{+}(s,\mu)}\ \text{ for }x\in\mathbb{R}^{N}\setminus\{0\}, \tag{1.4}\] where \(\tau_{-}(s,\mu)\leq\tau_{+}(s,\mu)\). Additional properties of \(\tau_{-}(s,\mu)\) and \(\tau_{+}(s,\mu)\) were given in [13, Proposition 1.2]. _In the remainder of the paper, when there is no ambiguity, we write for short \(\tau_{+}\) and \(\tau_{-}\) instead of \(\tau_{+}(s,\mu)\) and \(\tau_{-}(s,\mu)\)._ Note that for any \(\xi\in C_{c}^{2}(\mathbb{R}^{N})\), \[\int_{\mathbb{R}^{N}}\Phi_{s,\mu}(-\Delta)^{s}_{\tau_{+}}\xi\,dx=c_{s,\mu}\xi(0),\] where \(c_{s,\mu}>0\) and \((-\Delta)^{s}_{\tau_{+}}\) denotes the dual operator of \(\mathcal{L}_{\mu}^{s}\), which is a weighted fractional Laplacian given by \[(-\Delta)^{s}_{\tau_{+}}v(x):=C_{N,s}\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}^{N}\setminus B_{\epsilon}(x)}\frac{v(x)-v(y)}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy.\] By [13, Theorem 4.14], there exist a positive constant \(c=c(N,s,\mu,\Omega)\) and a nonnegative function \(\Phi_{s,\mu}^{\Omega}\in W^{s,2}_{\text{loc}}(\mathbb{R}^{N}\setminus\{0\})\) such that \(\Phi_{s,\mu}^{\Omega}=0\) in \(\mathbb{R}^{N}\setminus\Omega\), \[\lim_{|x|\to 0^{+}}\frac{\Phi_{s,\mu}^{\Omega}(x)}{\Phi_{s,\mu}(x)}=1\quad\text{and}\quad\Phi_{s,\mu}^{\Omega}(x)\leq c|x|^{\tau_{-}},\quad\forall x\in\Omega\setminus\{0\}. \tag{1.5}\] Moreover, for any \(v\in C_{0}^{\infty}(\Omega\setminus\{0\})\) \[\ll\Phi_{s,\mu}^{\Omega},v\gg_{\mu}:=\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{\big{(}\Phi_{s,\mu}^{\Omega}(x)-\Phi_{s,\mu}^{\Omega}(y)\big{)}\big{(}v(x)-v(y)\big{)}}{|x-y|^{N+2s}}dydx+\mu\int_{\Omega}\frac{\Phi_{s,\mu}^{\Omega}(x)v(x)}{|x|^{2s}}dx=0\] and for any \(\psi\in C_{0}^{1,1}(\Omega)\) \[\int_{\Omega}\Phi_{s,\mu}^{\Omega}(-\Delta)^{s}_{\tau_{+}}\psi\,dx=c_{s,\mu}\,\psi(0),\] where \(c_{s,\mu}\) is a constant given in [13, (1.15)]. In a recent paper [9], the authors of the present paper set up an appropriate distributional framework to investigate linear problems of the form \[\begin{cases}\mathcal{L}^{s}_{\mu}u=\tilde{\nu}&\text{in}\;\Omega,\\ \quad u=0&\text{in}\;\mathbb{R}^{N}\setminus\Omega,\end{cases} \tag{1.6}\] where \(\tilde{\nu}\) is a bounded measure on \(\Omega\). The approach in [9] is to analyze the associated weighted fractional Laplace operator \((-\Delta)^{s}_{\gamma}\) which is defined by \[(-\Delta)^{s}_{\gamma}v(x)=C_{N,s}\lim_{\delta\to 0^{+}}\int_{\mathbb{R}^{N}\setminus B_{\delta}(x)}\frac{v(x)-v(y)}{|x-y|^{N+2s}}\,|y|^{\gamma}dy, \tag{1.7}\] where \(\gamma\in\big{[}-\frac{N-2s}{2},2s\big{)}\). In particular, \((-\Delta)^{s}_{0}\) reduces to the fractional Laplacian. From the integro-differential form of the weighted fractional Laplacian, a natural restriction for the function \(v\) is \[\|v\|_{L_{2s-\gamma}(\mathbb{R}^{N})}:=\int_{\mathbb{R}^{N}}\frac{|v(x)|}{(1+|x|)^{N+2s-\gamma}}dx<+\infty.\] The weighted Sobolev space associated to \((-\Delta)^{s}_{\gamma}\) is \(H^{s}_{0}(\Omega;|x|^{\gamma})\), which is defined as the closure of the functions in \(C^{\infty}(\mathbb{R}^{N})\) with compact support in \(\Omega\) under the norm \[\|u\|_{H^{s}_{0}(\Omega;|x|^{\gamma})}:=\sqrt{\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}|y|^{\gamma}dy|x|^{\gamma}dx}. \tag{1.8}\] Note that \(H^{s}_{0}(\Omega;|x|^{\gamma})\) is a Hilbert space with the inner product \[\langle u,v\rangle_{s,\gamma}:=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{\big{(}u(x)-u(y)\big{)}\big{(}v(x)-v(y)\big{)}}{|x-y|^{N+2s}}|y|^{\gamma}dy|x|^{\gamma}dx. \tag{1.9}\] _The results in [9], which will be recalled in Section 2, provide a basic framework for the study of problem (1.1) with Radon measures._ Semilinear equations with a fractional Hardy potential have been a research objective in numerous papers; see e.g. [16, 30, 21, 31, 37]. However, the above-mentioned works focus only on the case of source nonlinearities and rely on variational methods. To the best of our knowledge, to date, a profound understanding of the absorption case, namely equation (1.1), is still lacking. The interplay between the fractional Hardy operator, the absorption nonlinearity and the measure datum generates different types of substantial difficulties and requires a different approach. In this paper, we perform a deep analysis of this interplay and develop a theory for (1.1) in measure frameworks. The main ingredients include recent results in [9] (see Subsection 2.1), the local behavior near the origin of solutions to the Poisson equation (see Proposition 2.5) obtained by adapting ideas in [15] and in [27], and two-sided estimates of approximating solutions expressed in terms of the fundamental solution \(\Phi^{\Omega}_{s,\mu}\) (see Lemma 5.1).

### Framework, notion of solutions and main results

Let \(\Omega\) be a bounded domain in \(\mathbb{R}^{N}\) containing the origin and \(d(x)=\operatorname{dist}(x,\partial\Omega)\). For \(q\in[1,+\infty)\), \(\alpha,\beta\in\mathbb{R}\), we denote by \(L^{q}(\Omega;d(x)^{\alpha}|x|^{\beta})\) the weighted Lebesgue space of functions \(v:\Omega\to\mathbb{R}\) such that \[\|v\|_{L^{q}(\Omega;d(x)^{\alpha}|x|^{\beta})}:=\left(\int_{\Omega}|v|^{q}d(x)^{\alpha}|x|^{\beta}dx\right)^{\frac{1}{q}}<+\infty.\] We denote by \(\mathfrak{M}(\Omega;d(x)^{\alpha}|x|^{\beta})\) _(resp. \(\mathfrak{M}(\Omega\setminus\{0\};d(x)^{\alpha}|x|^{\beta})\)) the space of Radon measures \(\nu\) on \(\Omega\) (resp. \(\Omega\setminus\{0\}\)) such that_ \[\|\nu\|_{\mathfrak{M}(\Omega;d(x)^{\alpha}|x|^{\beta})}:=\int_{\Omega}d(x)^{\alpha}|x|^{\beta}\,d|\nu|<+\infty,\] \[(\text{resp. }\|\nu\|_{\mathfrak{M}(\Omega\setminus\{0\};d(x)^{\alpha}|x|^{\beta})}:=\int_{\Omega\setminus\{0\}}d(x)^{\alpha}|x|^{\beta}\,d|\nu|<+\infty)\] _and by \(\mathfrak{M}^{+}(\Omega;d(x)^{\alpha}|x|^{\beta})\) (resp. \(\mathfrak{M}^{+}(\Omega\setminus\{0\};d(x)^{\alpha}|x|^{\beta})\)) its positive cone._ In order to specify the notion of weak solutions, we first introduce the space of test functions.

**Definition 1.1**.: _Assume \(\Omega\subset\mathbb{R}^{N}\) is a bounded domain satisfying the exterior ball condition and containing the origin. For \(b<2s-\tau_{+}\), we denote by \(\mathbf{X}_{\mu}(\Omega;|x|^{-b})\) the space of functions \(\psi\) with the following properties:_ _(i) \(\psi\in H^{s}_{0}(\Omega;|x|^{\tau_{+}});\)_ _(ii) \((-\Delta)^{s}_{\tau_{+}}\psi\) exists a.e. in \(\Omega\setminus\{0\}\) and \(\sup_{x\in\Omega\setminus\{0\}}\left||x|^{b}(-\Delta)^{s}_{\tau_{+}}\psi(x)\right|<+\infty\);_ _(iii) for any compact set \(K\subset\Omega\setminus\{0\}\), there exist \(\delta_{0}>0\) and \(w\in L^{1}_{\mathrm{loc}}(\Omega\setminus\{0\})\) such that_ \[\sup_{0<\delta\leq\delta_{0}}|(-\Delta)^{s}_{\tau_{+},\delta}\psi|\leq w\ \ \text{a.e. in}\ K,\] _where_ \[(-\Delta)^{s}_{\tau_{+},\delta}\psi(x):=C_{N,s}\int_{\mathbb{R}^{N}\setminus B_{\delta}(x)}\frac{\psi(x)-\psi(y)}{|x-y|^{N+2s}}\,|y|^{\tau_{+}}dy,\quad x\in\Omega\setminus\{0\}.\]

The concentration of the measure datum \(\tilde{\nu}\) plays an important role in the study of problem (1.1). For any \(\tilde{\nu}\in\mathfrak{M}(\Omega;d(x)^{s}|x|^{\alpha})\), we decompose \(\tilde{\nu}=\nu+\ell\delta_{0}\) where \(\nu\in\mathfrak{M}(\Omega\setminus\{0\};d(x)^{s}|x|^{\alpha})\), \(\ell\in\mathbb{R}\) and \(\delta_{0}\) denotes the Dirac measure at the origin.

**Definition 1.2**.: _Assume \(g:\mathbb{R}\to\mathbb{R}\) is a nondecreasing continuous function such that \(g(0)=0\), \(\tilde{\nu}=\nu+\ell\delta_{0}\) where \(\nu\in\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) and \(\ell\in\mathbb{R}\). A function \(u\) is called a weak solution of problem (1.1) if for any \(b<2s-\tau_{+}\), \(u\in L^{1}(\Omega;|x|^{-b})\), \(g(u)\in L^{1}(\Omega;|x|^{\tau_{+}})\) and_ \[\int_{\Omega}u(-\Delta)^{s}_{\tau_{+}}\psi dx+\int_{\Omega}g(u)\psi|x|^{\tau_{+}}dx=\int_{\Omega\setminus\{0\}}\psi|x|^{\tau_{+}}d\nu+\ell\int_{\Omega}\Phi^{\Omega}_{s,\mu}(-\Delta)^{s}_{\tau_{+}}\psi dx,\ \ \forall\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b}). \tag{1.10}\]

For \(q>1\), we define \[\Lambda_{g,q}:=\int_{1}^{\infty}(g(t)-g(-t))t^{-1-q}dt. \tag{1.11}\] We also put \[p^{*}_{s,\mu}:=\min\left\{\frac{N}{N-2s},\frac{N+\tau_{+}}{-\tau_{-}}\right\}. \tag{1.12}\] We note that \(p^{*}_{s,\mu}\) is the Serrin type critical exponent. If \(\mu\geq 0\) then \(p^{*}_{s,\mu}=\frac{N}{N-2s}\), and if \(\mu_{0}\leq\mu<0\) then \(p^{*}_{s,\mu}=\frac{N+\tau_{+}}{-\tau_{-}}\). The first main result deals with the case when the datum \(\tilde{\nu}\) is concentrated away from the origin.

**Theorem 1.3**.: _Assume \(\mu\geq\mu_{0}\), \(\tilde{\nu}=\nu\in\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) and \(g\in C(\mathbb{R})\) is a nondecreasing function such that \(g(0)=0\) and \(\Lambda_{g,\frac{N}{N-2s}}<+\infty\). Then there exists a unique weak solution \(u_{\nu,0}\) to problem (1.1). Moreover, for any \(b<2s-\tau_{+}\),_ \[\|u_{\nu,0}\|_{L^{1}(\Omega;|x|^{-b})}\leq C(N,\Omega,s,\mu,b)\|\nu\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}. \tag{1.13}\]

Let us point out that the weak solution stated in the above theorem is constructed by using an approximation procedure in which the local behavior near the origin of the weak solution to the Poisson problem plays an essential role. It is worth noting that ODE techniques, which are efficient in the local case, no longer fit our setting well. Instead, we use a different approach based on the adaptation of ideas in [15] and [27], the assumption that \(\operatorname{supp}\nu\cap\{0\}=\emptyset\), and rather tedious calculations. The desired local behavior allows us to derive weak Lebesgue estimates, which in turn ensure the convergence of the approximating nonlinearities. The next result treats the case when \(\mu>\mu_{0}\) and \(\tilde{\nu}\) is concentrated at the origin.

**Theorem 1.4**.: _Assume \(\mu>\mu_{0}\), \(\tilde{\nu}=\ell\delta_{0}\) for some \(\ell\in\mathbb{R}\) and \(\Lambda_{g,\frac{N+\tau_{+}}{-\tau_{-}}}<+\infty\). Then there exists a unique weak solution \(u_{0,\ell}\) of problem (1.1)._

Let us sketch the idea of the proof. Since the datum \(\tilde{\nu}\) is concentrated at the origin, we first construct approximating solutions \(u_{\varepsilon}\) in \(\Omega\setminus B_{\varepsilon}(0)\) by using the standard theory of monotone operators.
Then we establish two-sided estimates on \(u_{\varepsilon}\) expressed in terms of \(\Phi^{\Omega}_{s,\mu}\), which allow us to derive the existence and the asymptotic behavior of a weak solution to problem (1.1). When \(\mu=\mu_{0}\), a logarithmic correction is involved, which is reflected in the following result. For \(q>1\), set \[\tilde{\Lambda}_{g,q}:=\int_{1}^{\infty}(g(|\ln t|t)-g(-|\ln t|t))t^{-1-q}dt. \tag{1.14}\] By an argument analogous to that in the proof of Theorem 1.4, we can show the following result.

**Theorem 1.5**.: _Assume \(\mu=\mu_{0}\), \(\tilde{\nu}=\ell\delta_{0}\) for some \(\ell\in\mathbb{R},\) and \(\tilde{\Lambda}_{g,\frac{N+2s}{N-2s}}<+\infty\). Then there exists a unique weak solution \(u_{0,\ell}\) of (1.1)._

Next we treat the case where the measures may have support on the whole domain \(\Omega\).

**Theorem 1.6**.: _Assume \(\mu>\mu_{0}\), \(\tilde{\nu}=\nu+\ell\delta_{0}\) for some \(\nu\in\mathfrak{M}^{+}(\Omega\setminus\{0\};|x|^{\tau_{+}})\), \(\ell\geq 0\) and \(\Lambda_{g,p^{*}_{s,\mu}}<+\infty\). Then there exists a unique weak solution of problem (1.1)._

The existence of the weak solution stated in Theorem 1.6 is also based on an approximation procedure; however, since \(\tilde{\nu}\) is supported on the whole domain \(\Omega\), the analysis is more complicated and requires treating the region near the origin and the region far from the origin at the same time. We stress that in the proof of Theorem 1.6, no \(\Delta_{2}\)-condition is needed. Therefore, Theorem 1.6 not only extends [12, Theorem B] to the setting of the fractional Hardy operator, but also improves that result in the sense that the \(\Delta_{2}\)-condition can be relaxed. Finally, in the case \(\mu=\mu_{0}\), we also obtain an existence and uniqueness result.

**Theorem 1.7**.: _Let \(\mu=\mu_{0}\), \(\tilde{\nu}=\nu+\ell\delta_{0}\) for some \(\nu\in\mathfrak{M}^{+}(\Omega\setminus\{0\};|x|^{\tau_{+}})\), \(\ell\geq 0\) and \(\Lambda_{g,\frac{N}{N-2s}}<\infty\). Then there exists a unique weak solution of (1.1)._

### Organization of the paper

The rest of this paper is organized as follows. In Section 2, we recall basic properties and derive the local behavior of solutions to the Poisson problem with a fractional Hardy potential and a source. Section 3 is devoted to the study of weak solutions to the semilinear problem involving Radon measures supported away from the origin. In Section 4, we address weak solutions of the semilinear problem involving Dirac masses concentrated at the origin. Finally, in Section 5, we construct weak solutions in the case where the measures have support in the whole domain \(\Omega\).

**Notation.** Throughout this paper, unless otherwise specified, we assume that \(\Omega\subset\mathbb{R}^{N}\) \((N\geq 2)\) is a bounded domain containing the origin and \(d(x)\) is the distance from \(x\in\Omega\) to \(\mathbb{R}^{N}\backslash\Omega\). We denote by \(c,C,c_{1},c_{2},\ldots\) positive constants that may vary from one appearance to another and depend only on the data. The notation \(c=c(a,b,\ldots)\) indicates the dependence of the constant \(c\) on \(a,b,\ldots\). For a function \(u\), we denote \(u^{+}=\max\{u,0\}\) and \(u^{-}=\max\{-u,0\}\). For a set \(A\subset\mathbb{R}^{N}\), the function \(\mathbf{1}_{A}\) denotes the indicator function of \(A\).

**Acknowledgements.** _H. Chen is supported by NNSF of China, No: 12071189 and 11431005, and by the Jiangxi Provincial Natural Science Foundation, No: 20212ACB211005._ _K. T. Gkikas acknowledges the support by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "2nd Call for H.F.R.I. Research Projects to support Post-Doctoral Researchers" (Project Number: 59)._ _P.-T. Nguyen is supported by Czech Science Foundation, Project GA22-17403S._

## 2. Preliminary

### Basic study of Poisson problem

In this subsection, we recall the basic properties of the Poisson problem with the fractional Hardy operator and Radon measures established in [9]. For \(\gamma\in[\frac{2s-N}{2},2s)\), we denote by \(H^{s}_{0}(\Omega;|x|^{\gamma})\) the closure of functions in \(C^{\infty}(\mathbb{R}^{N})\) with compact support in \(\Omega\) under the norm \[\|u\|_{H^{s}_{0}(\Omega;|x|^{\gamma})}:=\left(\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}|y|^{\gamma}dy|x|^{\gamma}dx\right)^{\frac{1}{2}}.\] Note that \(H^{s}_{0}(\Omega;|x|^{\gamma})\) is a Hilbert space with the inner product \[\langle u,v\rangle_{s,\gamma}:=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{\big{(}u(x)-u(y)\big{)}\big{(}v(x)-v(y)\big{)}}{|x-y|^{N+2s}}|y|^{\gamma}dy|x|^{\gamma}dx.\] For \(\mu\geq\mu_{0}\), let \(\mathbf{H}^{s}_{\mu,0}(\Omega)\) be the closure of functions in \(C^{\infty}(\mathbb{R}^{N})\) with compact support in \(\bar{\Omega}\) under the norm \[\|u\|_{\mu}:=\left(\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}dydx+\mu\int_{\Omega}\frac{u^{2}}{|x|^{2s}}dx\right)^{\frac{1}{2}}.\] This is a Hilbert space with the inner product \[\ll u,v\gg_{\mu}:=\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{\big{(}u(x)-u(y)\big{)}\big{(}v(x)-v(y)\big{)}}{|x-y|^{N+2s}}dydx+\mu\int_{\Omega}\frac{u(x)v(x)}{|x|^{2s}}dx.\] The next result provides a relation between the above spaces.

**Theorem 2.1** ([9, Theorem 1.1]).: _Assume \(\Omega\) is a bounded Lipschitz domain containing the origin._ \((i)\) _For any \(\gamma\in[\frac{2s-N}{2},2s)\) and \(\mu\geq\mu_{0}\), the space \(C^{\infty}_{0}(\Omega\setminus\{0\})\) is dense in \(H^{s}_{0}(\Omega;|x|^{\gamma})\) and in \(\mathbf{H}^{s}_{\mu,0}(\Omega)\)._ \((ii)\) _For any \(\gamma\in[\frac{2s-N}{2},2s)\), there is \(\mu\geq\mu_{0}\) such that \(\tau_{+}(s,\mu)=\gamma\) and_ \[H^{s}_{0}(\Omega;|x|^{\gamma})=\big{\{}|x|^{-\gamma}u:u\in\mathbf{H}^{s}_{\mu,0}(\Omega)\big{\}}. \tag{2.1}\] \((iii)\) _Let \(\gamma\in[\frac{2s-N}{2},2s)\), \(\beta<2s\) and \(1\leq q<\min\Big{\{}\frac{2N-2\beta}{N-2s},\ \frac{2N}{N-2s}\Big{\}}\). Then there exists a positive constant \(c=c(N,\Omega,s,\gamma,\beta,q)\) such that_ \[\big{\|}|\cdot|^{\gamma}v\big{\|}_{L^{q}(\Omega;|x|^{-\beta})}\leq c\,\|v\|_{s,\gamma},\quad\forall v\in H^{s}_{0}(\Omega;|x|^{\gamma}). \tag{2.2}\]

Firstly, a sharp condition on the source is considered for variational solutions of the Poisson problem \[\left\{\begin{aligned} (-\Delta)_{\gamma}^{s}u&=f\qquad\text{in }\Omega,\\ u&=0\qquad\text{in }\mathbb{R}^{N}\setminus\Omega.\end{aligned}\right. \tag{2.3}\] Here \(u\) is called a variational solution of (2.3) if \(u\in H^{s}_{0}(\Omega;|x|^{\gamma})\) and \[\langle u,\xi\rangle_{s,\gamma}=(f,\xi)_{\gamma}\quad\forall\,\xi\in H^{s}_{0}(\Omega;|x|^{\gamma}),\] where \((f,\xi)_{\gamma}:=\int_{\Omega}f\xi|x|^{\gamma}dx\).

**Theorem 2.2** ([9, Theorem 1.3]).: _Assume \(\gamma\in[\frac{2s-N}{2},2s)\), \(\alpha\in\mathbb{R}\) and_ \[p>\max\Big{\{}\frac{2N}{N+2s},\,\frac{2N+2\alpha}{N+2s},\;1+\frac{\alpha}{2s}\Big{\}}. \tag{2.4}\] _For any \(f\in L^{p}(\Omega;|x|^{\alpha})\), problem (2.3) has a unique variational solution \(u\).
Moreover, there exists a constant \(c=c(N,\Omega,s,\gamma,\alpha,p)\) such that_ \[\left\|u\right\|_{s,\gamma}\leq c\,\|f\|_{L^{p}(\Omega;|x|^{\alpha})}. \tag{2.5}\] _In addition, the following Kato type inequality holds_ \[\langle u^{+},\xi\rangle_{s,\gamma}\leq(f\mathrm{sign}^{+}(u),\xi)_{\gamma},\quad\forall\,0\leq\xi\in H^{s}_{0}(\Omega;|x|^{\gamma}). \tag{2.6}\]

It is easy to check that for \(\alpha<2s\), (2.4) reduces to \[p>\max\Big{\{}\frac{2N}{N+2s},\,\frac{2N+2\alpha}{N+2s}\Big{\}}.\] The Poisson problem involving \(\mathcal{L}^{s}_{\mu}\) of the form \[\left\{\begin{aligned} \mathcal{L}^{s}_{\mu}u&=f&&\text{in }\Omega,\\ u&=0&&\text{in }\mathbb{R}^{N}\setminus\Omega,\end{aligned}\right. \tag{2.7}\] where \(f\in L^{1}(\Omega;d^{s}|x|^{\tau_{+}})\), was studied in [9], in which the authors introduced a notion of weak solutions in the following sense: A function \(u\) is called a weak solution of (2.7) if for any \(b<2s-\tau_{+}\), \(u\in L^{1}(\Omega;|x|^{-b})\) and \(u\) satisfies \[\int_{\Omega}u(-\Delta)^{s}_{\tau_{+}}\psi dx=\int_{\Omega}f\psi|x|^{\tau_{+}}dx,\quad\forall\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b}).\] The solvability for (2.7) and Kato type inequalities were established in [9].

**Theorem 2.3** ([9, Theorem 1.7]).: _Assume \(f\in L^{1}(\Omega;d(x)^{s}|x|^{\tau_{+}})\). Then problem (2.7) admits a unique weak solution \(u\). For any \(b<2s-\tau_{+}\), there exists a positive constant \(C=C(N,\Omega,s,\mu,b)\) such that_ \[\|u\|_{L^{1}(\Omega;|x|^{-b})}\leq C\|f\|_{L^{1}(\Omega;d(x)^{s}|x|^{\tau_{+}})}.\] _Furthermore, there holds_ \[\int_{\Omega}u^{+}(-\Delta)^{s}_{\tau_{+}}\psi dx\leq\int_{\Omega}f\mathrm{sign}^{+}(u)\psi|x|^{\tau_{+}}dx,\quad\forall\,0\leq\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b}) \tag{2.8}\] _and_ \[\int_{\Omega}|u|(-\Delta)^{s}_{\tau_{+}}\psi dx\leq\int_{\Omega}f\mathrm{sign}(u)\psi|x|^{\tau_{+}}dx,\quad\forall\,0\leq\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b}). \tag{2.9}\] _As a consequence, the mapping \(f\mapsto u\) is nondecreasing. In particular, if \(f\geq 0\) then \(u\geq 0\) a.e. in \(\Omega\setminus\{0\}\)._

Note that (2.8) is Kato's inequality in our weighted distributional sense; it plays a pivotal role in the proof of the uniqueness of weak solutions to our problems involving Radon measures. In the case of measure data, weak solutions of \[\begin{cases}\mathcal{L}^{s}_{\mu}u=\nu+\ell\delta_{0}&\text{ in }\Omega,\\ u=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases} \tag{2.10}\] are understood in the sense of Definition 1.2 with \(g\equiv 0\). Existence, uniqueness and a priori estimates of weak solutions are recalled below.

**Theorem 2.4** ([9, Theorem 1.10]).: _Assume \(\ell\in\mathbb{R}\) and \(\nu\in\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})\). Then problem (2.10) admits a unique weak solution \(u\). For any \(b<2s-\tau_{+}\), there exists a positive constant \(C=C(N,\Omega,s,\mu,b)\) such that_ \[\|u\|_{L^{1}(\Omega;|x|^{-b})}\leq C\bigg{(}\|\nu\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}+\ell\bigg{)}.\] _Moreover, the mapping \((\nu,\ell)\mapsto u\) is nondecreasing. In particular, if \(\nu\geq 0\) and \(\ell\geq 0\) then \(u\geq 0\) a.e. in \(\Omega\setminus\{0\}\)._

### Local behavior at the origin

Our aim in this subsection is to establish the local behavior of the solution to the Poisson problem (2.10) when the measure has support away from the origin. More precisely, the main result of this subsection is stated below.
**Theorem 2.5**.: _Assume \(\Omega\subset\mathbb{R}^{N}\) is a bounded domain satisfying the exterior ball condition and containing the origin, \(\mu_{0}\leq\mu\leq 0\) and \(\nu\in\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) with \(\operatorname{dist}(\operatorname{supp}|\nu|,\{0\})=r>0\). Let \(u\) be the unique solution of (2.10) with \(\ell=0\). Then there exists a positive constant \(C=C(N,\Omega,s,\mu,r)\) such that_ \[\sup_{x\in B_{\frac{r}{4}}(0)\setminus\{0\}}(|x|^{-\tau_{+}}|u(x)|)\leq C\| \nu\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}. \tag{2.11}\] The proof of Theorem 2.5 consists of several intermediate technical lemmas which allow us to bound local subsolutions to \(\mathcal{L}^{s}_{\mu}u\leq 0\) from above in terms of their tails and mass in neighborhood of the origin. The first lemma, which is inspired by [15, Theorem 1.4], is the following. **Lemma 2.6**.: _Let \(x_{0}\in\Omega\) and \(r>0\) such that \(B_{r}(x_{0})\subset\Omega.\) We assume that \(v\in H^{s}_{0}(\Omega;|x|^{\tau_{+}})\) satisfies_ \[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{(v(x)-v(y))(\phi(x)-\phi(y))}{ |x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\leq 0,\quad\forall 0\leq\phi\in C ^{\infty}_{0}(B_{r}(x_{0})). \tag{2.12}\] _For any \(k\in\mathbb{R}\), put \(w=v-k\). Then for any nonnegative \(\phi\in C^{\infty}_{0}(B_{r}(x_{0}))\), there holds_ \[\begin{split}&\int_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\frac{\big{(}w ^{+}(x)\phi(x)-w^{+}(y)\phi(y)\big{)}^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{ \tau_{+}}dx\\ \leq&\,6\int_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\max \{w^{+}(x)^{2},w^{+}(y)^{2}\}\frac{(\phi(x)-\phi(y))^{2}}{|x-y|^{N+2s}}|y|^{ \tau_{+}}dy|x|^{\tau_{+}}dx\\ &+8\int_{B_{r}(x_{0})}w^{+}(x)\phi(x)^{2}|x|^{\tau_{+}}dx\Big{(} \sup_{y\in\operatorname{supp}\phi}\int_{\mathbb{R}^{N}\setminus B_{r}(x_{0})} \frac{w^{+}(x)}{|x-y|^{N+2s}}|x|^{\tau_{+}}dx\Big{)}.\end{split} \tag{2.13}\] Proof.: Let \(\phi\in C_{0}^{\infty}(B_{r}(x_{0}))\) and due to the standard density argument, we can take \(w^{+}\phi^{2}\) as a test function in (2.12) to obtain \[\begin{split} 0&\geq\int_{\mathbb{R}^{N}}\int_{ \mathbb{R}^{N}}\frac{(v(x)-v(y))(w^{+}(x)\phi(x)^{2}-w^{+}(y)\phi(y)^{2})}{|x-y| ^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &=\int_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\frac{(v(x)-v(y))(w^{+}(x) \phi(x)^{2}-w^{+}(y)\phi(y)^{2})}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx \\ &\quad+2\int_{\mathbb{R}^{N}\setminus B_{r}(x_{0})}\int_{B_{r}(x _{0})}\frac{(v(y)-v(x))w^{+}(y)\phi(y)^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{ \tau_{+}}dx.\end{split} \tag{2.14}\] We note that \((v(y)-v(x))w^{+}(y)\geq-w^{+}(x)w^{+}(y)\) for any \(x,y\in\mathbb{R}^{N}\), hence \[\begin{split}&\int_{\mathbb{R}^{N}\setminus B_{r}(x_{0})}\int_{ B_{r}(x_{0})}\frac{(v(y)-v(x))w^{+}(y)\phi(y)^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{ \tau_{+}}dx\\ &\geq-\int_{\mathbb{R}^{N}\setminus B_{r}(x_{0})}\int_{B_{r}(x_{ 0})}\frac{w^{+}(x)w^{+}(y)\phi(y)^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{ +}}dx\\ &=-\int_{B_{r}(x_{0})}w^{+}(y)\phi(y)^{2}\int_{\mathbb{R}^{N} \setminus B_{r}(x_{0})}\frac{w^{+}(x)}{|x-y|^{N+2s}}|x|^{\tau_{+}}dx|y|^{\tau_{ +}}dy\\ &\geq-\int_{B_{r}(x_{0})}w^{+}(y)\phi(y)^{2}dy\Big{(}\sup_{y\in \operatorname{supp}\phi}\int_{\mathbb{R}^{N}\setminus B_{r}(x_{0})}\frac{w^{+ }(x)}{|x-y|^{N+2s}}|x|^{\tau_{+}}dx\Big{)}.\end{split} \tag{2.15}\] Next, for any \(x,y\in\mathbb{R}^{N}\), we can show that \[(v(x)-v(y))(w^{+}(x)\phi(x)^{2}-w^{+}(y)\phi(y)^{2})\geq(w^{+}(x)-w^{+}(y))(w^ {+}(x)\phi(x)^{2}-w^{+}(y)\phi(y)^{2}). 
\tag{2.16}\] Combining (2.14)-(2.16) yields \[\begin{split}&\int_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\frac{(w^{+}(x) -w^{+}(y))(w^{+}(x)\phi(x)^{2}-w^{+}(y)\phi(y)^{2})}{|x-y|^{N+2s}}|y|^{\tau_{+} }dy|x|^{\tau_{+}}dx\\ \leq& 2\int_{B_{r}(x_{0})}w^{+}(y)\phi(y)^{2}dy \Big{(}\sup_{y\in\operatorname{supp}\phi}\int_{\mathbb{R}^{N}\setminus B_{r}(x _{0})}\frac{w^{+}(x)}{|x-y|^{N+2s}}|x|^{\tau_{+}}dx\Big{)}.\end{split} \tag{2.17}\] Take arbitrary \(x,y\in\mathbb{R}^{N}\). If \(\phi(x)\leq\phi(y)\) then we obtain \[(w^{+}(x)-w^{+}(y))(w^{+}(x)\phi(x)^{2}-w^{+}(y)\phi(y)^{2})\] \[= (w^{+}(x)-w^{+}(y))^{2}\phi(y)^{2}+w^{+}(x)(w^{+}(x)-w^{+}(y))( \phi(x)^{2}-\phi(y)^{2})\] \[\geq \frac{1}{2}(w^{+}(x)-w^{+}(y))^{2}\phi(y)^{2}-2(w^{+}(x))^{2}( \phi(x)-\phi(y))^{2}.\] If \(\phi(x)\geq\phi(y)\) and \(w^{+}(x)\geq w^{+}(y)\) then \[(w^{+}(x)-w^{+}(y))(w^{+}(x)\phi(x)^{2}-w^{+}(y)\phi(y)^{2})\geq(w^{+}(x)-w^{ +}(y))^{2}\phi(x)^{2}.\] Hence, in any case we have that \[(w^{+}(x)-w^{+}(y))(w^{+}(x)\phi(x)^{2}-w^{+}(y)\phi(y)^{2}) \geq\frac{1}{2}(w^{+}(x)-w^{+}(y))^{2}\max\{\phi(x)^{2},\phi(y)^ {2}\}\] \[\quad-2\max\{(w^{+}(x))^{2},(w^{+}(y))^{2}\}(\phi(x)-\phi(y))^{2}.\] On the other hand, we find that \[(w^{+}(x)\phi(x)-w^{+}(y)\phi(y))^{2} \leq 2(w^{+}(x)-w^{+}(y))^{2}\max\{\phi(x)^{2},\phi(y)^{2}\}\] \[\quad+2\max\{(w^{+}(x))^{2},(w^{+}(y))^{2}\}(\phi(x)-\phi(y))^{2}.\] Combining the last two displays leads to \[\begin{split}(w^{+}(x)\phi(x)-w^{+}(y)\phi(y))^{2}&\leq 4 (w^{+}(x)-w^{+}(y))(w^{+}(x)\phi(x)^{2}-w^{+}(y)\phi(y)^{2})\\ &\qquad\qquad\qquad+6\max\{(w^{+})^{2}(x),(w^{+})^{2}(y)\}(\phi(x )-\phi(y))^{2}.\end{split} \tag{2.18}\] Finally, plugging (2.17) into (2.18), we derive (2.13). In order to proceed further, we recall that if \(\mu_{0}<\mu\) then for \(u\in C_{0}^{\infty}(\Omega)\) \[\begin{split}\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{ \mathbb{R}^{N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}dydx+\mu\int_{\mathbb{R}^{N} }\frac{|u|^{2}}{|x|^{2s}}dx&\geq\frac{C_{N,s}}{2}\frac{\mu_{0}- \mu}{\mu_{0}}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{2} }{|x-y|^{N+2s}}dydx\\ &\geq c\left(\int_{\Omega}|u|^{\frac{2N}{N-2s}}dx\right)^{\frac{N -2s}{N}},\end{split} \tag{2.19}\] where \(c=c(N,s,\mu)>0\). Setting \(u=|x|^{\tau_{+}}v\), for any \(\mu>\mu_{0}\), we have that for \(u\in C_{0}^{\infty}(\Omega)\) \[\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|v(x)-v(y)|^{ 2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\geq c\left(\int_{\Omega}(|x| ^{\tau_{+}}|v|)^{\frac{2N}{N-2s}}dx\right)^{\frac{N-2s}{N}}. \tag{2.20}\] If \(\mu=\mu_{0}\) then by [33, Theorem 3], there exists a positive constant \(C=C(N,s)\) such that for \(u\in C_{0}^{\infty}(\Omega)\), \[\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{ 2}}{|x-y|^{N+2s}}dydx+\mu_{0}\int_{\mathbb{R}^{N}}\frac{|u|^{2}}{|x|^{2s}}dx \geq C(N,s)\left(\int_{\Omega}|u|^{\frac{2N}{N-2s}}X(|x|)^{\frac{2(N-s)}{N-2s }}dx\right)^{\frac{N-2s}{N}},\] where \(X(t)=1+\ln(\sup_{x\in\Omega}|x|)-\ln t\). 
Setting \(u=|x|^{\frac{2s-N}{2}}v\), Theorem 2.1 implies that \[\begin{split}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|v(x)-v(y)|^{2}}{|x-y|^{N+2s}}|y|^{\frac{2s-N}{2}}dy\,|x|^{\frac{2s-N}{2}}dx&\geq C(N,s)\left(\int_{\Omega}|v|^{\frac{2N}{N-2s}}|x|^{-N}X(|x|)^{\frac{2(N-s)}{N-2s}}dx\right)^{\frac{N-2s}{N}}\\ &\geq C(N,s,\Omega)\left(\int_{\mathbb{R}^{N}}|v|^{\frac{2N}{N-2s}}|x|^{\frac{2s-N}{2}}dx\right)^{\frac{N-2s}{N}}.\end{split} \tag{2.21}\] By Theorem 2.1, inequalities (2.20)-(2.21) are valid for any \(v\in H_{0}^{s}(\Omega;|x|^{\tau_{+}})\). Next, we introduce a notion of the tail adapted to the context of the fractional Hardy potential. For any \(v\in H_{0}^{s}(\Omega;|x|^{\tau_{+}})\), the nonlocal tail of \(v\) in the ball \(B_{r}(x_{0})\) is defined by \[T(v;x_{0},r):=r^{2s}\max\{|x_{0}|,r\}^{-\tau_{+}}\int_{\mathbb{R}^{N}\setminus B_{r}(x_{0})}\frac{|v||x|^{\tau_{+}}}{|x-x_{0}|^{N+2s}}dx. \tag{2.22}\]

**Proposition 2.7**.: _Assume \(\mu_{0}\leq\mu<0\), \(B_{r}(x_{0})\subset\Omega\) and \(v\in H_{0}^{s}(\Omega;|x|^{\tau_{+}})\) satisfies (2.12). We additionally assume that \(|x_{0}|\geq\theta r\) with \(\theta\in(1,2)\) if \(x_{0}\neq 0\). Then there exists a positive constant \(C=C(\Omega,\mu,N,s,\gamma)\) such that_ \[\sup_{B_{\frac{r}{2}}(x_{0})}(v-k)^{+}\leq\delta T((v-k)^{+};x_{0},\frac{r}{2})+C\delta^{-\frac{N}{4s}}\left(\int_{B_{r}(x_{0})}((v-k)^{+})^{2}|x|^{\tau_{+}}dx\right)^{\frac{1}{2}} \tag{2.23}\] _for any \(\delta\in(0,1]\) and \(k\in\mathbb{R}\)._

Proof.: The proof, which is based on a combination of the idea in [15, Theorem 1.1] and the fractional Hardy inequality, consists of several steps. _Step 1._ If \(x_{0}\neq 0\) then by the assumption \(|x_{0}|\geq\theta r\), we obtain \[\frac{\theta-1}{\theta}|x_{0}|\leq|x|\leq\frac{\theta+1}{\theta}|x_{0}|,\quad\forall x\in B_{r}(x_{0}). \tag{2.24}\] Consequently, \[\int_{B_{r}(x_{0})}|x|^{\tau_{+}}dx\approx r^{N}|x_{0}|^{\tau_{+}}. \tag{2.25}\] Let \(r^{\prime}<r\). For any \(x\in\mathbb{R}^{N}\setminus B_{r}(x_{0})\) and \(y\in B_{r^{\prime}}(x_{0})\), we have \[\frac{|x-x_{0}|}{|x-y|}\leq 1+\frac{|y-x_{0}|}{|x-y|}\leq 1+\frac{r^{\prime}}{r-r^{\prime}}=\frac{r}{r-r^{\prime}}. \tag{2.26}\] Let \(\zeta\in C_{0}^{\infty}(B_{r^{\prime}}(x_{0}))\). Then by (2.24), the standard fractional Sobolev inequality and (2.26), we obtain \[\Big{(}\int_{B_{r}(x_{0})}|\zeta|^{\frac{2N}{N-2s}}|x|^{\tau_{+}}dx\Big{)}^{\frac{N-2s}{N}}\leq C|x_{0}|^{\frac{(N-2s)\tau_{+}}{N}}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|\zeta(x)-\zeta(y)|^{2}}{|x-y|^{N+2s}}dydx\] \[=C|x_{0}|^{\frac{(N-2s)\tau_{+}}{N}}\int_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\frac{|\zeta(x)-\zeta(y)|^{2}}{|x-y|^{N+2s}}dydx\] \[\quad+2C|x_{0}|^{\frac{(N-2s)\tau_{+}}{N}}\int_{\mathbb{R}^{N}\setminus B_{r}(x_{0})}\int_{B_{r}(x_{0})}\frac{|\zeta(y)|^{2}}{|x-y|^{N+2s}}dydx\] \[\leq C|x_{0}|^{\frac{(N-2s)\tau_{+}}{N}-2\tau_{+}}\int_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\frac{|\zeta(x)-\zeta(y)|^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\] \[\quad+C|x_{0}|^{\frac{(N-2s)\tau_{+}}{N}-\tau_{+}}\left(\frac{r}{r-r^{\prime}}\right)^{N+2s}r^{-2s}\int_{B_{r}(x_{0})}|\zeta(x)|^{2}|x|^{\tau_{+}}dx, \tag{2.27}\] where \(C=C(N,s,\mu)\).
If \(x_{0}=0\), then by using a scaling argument, (2.20) and (2.21), we can show that \[\begin{split}&\left(\int_{B_{r}(0)}|\zeta|^{\frac{2N}{N-2s}}|x|^{\tau_{+}}dx\right)^{\frac{N-2s}{N}}\\ &\leq Cr^{\frac{N-2s}{N}(N+\tau_{+})-N+2s-2\tau_{+}}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|\zeta(x)-\zeta(y)|^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\leq Cr^{\frac{N-2s}{N}(N+\tau_{+})-N+2s-2\tau_{+}}\int_{B_{r}(0)}\int_{B_{r}(0)}\frac{|\zeta(x)-\zeta(y)|^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\quad+Cr^{\frac{N-2s}{N}(N+\tau_{+})-N-\tau_{+}}\left(\frac{r}{r-r^{\prime}}\right)^{N+2s}\int_{B_{r}(0)}|\zeta(x)|^{2}|x|^{\tau_{+}}dx,\end{split} \tag{2.28}\] where \(C=C(N,s,\mu,\Omega)\). For a function \(\varphi\in L^{1}(B_{r}(x_{0}))\), set \[\fint_{B_{r}(x_{0})}\varphi(x)|x|^{\tau_{+}}dx:=\left(\int_{B_{r}(x_{0})}|x|^{\tau_{+}}dx\right)^{-1}\int_{B_{r}(x_{0})}\varphi|x|^{\tau_{+}}dx.\] Then, in view of (2.25), estimates (2.27) and (2.28) can be written as follows \[\begin{split}\left(\fint_{B_{r}(x_{0})}|\zeta|^{\frac{2N}{N-2s}}|x|^{\tau_{+}}dx\right)^{\frac{N-2s}{N}}&\leq Cr^{2s}\max\{|x_{0}|,r\}^{-\tau_{+}}\fint_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\frac{|\zeta(x)-\zeta(y)|^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\quad+C\left(\frac{r}{r-r^{\prime}}\right)^{N+2s}\fint_{B_{r}(x_{0})}|\zeta(x)|^{2}|x|^{\tau_{+}}dx.\end{split} \tag{2.29}\] _Step 2._ For any \(j\in\mathbb{N}\cup\{0\}\) we set \[r_{j}=\frac{1}{2}(1+2^{-j})r,\quad\tilde{r}_{j}=\frac{r_{j}+r_{j+1}}{2},\quad B_{j}=B_{r_{j}}(x_{0})\quad\text{and}\;\;\tilde{B}_{j}=B_{\tilde{r}_{j}}(x_{0}).\] Let \(k\in\mathbb{R}\), \(\tilde{k}\in\mathbb{R}_{+}\), put \[k_{j}=k+(1-2^{-j})\tilde{k}\quad\text{and}\quad\tilde{k}_{j}=\frac{k_{j}+k_{j+1}}{2}.\] Set \[w_{j}=(v-k_{j})^{+}\quad\text{and}\quad\tilde{w}_{j}=(v-\tilde{k}_{j})^{+}.\] Let \(\phi_{j}\in C^{\infty}_{0}(\tilde{B}_{j})\) be such that \(0\leq\phi_{j}\leq 1\) in \(\tilde{B}_{j}\), \(\phi_{j}=1\) in \(B_{j+1}\) and \(|\nabla\phi_{j}|\leq\frac{2^{j+3}}{r}\) in \(\tilde{B}_{j}\). By (2.29), we have \[\begin{split}&\left(\fint_{B_{j}}|\tilde{w}_{j}\phi_{j}|^{\frac{2N}{N-2s}}|x|^{\tau_{+}}dx\right)^{\frac{N-2s}{N}}\\ &\qquad\leq Cr^{2s}\max\{|x_{0}|,r\}^{-\tau_{+}}\fint_{B_{j}}\int_{B_{j}}\frac{|\tilde{w}_{j}(x)\phi_{j}(x)-\tilde{w}_{j}(y)\phi_{j}(y)|^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\qquad\quad+C2^{j(N+2s)}\fint_{B_{j}}|\tilde{w}_{j}(y)\phi_{j}(y)|^{2}|y|^{\tau_{+}}dy.\end{split} \tag{2.30}\] We will estimate the first term on the right hand side of (2.30) by applying (2.13) with \(B_{j}\), \(\tilde{w}_{j}\), \(\phi_{j}\) in place of \(B_{r}(x_{0})\), \(w^{+}\), \(\phi\) respectively.
Hence, we obtain \[\begin{split}&\fint_{B_{j}}\int_{B_{j}}\frac{(\tilde{w}_{j}(x)\phi_{j}(x)-\tilde{w}_{j}(y)\phi_{j}(y))^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\leq C\fint_{B_{j}}\int_{B_{j}}\max\{\tilde{w}_{j}(x)^{2},\tilde{w}_{j}(y)^{2}\}\frac{(\phi_{j}(x)-\phi_{j}(y))^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\quad+C\fint_{B_{j}}\tilde{w}_{j}(x)\phi_{j}(x)^{2}|x|^{\tau_{+}}dx\left(\sup_{y\in\operatorname{supp}\phi_{j}}\int_{\mathbb{R}^{N}\setminus B_{j}}\frac{\tilde{w}_{j}(x)}{|x-y|^{N+2s}}|x|^{\tau_{+}}dx\right).\end{split} \tag{2.31}\] We note that for \(y\in\operatorname{supp}\phi_{j}\subset\tilde{B}_{j}\) and \(x\in\mathbb{R}^{N}\setminus B_{j}\), \[\frac{|x-x_{0}|}{|x-y|}\leq 1+\frac{\tilde{r}_{j}}{r_{j}-\tilde{r}_{j}}\leq 2^{j+4}. \tag{2.32}\] Thanks to the estimates \(\tilde{w}_{j}\leq\frac{w_{j}^{2}}{\tilde{k}_{j}-k_{j}}\), \(\tilde{w}_{j}\leq w_{j}\), estimate (2.32) and the definition (2.22) of the nonlocal tail of \(w_{0}\) in \(B_{\frac{r}{2}}(x_{0})\), we have \[\begin{split}& r^{2s}\max\{|x_{0}|,r\}^{-\tau_{+}}\fint_{B_{j}}\tilde{w}_{j}(x)\phi_{j}^{2}|x|^{\tau_{+}}dx\left(\sup_{y\in\operatorname{supp}\phi_{j}}\int_{\mathbb{R}^{N}\setminus B_{j}}\frac{\tilde{w}_{j}(x)}{|x-y|^{N+2s}}|x|^{\tau_{+}}dx\right)\\ &\leq Cr^{2s}\max\{|x_{0}|,r\}^{-\tau_{+}}2^{j(N+2s)}\fint_{B_{j}}\frac{w_{j}^{2}(x)}{\tilde{k}_{j}-k_{j}}|x|^{\tau_{+}}dx\int_{\mathbb{R}^{N}\setminus B_{j}}\frac{w_{j}(x)}{|x-x_{0}|^{N+2s}}|x|^{\tau_{+}}dx\\ &\leq C\frac{2^{j(N+2s+1)}}{\tilde{k}}\left(\fint_{B_{j}}w_{j}^{2}(x)|x|^{\tau_{+}}dx\right)T(w_{0};x_{0},\frac{r}{2}).\end{split} \tag{2.33}\] Next we estimate the first term on the right hand side of (2.31). If \(x_{0}\neq 0\), then by using the estimate \(|\nabla\phi_{j}|\leq\frac{2^{j+3}}{r}\) in \(\tilde{B}_{j}\), (2.24) and \(\tilde{w}_{j}\leq w_{j}\), we obtain a corresponding bound for this term, which we label (2.34); an analogous computation in the case \(x_{0}=0\) leads to the bound (2.38). Combining (2.30), (2.33), (2.34) and (2.38), we derive \[\begin{split}&\left(\fint_{B_{j}}|\tilde{w}_{j}\phi_{j}|^{\frac{2N}{N-2s}}|x|^{\tau_{+}}dx\right)^{\frac{N-2s}{N}}\\ &\leq C2^{j(N+2s+1)}\left(1+\frac{1}{\tilde{k}}T(w_{0};x_{0},\frac{r}{2})\right)\fint_{B_{j}}w_{j}(x)^{2}|x|^{\tau_{+}}dx.\end{split} \tag{2.39}\] _Step 3._ Since \[|\tilde{w}_{j}|^{\frac{2N}{N-2s}}\geq(k_{j+1}-\tilde{k}_{j})^{\frac{4s}{N-2s}}w_{j+1}^{2}\geq(2^{-j-2}\tilde{k})^{\frac{4s}{N-2s}}w_{j+1}^{2},\] it follows that (noticing that \(\phi_{j}=1\) in \(B_{j+1}\)) \[\left(\fint_{B_{j}}|\tilde{w}_{j}\phi_{j}|^{\frac{2N}{N-2s}}|x|^{\tau_{+}}dx\right)^{\frac{N-2s}{N}}\geq(2^{-j-2}\tilde{k})^{\frac{4s}{N}}\left(\fint_{B_{j+1}}w_{j+1}^{2}|x|^{\tau_{+}}dx\right)^{\frac{N-2s}{N}}. \tag{2.40}\] Set \[A_{j}:=\left(\fint_{B_{j}}w_{j}^{2}|x|^{\tau_{+}}dx\right)^{\frac{1}{2}},\] then by (2.40) and (2.39), we deduce \[(2^{-j-2}\tilde{k})^{\frac{4s}{N}}A_{j+1}^{\frac{2(N-2s)}{N}}\leq C2^{j(N+2s+1)}\left(1+\frac{T(w_{0};x_{0},\frac{r}{2})}{\tilde{k}}\right)A_{j}^{2}.\] Assuming \(\tilde{k}\geq\delta T(w_{0};x_{0},\frac{r}{2})\) for some \(\delta\in(0,1]\), we obtain \[\left(\frac{A_{j+1}}{\tilde{k}}\right)\leq C2^{j(\frac{N+2s+1}{2}+\frac{2s}{N})\frac{N}{N-2s}}\delta^{-\frac{N}{2(N-2s)}}\left(\frac{A_{j}}{\tilde{k}}\right)^{1+\frac{2s}{N-2s}}. \tag{2.41}\] We will show by induction that there exists \(\sigma>1\) such that \[A_{j}\leq\sigma^{-j}A_{0},\quad\forall j\in\mathbb{N}\cup\{0\}. \tag{2.42}\] Obviously, (2.42) is valid for \(j=0.\) We assume that it is true for \(j=l-1.\) By (2.41) we have \[A_{l}\leq C2^{l(\frac{N+2s+1}{2}+\frac{2s}{N})\frac{N}{N-2s}}\delta^{-\frac{N}{2(N-2s)}}\left(\frac{A_{l-1}}{\tilde{k}}\right)^{\frac{2s}{N-2s}}A_{l-1}\] \[\leq C2^{l(\frac{N+2s+1}{2}+\frac{2s}{N})\frac{N}{N-2s}}\delta^{-\frac{N}{2(N-2s)}}\sigma^{(1-l)\frac{N}{N-2s}}\left(\frac{A_{0}}{\tilde{k}}\right)^{\frac{2s}{N-2s}}A_{0}\] \[\leq C\sigma^{\frac{N}{N-2s}}\sigma^{-l}2^{l(\frac{N+2s+1}{2}+\frac{2s}{N})\frac{N}{N-2s}}\delta^{-\frac{N}{2(N-2s)}}\sigma^{-\frac{2s}{N-2s}l}\left(\frac{A_{0}}{\tilde{k}}\right)^{\frac{2s}{N-2s}}A_{0}.\] Taking \(\sigma=2^{(\frac{N+2s+1}{2}+\frac{2s}{N})\frac{N}{2s}}>1\) we obtain \[A_{l}\leq C\sigma^{\frac{N}{N-2s}}\sigma^{-l}\delta^{-\frac{N}{2(N-2s)}}\left(\frac{A_{0}}{\tilde{k}}\right)^{\frac{2s}{N-2s}}A_{0}.\] Hence it is enough to choose \(\tilde{k}\) such that \[C\sigma^{\frac{N}{N-2s}}\delta^{-\frac{N}{2(N-2s)}}\left(\frac{A_{0}}{\tilde{k}}\right)^{\frac{2s}{N-2s}}\leq 1. \tag{2.43}\] We choose \[\tilde{k}=\delta T(w_{0};x_{0},\frac{r}{2})+C^{\frac{N-2s}{2s}}A_{0}\delta^{\frac{-N}{4s}}\sigma^{\frac{N}{2s}},\] where \(C\) is the constant in (2.43); then \(A_{l}\leq\sigma^{-l}A_{0}\). Thus, by induction, we deduce (2.42). Letting \(j\to\infty\) in (2.42), we have that \(\lim_{j\to\infty}A_{j}=0\), namely \[(v-k-\tilde{k})^{+}=0\ \ a.e.\ \text{in}\ B_{\frac{r}{2}}(x_{0}),\] which implies (2.23). The proof is complete. Employing the above result and adapting the idea in [27, Corollary 2.1] to our setting, we obtain the following lemma.
**Lemma 2.8**.: _Let \(\mu_{0}\leq\mu\leq 0\) and let \(r>0\) be such that \(B_{r}(0)\subset\Omega\) and \(v\in H^{s}_{0}(\Omega;|x|^{\tau_{+}})\) satisfy (2.12). Then there exists a positive constant \(C=C(\Omega,N,s,\mu)\) such that_ \[\sup_{B_{\frac{r}{2}}}v^{+}\leq C\left(T(v^{+};0,\frac{r}{2})+\fint_{B_{r}}v^{+}(x)|x|^{\tau_{+}}dx\right).\] Proof.: Let \(\frac{1}{2}\leq t<\gamma\leq 1\) and \(z\in B_{tr}\setminus B_{\frac{r}{4}}\). We note that \(B_{\frac{(\gamma-t)r}{100}}(z)\subset B_{\gamma r}\). Then by applying (2.23) with \(B_{\frac{(\gamma-t)r}{100}}(z)\) and \(k=0\), we obtain \[\sup_{x\in B_{\frac{(\gamma-t)r}{200}}(z)}v^{+}\leq CT(v^{+};z,\frac{(\gamma-t)r}{200})+C\left(\fint_{B_{\frac{(\gamma-t)r}{100}}(z)}(v^{+}(x))^{2}|x|^{\tau_{+}}dx\right)^{\frac{1}{2}} \tag{2.44}\] \[\leq CT(v^{+};z,\frac{(\gamma-t)r}{200})+\frac{C}{(\gamma-t)^{\frac{N}{2}}}\left(\fint_{B_{\gamma r}}(v^{+}(x))^{2}|x|^{\tau_{+}}dx\right)^{\frac{1}{2}}.\] Now, since \(\frac{|x|}{|x-z|}\leq\frac{C}{\gamma-t}\) for any \(x\in\mathbb{R}^{N}\setminus B_{\frac{(\gamma-t)r}{200}}(z)\), by (2.22) and the Hölder inequality, we have \[T(v^{+};z,\frac{(\gamma-t)r}{200})\leq C(\gamma-t)^{2s}r^{2s-\tau_{+}}\int_{\mathbb{R}^{N}\setminus B_{\frac{(\gamma-t)r}{200}}(z)}\frac{v^{+}}{|x-z|^{N+2s}}|x|^{\tau_{+}}dx \tag{2.45}\] \[\leq C(\gamma-t)^{-N}r^{2s-\tau_{+}}\int_{(\mathbb{R}^{N}\setminus B_{\frac{(\gamma-t)r}{200}}(z))\setminus B_{\frac{r}{2}}}\frac{v^{+}}{|x|^{N+2s}}|x|^{\tau_{+}}dx\] \[+C(\gamma-t)^{-N}r^{-N-\tau_{+}}\int_{(\mathbb{R}^{N}\setminus B_{\frac{(\gamma-t)r}{200}}(z))\cap B_{\frac{r}{2}}}v^{+}|x|^{\tau_{+}}dx\] \[\leq C(\gamma-t)^{-N}\bigg{(}T(v^{+};0,\frac{r}{2})+\left(\fint_{B_{\gamma r}}(v^{+}(x))^{2}|x|^{\tau_{+}}dx\right)^{\frac{1}{2}}\bigg{)}.\] Combining (2.44) and (2.45) yields \[\sup_{x\in B_{tr}}v^{+}\leq C(\gamma-t)^{-N}\bigg{(}T(v^{+};0,\frac{r}{2})+\left(\fint_{B_{\gamma r}}(v^{+}(x))^{2}|x|^{\tau_{+}}dx\right)^{\frac{1}{2}}\bigg{)}. \tag{2.46}\] By Young's inequality, we have \[(\gamma-t)^{-N}\left(\fint_{B_{\gamma r}}(v^{+}(x))^{2}|x|^{\tau_{+}}dx\right)^{\frac{1}{2}}\leq(\gamma-t)^{-N}\left(\sup_{x\in B_{\gamma r}}v^{+}\fint_{B_{\gamma r}}v^{+}(x)|x|^{\tau_{+}}dx\right)^{\frac{1}{2}}\] \[\leq\frac{1}{2C}\sup_{x\in B_{\gamma r}}v^{+}+2C(\gamma-t)^{-2N}\fint_{B_{\gamma r}}v^{+}(x)|x|^{\tau_{+}}dx,\] where \(C\) is the constant in (2.46). Therefore \[\sup_{x\in B_{tr}}v^{+}\leq\frac{1}{2}\sup_{x\in B_{\gamma r}}v^{+}+2C^{2}(\gamma-t)^{-2N}\left(\fint_{B_{\gamma r}}v^{+}(x)|x|^{\tau_{+}}dx+T(v^{+};0,\frac{r}{2})\right). \tag{2.47}\] For any \(t\in[\frac{1}{2},1]\), we set \(f(t)=\sup_{x\in B_{tr}}v^{+}\) and \[c_{0}=2C^{2}\left(\fint_{B_{\gamma r}}v^{+}(x)|x|^{\tau_{+}}dx+T(v^{+};0,\frac{r}{2})\right).\] Then (2.47) can be written as \[f(t)\leq\frac{1}{2}f(\gamma)+\frac{c_{0}}{(\gamma-t)^{2N}}\quad\text{for all }\frac{1}{2}\leq t<\gamma\leq 1.\] Let \(\tau\in(0,1)\) and \(\alpha=2N.\) We consider the sequence \(t_{0}=t\) and \[t_{i+1}=t_{i}+(1-\tau)\tau^{i}(\gamma-t)=t_{0}+(1-\tau)(\gamma-t)\sum_{j=0}^{i}\tau^{j}\to\gamma.\] By iteration, we have \[f(t)=f(t_{0})\leq 2^{-k}f(t_{k})+\frac{c_{0}}{(1-\tau)^{\alpha}(\gamma-t)^{\alpha}}\sum_{i=0}^{k-1}2^{-i}\tau^{-i\alpha}.\] Choosing \(2^{-\frac{1}{\alpha}}<\tau<1\) and letting \(k\to\infty\) in the above expression, we obtain \[f(t)\leq C(\alpha,\tau)\frac{c_{0}}{(1-\tau)^{\alpha}(\gamma-t)^{\alpha}},\] which yields the desired inequality.
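For clarity, we note why the restriction \(2^{-\frac{1}{\alpha}}<\tau<1\) in the last step suffices (an elementary remark added for the reader's convenience): it makes the geometric series appearing in the iteration convergent, since \[\sum_{i=0}^{\infty}2^{-i}\tau^{-i\alpha}=\sum_{i=0}^{\infty}\big{(}2\tau^{\alpha}\big{)}^{-i}=\frac{1}{1-(2\tau^{\alpha})^{-1}}<+\infty\quad\text{whenever}\quad\tau^{\alpha}>\frac{1}{2},\ \text{that is,}\ \tau>2^{-\frac{1}{\alpha}}.\]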
Now we turn to the proof of Theorem 2.5.

Proof of Theorem 2.5.: Let \(\{\zeta_{\delta}\}_{\delta>0}\) be a sequence of standard mollifiers and denote \(\nu_{1,\delta}=\zeta_{\delta}*\nu^{+}\) and \(\nu_{2,\delta}=\zeta_{\delta}*\nu^{-}\). For \(\delta>0\), \(\nu_{i,\delta}\in C_{0}^{\infty}(\Omega\setminus\{0\})\), \(i=1,2\), and \(\operatorname{dist}(\operatorname{supp}\nu_{1,\delta},\{0\})>\frac{r}{2}\), \(\operatorname{dist}(\operatorname{supp}\nu_{2,\delta},\{0\})>\frac{r}{2}\). Moreover, \[\int_{\Omega}\nu_{1,\delta}|x|^{\tau_{+}}dx\to\int_{\Omega\setminus\{0\}}|x|^{\tau_{+}}d\nu^{+}\quad\text{and}\quad\int_{\Omega}\nu_{2,\delta}|x|^{\tau_{+}}dx\to\int_{\Omega\setminus\{0\}}|x|^{\tau_{+}}d\nu^{-}. \tag{2.48}\] Let \(u_{\delta}\) be the weak solution of \[\begin{cases}\mathcal{L}_{\mu}^{s}u=\nu_{1,\delta}-\nu_{2,\delta}&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{N}\setminus\Omega.\end{cases}\] Since \(\Omega\) is a bounded domain satisfying the exterior ball condition and containing the origin, by [9, Theorem 1.6], for any \(b<2s-\tau_{+}\), there holds \[\int_{\Omega}|u_{\delta}||x|^{-b}dx\leq C(N,\Omega,\mu,s,b)\int_{\Omega}(\nu_{1,\delta}+\nu_{2,\delta})|x|^{\tau_{+}}dx. \tag{2.49}\] Furthermore, in view of the proof of [9, Theorem 1.7], \(u_{\delta}\to u\) a.e. in \(\Omega\) and in \(L^{1}(\Omega;|x|^{-b})\). Put \(v_{\delta}=|x|^{-\tau_{+}}u_{\delta}\), then \(v_{\delta}\in H_{0}^{s}(\Omega;|x|^{\tau_{+}})\) and \(v_{\delta}\) satisfies \[\langle v_{\delta},\phi\rangle_{s,\tau_{+}}=\int_{\Omega}\big{(}\nu_{1,\delta}(x)-\nu_{2,\delta}(x)\big{)}\phi(x)|x|^{2\tau_{+}}dx,\quad\forall\phi\in H_{0}^{s}(\Omega;|x|^{\tau_{+}}).\] Hence, \[\langle v_{\delta},\phi\rangle_{s,\tau_{+}}=0,\quad\forall\phi\in C_{0}^{\infty}(B_{\frac{r}{2}}).\] By Lemma 2.8, the definition of the tail in (2.22) and the fact that \(v_{\delta}=0\) in \(\mathbb{R}^{N}\setminus\Omega\), we deduce that \[\sup_{x\in B_{\frac{r}{4}}\setminus\{0\}}v_{\delta}^{+}(x)\] \[\leq C(N,\Omega,s,\mu)\Big{(}T(v_{\delta}^{+};0,\frac{r}{4})+\fint_{B_{\frac{r}{4}}}v_{\delta}^{+}(x)|x|^{\tau_{+}}dx\Big{)}\] \[\leq C(N,\Omega,s,\mu)\Big{(}4^{N+2s}r^{-(N+\tau_{+})}\int_{\mathbb{R}^{N}\setminus B_{\frac{r}{4}}}v_{\delta}^{+}(x)|x|^{\tau_{+}}dx+\big{(}\int_{B_{\frac{r}{2}}}|x|^{\tau_{+}}dx\big{)}^{-1}\int_{B_{\frac{r}{2}}}v_{\delta}^{+}(x)|x|^{\tau_{+}}dx\Big{)}\] \[\leq C(N,\Omega,s,\mu,r)\int_{\Omega}v_{\delta}^{+}(x)|x|^{\tau_{+}}dx.\] Similarly, we can show that \[\sup_{x\in B_{\frac{r}{4}}\setminus\{0\}}v_{\delta}^{-}(x)\leq C(N,\Omega,s,\mu,r)\int_{\Omega}v_{\delta}^{-}(x)|x|^{\tau_{+}}dx.\] Adding the two preceding estimates and using estimate (2.49), for any \(b\in[0,2s-\tau_{+})\), we obtain \[\sup_{x\in B_{\frac{r}{4}}\setminus\{0\}}|v_{\delta}(x)|\leq C(N,\Omega,s,\mu,r)\int_{\Omega}|v_{\delta}||x|^{\tau_{+}}dx\] \[\leq C(N,\Omega,s,\mu,r)\int_{\Omega}|u_{\delta}||x|^{-b}dx\leq C(N,\Omega,s,\mu,r,b)\int_{\Omega}(\nu_{1,\delta}+\nu_{2,\delta})|x|^{\tau_{+}}dx.\] By letting \(\delta\to 0\) and employing the convergence (2.48) and the fact that \(v_{\delta}\to|x|^{-\tau_{+}}u\) a.e. in \(\Omega\) as \(\delta\to 0\), we derive (2.11).

## 3. The semilinear problem with interior measure data

### Estimates in the weighted weak Lebesgue space

We start this subsection by recalling the definition of the weak Lebesgue space.
For \(\alpha\in\mathbb{R}\) and \(1\leq q<\infty\), we denote by \(L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha})\) the weighted weak Lebesgue space (or weighted Marcinkiewicz space) defined by \[L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha}):=\left\{u\in L^{1}_{\rm loc}(\Omega\setminus\{0\}):\sup_{\lambda>0}\lambda^{q}\int_{\{x\in\Omega\setminus\{0\}:|u(x)|\geq\lambda\}}|x|^{\alpha}dx<+\infty\right\}.\] Denote \[\|u\|^{*}_{L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha})}:=\left(\sup_{\lambda>0}\lambda^{q}\int_{\{x\in\Omega\setminus\{0\}:|u(x)|\geq\lambda\}}|x|^{\alpha}dx\right)^{\frac{1}{q}}. \tag{3.1}\] Note that \(\|\cdot\|^{*}_{L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha})}\) is not a norm, but, for \(q>1\), it is equivalent to the norm \[\|u\|_{L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha})}:=\sup\left\{\frac{\int_{A}|u||x|^{\alpha}dx}{\left(\int_{A}|x|^{\alpha}dx\right)^{1-\frac{1}{q}}}:A\subset\Omega\setminus\{0\},\,A\text{ measurable, }0<\int_{A}|x|^{\alpha}dx<\infty\right\}.\] More precisely, \[\|u\|^{*}_{L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha})}\leq\|u\|_{L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha})}\leq\frac{q}{q-1}\,\|u\|^{*}_{L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha})}\,. \tag{3.2}\] The following strict continuous embeddings hold \[L^{q}(\Omega\setminus\{0\};|x|^{\alpha})\hookrightarrow L^{q}_{w}(\Omega\setminus\{0\};|x|^{\alpha})\hookrightarrow L^{m}(\Omega\setminus\{0\};|x|^{\alpha})\] for \(1\leq m<q<\infty\). **Proposition 3.1**.: _Assume \(\mu\geq\mu_{0}\) and let \(\nu\in\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) be such that_ \[\operatorname{dist}(\operatorname{supp}|\nu|,\{0\})>4r_{0}.\] _Let \(u\) be the unique solution of (2.10). Then \(u\in L_{w}^{\frac{N}{N-2s}}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) and_ \[\|u\|_{L_{w}^{\frac{N}{N-2s}}(\Omega\setminus\{0\};|x|^{\tau_{+}})}\leq C(N,\Omega,s,\mu,r_{0})\|\nu\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}. \tag{3.3}\] To prove Proposition 3.1, we need the following estimates on the level sets of \(u.\) **Lemma 3.2**.: _Assume \(\mu\geq\mu_{0}\), \(\nu\in\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) and let \(u\) be the unique solution of (2.10)._ _(i) If \(\mu_{0}\leq\mu<0\) then, for any \(\lambda>1\) and any \(r>0\) such that \(B_{4r}(0)\subset\Omega,\) there holds_ \[|\{x\in\Omega\setminus B_{r}(0):\;|u(x)|>\lambda\}|\leq C(N,\Omega,s,\mu,r)\lambda^{-\frac{N}{N-2s}}\|\nu\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}^{\frac{N}{N-2s}}. \tag{3.4}\] _(ii) If \(\mu\geq 0\) then, for any \(\lambda>1\),_ \[|\{x\in\Omega:\;|u(x)|>\lambda\}|\leq C(N,s,\mu)\lambda^{-\frac{N}{N-2s}}\|\nu\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}^{\frac{N}{N-2s}}. \tag{3.5}\] Proof.: In view of the proof of Theorem 2.5, we may assume that \(\nu\in C_{0}^{\infty}(\Omega).\) Then \(v=|x|^{-\tau_{+}}u\in H_{0}^{s}(\Omega;|x|^{\tau_{+}})\) and satisfies \[\langle v,\phi\rangle_{s,\tau_{+}}=\int_{\Omega}\phi(x)|x|^{\tau_{+}}\nu(x)dx,\quad\forall\phi\in H_{0}^{s}(\Omega;|x|^{\tau_{+}}).\] Let \(\lambda>0\). Taking \(v_{\lambda}=\max\{-\lambda,\min\{v,\lambda\}\}\) as a test function, we have \[\langle v,v_{\lambda}\rangle_{s,\tau_{+}}=\int_{\Omega}v_{\lambda}(x)|x|^{\tau_{+}}\nu(x)dx\leq\lambda\int_{\Omega}|x|^{\tau_{+}}|\nu(x)|dx.\] We see that \[(v(x)-v(y))(v_{\lambda}(x)-v_{\lambda}(y))\geq(v_{\lambda}(x)-v_{\lambda}(y))^{2},\quad\forall x,y\in\mathbb{R}^{N}.\] Hence from the two preceding inequalities, we obtain \[\langle v_{\lambda},v_{\lambda}\rangle_{s,\tau_{+}}\leq\lambda\int_{\Omega}|x|^{\tau_{+}}|\nu(x)|dx. \tag{3.6}\] (i) Assume \(\mu_{0}\leq\mu<0\). In this case \(\frac{2s-N}{2}\leq\tau_{+}<0\). Let \(r>0\) be small enough such that \(B_{4r}(0)\subset\Omega\) and set \(\lambda_{1}=\lambda r^{-\tau_{+}}\). Then by (2.20), (2.21) and (3.6), we have \[|\{x\in\Omega\setminus B_{r}(0):\;|u(x)|\geq\lambda\}| \leq|\{x\in\Omega\setminus B_{r}(0):\;|v_{\lambda_{1}}(x)|\geq\lambda_{1}\}|\] \[\leq C(N,s,\mu,r)\lambda^{-\frac{2N}{N-2s}}\int_{\Omega\setminus B_{r}(0)}|v_{\lambda_{1}}|^{\frac{2N}{N-2s}}dx\] \[\leq C(N,s,\mu,\Omega,r)\lambda^{-\frac{2N}{N-2s}}\int_{\Omega\setminus\{0\}}|v_{\lambda_{1}}|^{\frac{2N}{N-2s}}|x|^{\tau_{+}}dx\] \[\leq C(N,s,\mu,\Omega,r)\lambda^{-\frac{2N}{N-2s}}\langle v_{\lambda_{1}},v_{\lambda_{1}}\rangle_{s,\tau_{+}}^{\frac{N}{N-2s}}\] \[\leq C(N,s,\mu,\Omega,r)\lambda^{-\frac{N}{N-2s}}\left(\int_{\Omega\setminus\{0\}}|x|^{\tau_{+}}|\nu(x)|dx\right)^{\frac{N}{N-2s}},\] which implies (3.4). (ii) Assume \(\mu\geq 0\). In this case \(\tau_{+}\geq 0\). Set \(\lambda_{2}=(\sup_{x\in\Omega}|x|)^{-\tau_{+}}\lambda\), then by (2.20) and (3.6), we have \[|\{x\in\Omega:\;|u(x)|\geq\lambda\}| \leq C(N,s,\mu)\lambda^{-\frac{2N}{N-2s}}\int_{\{x\in\Omega:\;|v(x)|\geq\lambda|x|^{-\tau_{+}}\}}(|x|^{\tau_{+}}|v|)^{\frac{2N}{N-2s}}dx\] \[\leq C(N,s,\mu)\lambda^{-\frac{2N}{N-2s}}\int_{\Omega}\left(|x|^{\tau_{+}}|v_{\lambda_{2}}|\right)^{\frac{2N}{N-2s}}dx\] \[\leq C(N,s,\mu)\lambda^{-\frac{2N}{N-2s}}\langle v_{\lambda_{2}},v_{\lambda_{2}}\rangle^{\frac{N}{N-2s}}_{s,\tau_{+}}\] \[\leq C(N,s,\mu)\lambda^{-\frac{N}{N-2s}}\left(\int_{\Omega\setminus\{0\}}|x|^{\tau_{+}}|\nu(x)|dx\right)^{\frac{N}{N-2s}},\] which implies (3.5). Now we are ready to prove Proposition 3.1. Proof of Proposition 3.1.: By the linearity, we may assume that \(\|\nu\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}=1\). _Case 1:_ \(\mu_{0}\leq\mu<0\). In this case, \(\frac{2s-N}{2}\leq\tau_{+}<0\). Fix \(0<r<r_{0}\) small. Since \(\tau_{+}<0\), by Theorem 2.5, we have (noting that \(\|\nu\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}=1\)) \[|u(x)|\leq C_{1}|x|^{\tau_{+}},\quad\forall x\in B_{r}(0)\setminus\{0\},\] where \(C_{1}=C_{1}(N,\Omega,s,\mu,r)\). It follows that \[\{x\in B_{r}(0)\setminus\{0\}:|u(x)|>\lambda\}\subset B_{R_{1}}(0)\] where \[R_{1}=\left(C_{1}\lambda^{-1}\right)^{-\frac{1}{\tau_{+}}}.\] Hence, \[\int_{\{x\in B_{r}(0)\setminus\{0\}:|u(x)|>\lambda\}}|x|^{\tau_{+}}dx\leq\int_{B_{R_{1}}(0)}|x|^{\tau_{+}}dx=C(N,\Omega,s,\mu,r)\lambda^{\frac{N+\tau_{+}}{\tau_{+}}}.\] Since \(\tau_{+}\geq\frac{2s-N}{2}\), we have \(-\frac{N+\tau_{+}}{\tau_{+}}\geq\frac{N}{N-2s}\). 
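The last inequality on the exponents can be verified directly; we record the elementary computation here for the reader's convenience (it is not part of the original argument). Since \(\frac{2s-N}{2}\leq\tau_{+}<0\), we have \(-\frac{1}{\tau_{+}}\geq\frac{2}{N-2s}\), hence \[-\frac{N+\tau_{+}}{\tau_{+}}=-\frac{N}{\tau_{+}}-1\geq\frac{2N}{N-2s}-1=\frac{N+2s}{N-2s}\geq\frac{N}{N-2s},\] so that \(\lambda^{\frac{N+\tau_{+}}{\tau_{+}}}\leq\lambda^{-\frac{N}{N-2s}}\) for every \(\lambda>1\).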
From the above estimate, we can easily deduce that, for any \(\lambda>1\), \[\int_{\{x\in B_{r}(0)\setminus\{0\}:|u(x)|>\lambda\}}|x|^{\tau_{+}}dx\leq C(N,\Omega,s,\mu,r)\lambda^{-\frac{N}{N-2s}}. \tag{3.7}\] Next by (3.4) and since \(\tau_{+}<0\), we have, for any \(\lambda>1\), \[\int_{\{x\in\Omega\setminus B_{r}(0):|u(x)|>\lambda\}}|x|^{\tau_{+}}dx\leq r^{\tau_{+}}\big{|}\{x\in\Omega\setminus B_{r}(0):|u(x)|>\lambda\}\big{|}\leq C(N,\Omega,s,\mu,r)\lambda^{-\frac{N}{N-2s}}. \tag{3.8}\] By combining (3.7) and (3.8), we obtain, for any \(\lambda>1\), that \[\int_{\{x\in\Omega\setminus\{0\}:|u(x)|>\lambda\}}|x|^{\tau_{+}}dx\leq C(N,\Omega,s,\mu,r)\lambda^{-\frac{N}{N-2s}}. \tag{3.9}\] Then we can easily show that (3.9) holds for any \(\lambda>0\). _Case 2:_ \(\mu\geq 0\). In this case \(\tau_{+}\geq 0\). Put \(v=|x|^{-\tau_{+}}u\) and \(D_{\Omega}=\sup_{x\in\Omega}|x|^{\tau_{+}}\), then \[\int_{\{x\in\Omega\setminus\{0\}:\;|u(x)|\geq\lambda\}}|x|^{\tau_{+}}dx\leq\int_{\{x\in\Omega\setminus\{0\}:\;D_{\Omega}|v(x)|\geq\lambda\}}|x|^{\tau_{+}}dx.\] This and (3.5) imply (3.9) for any \(\lambda>0\). Finally, estimate (3.3) follows from (3.9), (3.1) and (3.2). 

### Existence and uniqueness

In this subsection, we aim to establish the existence and uniqueness of the weak solution to problem (1.6) with \(\ell=0\). First, we show the solvability of a semilinear problem involving the dual operator \((-\Delta)_{\gamma}^{s}\) in the variational framework. We assume that

* \((\mathbf{h}_{1})\) \(h\in C((\mathbb{R}^{N}\setminus\{0\})\times\mathbb{R})\cap L^{\infty}(\mathbb{R}^{N}\times\mathbb{R})\);
* \((\mathbf{h}_{2})\) the map \(t\mapsto h(\cdot,t)\) is nondecreasing and \(h(\cdot,0)=0\) in \(\mathbb{R}^{N}\setminus\{0\}\).

In the sequel, we will use the notations: \(H(x,t)=\int_{0}^{t}h(x,s)ds\), \((h\circ u)(x)=h(x,u(x))\), \((H\circ u)(x)=H(x,u(x))\) for \(x\in\mathbb{R}^{N}\setminus\{0\}\). **Definition 3.3**.: A function \(u\) is called a _variational solution_ to problem \[\left\{\begin{aligned} (-\Delta)_{\gamma}^{s}u+h\circ u&=f&\quad\text{in }\Omega,\\ u&=0&\quad\text{in }\mathbb{R}^{N}\setminus\Omega,\end{aligned}\right. \tag{3.10}\] if \(u\in H_{0}^{s}(\Omega;|x|^{\gamma})\) and \[\langle u,\xi\rangle_{s,\gamma}+(h\circ u,\xi)_{\gamma}=(f,\xi)_{\gamma},\quad\forall\,\xi\in H_{0}^{s}(\Omega;|x|^{\gamma}).\] **Proposition 3.4**.: _Assume \(\gamma\in[\frac{2s-N}{2},2s)\), \(\alpha\in\mathbb{R}\), \(f\in L^{p}(\Omega;|x|^{\alpha})\) with \(p\) satisfying (2.4) and \(h\) satisfies \((\mathbf{h}_{1})\) and \((\mathbf{h}_{2})\). Then problem (3.10) admits a unique variational solution._ Proof.: Under the assumptions \((\mathbf{h}_{1})\) and \((\mathbf{h}_{2})\), we consider the functional \[\mathscr{J}(\varphi):=\frac{1}{2}\|\varphi\|_{s,\gamma}^{2}+\int_{\Omega}(H\circ\varphi)|x|^{\gamma}dx-\int_{\Omega}f\varphi|x|^{\gamma}dx,\quad\varphi\in H_{0}^{s}(\Omega;|x|^{\gamma}).\] Since \(H\) is a nonnegative function, by proceeding similarly as in the proof of Theorem 2.2, we can show that \(\mathscr{J}\) is coercive and weakly lower semi-continuous in \(H_{0}^{s}(\Omega;|x|^{\gamma})\). Thus \(\mathscr{J}\) has a critical point \(v\in H_{0}^{s}(\Omega;|x|^{\gamma})\), which is a variational solution of (3.10). The uniqueness follows from Kato type inequality (2.6). **Theorem 3.5**.: _Assume \(\mu_{0}\leq\mu\), \(\nu_{i}\in\mathfrak{M}^{+}(\Omega\setminus\{0\};|x|^{\tau_{+}})\), \(i=1,2\), and \(g\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\) is a nondecreasing function such that \(g(0)=0\). 
Then there exist unique weak solutions \(u,u_{1},u_{2},v_{1},v_{2}\) of the following problems respectively_ \[\left\{\begin{aligned} \mathcal{L}_{\mu}^{s}u+g(u)& =\nu_{1}-\nu_{2}&\quad\text{in }\Omega,\\ u&=0&\quad\text{in }\mathbb{R}^{N} \setminus\Omega,\end{aligned}\right. \tag{3.11}\] \[\left\{\begin{aligned} \mathcal{L}_{\mu}^{s}u_{1}+g(u_{1})& =\nu_{1}&\quad\text{in }\Omega,\\ u_{1}&=0&\quad\text{in }\mathbb{R}^{N} \setminus\Omega,\end{aligned}\right. \tag{3.12}\] \[\left\{\begin{aligned} \mathcal{L}_{\mu}^{s}u_{2}-g(-u_{2})& =\nu_{2}&\quad\text{in }\Omega,\\ u_{2}&=0&\quad\text{in }\mathbb{R}^{N} \setminus\Omega,\end{aligned}\right. \tag{3.13}\] _and_ \[\left\{\begin{aligned} \mathcal{L}_{\mu}^{s}v_{i}& =\nu_{i}&\quad\text{in }\Omega,\\ v_{i}&=0&\quad\text{in }\mathbb{R}^{N} \setminus\Omega,\end{aligned}\right. \tag{3.14}\] _such that_ \[-v_{2}\leq-u_{2}\leq u\leq u_{1}\leq v_{1}\quad\text{in }\Omega\setminus\{0\}. \tag{3.15}\] Proof.: Existence. **Step 1.** First we assume that \(\nu_{i}\), \(i=1,2\), has compact support in \(\Omega\setminus\{0\}\). Let \(\{\zeta_{\delta}\}_{\delta>0}\) be the sequence of standard mollifiers. Put \(\nu_{i,\delta}=\zeta_{\delta}*\nu_{i}\) then there exists an open set \(D\Subset\Omega\setminus\{0\}\) such that \(\nu_{i,\delta}\in C_{0}^{\infty}(D)\) for \(\delta>0\) small enough. Then \[\int_{\Omega}d(x)^{s}|x|^{\tau_{+}}\nu_{\delta}dx\to\int_{\Omega \setminus\{0\}}d(x)^{s}|x|^{\tau_{+}}d\nu\quad\text{as $\delta\to 0$} \tag{3.16}\] and \[\|\nu_{i,\delta}\|_{L^{1}(\Omega;|x|^{\tau_{+}})}\leq c\|\nu_{i} \|_{\mathfrak{M}(\Omega\setminus\{0\};d(x)^{s}|x|^{\tau_{+}})},\quad\forall \delta>0. \tag{3.17}\] Define the function \(h:\mathbb{R}^{N}\setminus\{0\}\times\mathbb{R}\to\mathbb{R}\) by \(h(x,t)=g(|x|^{\tau_{+}}t)\) for \(x\in\mathbb{R}^{N}\setminus\{0\}\), \(t\in\mathbb{R}\). By Proposition 3.4, there exist variational solutions \(\tilde{u}_{\delta},\tilde{u}_{1,\delta},\tilde{u}_{2,\delta},\tilde{v}_{1, \delta},\tilde{v}_{2,\delta}\in H^{s}_{0}(\Omega;|x|^{\tau_{+}})\) of the following problems respectively \[\begin{cases}(-\Delta)^{s}_{\tau_{+}}\tilde{u}_{\delta}+h\circ \tilde{u}_{\delta}=\nu_{1,\delta}-\nu_{2,\delta}&\text{in $\Omega$},\\ \tilde{u}_{\delta}=0&\text{in $\mathbb{R}^{N}\setminus\Omega$},\end{cases}\] \[\begin{cases}(-\Delta)^{s}_{\tau_{+}}\tilde{u}_{1,\delta}+h\circ \tilde{u}_{1,\delta}=\nu_{1,\delta}&\text{in $\Omega$},\\ \tilde{u}_{1,\delta}=0&\text{in $\mathbb{R}^{N}\setminus\Omega$},\end{cases}\] \[\begin{cases}(-\Delta)^{s}_{\tau_{+}}\tilde{u}_{2,\delta}-h\circ (-\tilde{u}_{2,\delta})=\nu_{2,\delta}&\text{in $\Omega$},\\ \tilde{u}_{2,\delta}=0&\text{in $\mathbb{R}^{N}\setminus\Omega$},\end{cases}\] and \[\begin{cases}(-\Delta)^{s}_{\tau_{+}}\tilde{v}_{i,\delta}=\nu_{i, \delta}&\text{in $\Omega$},\\ \tilde{v}_{i,\delta}=0&\text{in $\mathbb{R}^{N}\setminus\Omega$}.\end{cases}\] We infer from Kato type inequality (2.6) that \(\tilde{u}_{i,\delta}\geq 0\), \(\tilde{v}_{i,\delta}\geq 0\), \(i=1,2\), and \[-\tilde{v}_{2,\delta}\leq-\tilde{u}_{2,\delta}\leq\tilde{u}_{n} \leq\tilde{u}_{1,\delta}\leq\tilde{v}_{1,\delta}\quad\text{a.e. in $\Omega\setminus\{0\}$}.\] Put \(u_{\delta}=|x|^{\tau_{+}}\tilde{u}_{\delta}\), \(u_{i,\delta}=|x|^{\tau_{+}}\tilde{u}_{\delta}\) and \(v_{i,n}=|x|^{\tau_{+}}\tilde{v}_{i,\delta}\), then \(u_{i,\delta}\geq 0\), \(v_{i,\delta}\geq 0\) and \[-v_{2,\delta}\leq-u_{2,\delta}\leq u_{\delta}\leq u_{1,\delta} \leq v_{1,\delta}\quad\text{a.e. in $\Omega\setminus\{0\}$}. 
\tag{3.18}\] This implies that \[|u_{\delta}|\leq u_{1,\delta}+u_{2,\delta}\leq v_{1,\delta}+v_{2, \delta}\quad\text{a.e. in $\Omega\setminus\{0\}$}.\] Moreover, since \(g(u_{\delta}(x))=(h\circ\tilde{u}_{\delta})(x)\) and \(g(u_{i,\delta}(x))=(h\circ\tilde{u}_{i,\delta})(x)\), \(i=1,2\), it follows that \[|g(u_{\delta})|\leq g(u_{1,\delta})-g(-u_{2,\delta})\quad\text{a.e. in $\Omega\setminus\{0\}$}.\] From Theorem 2.3, we find that \(u_{\delta}\), \(u_{1,\delta}\), \(u_{2,\delta}\), \(v_{1,\delta}\) and \(v_{2,\delta}\) are respectively weak solutions to the following problems \[\begin{cases}\mathcal{L}^{s}_{\mu}u_{\delta}+g(u_{\delta})=\nu_{ 1,\delta}-\nu_{2,\delta}&\text{in $\Omega$},\\ u_{\delta}=0&\text{in $\mathbb{R}^{N}\setminus\Omega$},\end{cases} \tag{3.19}\] \[\begin{cases}\mathcal{L}^{s}_{\mu}u_{1,\delta}+g(u_{1,\delta})= \nu_{1,\delta}&\text{in $\Omega$},\\ u_{1,\delta}=0&\text{in $\mathbb{R}^{N}\setminus\Omega$},\end{cases} \tag{3.20}\] \[\begin{cases}\mathcal{L}^{s}_{\mu}u_{2,\delta}-g(-u_{2,\delta})= \nu_{2,\delta}&\text{in $\Omega$},\\ u_{2,\delta}=0&\text{in $\mathbb{R}^{N}\setminus\Omega$},\end{cases} \tag{3.21}\] and \[\begin{cases}\mathcal{L}_{\mu}^{s}v_{i,\delta}=\nu_{i,\delta}&\text{ in }\Omega,\\ v_{i,\delta}=0&\text{ in }\mathbb{R}^{N}\setminus\Omega.\end{cases}\] Since \(g\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\), we can modify the approximation argument in [9, Theorem 5.1] in order to show that there exist functions \(u,u_{1},u_{2},v_{1},v_{2}\) such that \(\{u_{\delta}\}\), \(\{u_{i,\delta}\}\) and \(\{v_{i,\delta}\}\) converge to \(u\), \(u_{i}\) and \(v_{i}\), \(i=1,2\), a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\), for any \(b<2s-\tau_{+}\), respectively as \(\delta\to 0\). From Theorem 2.3, \(v_{i}\) is the solution to problem (3.14), \(i=1,2\). Since \(g\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\), we deduce from the dominated convergence theorem that \(\{g(u_{\delta})\}\), \(\{g(u_{i,\delta})\}\) and \(\{g(v_{i,\delta})\}\) convergence to \(g(u)\), \(g(u_{i})\) and \(g(v_{i})\), \(i=1,2\), a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{\tau_{+}})\) respectively as \(\delta\to 0\). Therefore, by passing to the limit in the weak formulation for problems (3.19)-(3.21), we derive that \(u\), \(u_{1}\), \(u_{2}\) are solutions to problems (3.11)-(3.13) respectively. The uniqueness for problems (3.11)-(3.13) follows from Kato type inequality (2.8) and the monotonicity of \(g\). Moreover, from (3.18), we obtain (3.15). **Step 2.** Next we drop the assumption that \(\nu_{i}\), \(i=1,2\), has compact support. Let \(\{O_{l}\}_{l\in\mathbb{N}}\) be a smooth exhaustion of \(\Omega\setminus\{0\}\), i.e. smooth open sets \(\{O_{l}\}_{l\in\mathbb{N}}\) such that \[O_{l}\Subset O_{l+1}\Subset\Omega\setminus\{0\}\quad\text{and}\quad\cup_{l \in\mathbb{N}}O_{l}=\Omega\setminus\{0\}.\] Set \(\nu_{i,l}=\mathbf{1}_{\overline{O}_{l}}\nu_{i}\), \(i=1,2\), and let \(u_{l}\), \(u_{1,l}\), \(u_{2,l}\), \(v_{i,l}\) be respectively the nonnegative weak solutions to (3.11), (3.12), (3.13) and (3.14) with \(\nu_{i}\) replaced by \(\nu_{i,l}\), \(i=1,2\). Then we have \[-v_{2,l}\leq-u_{2,l}\leq u_{l}\leq u_{1,l}\leq v_{1,l}\quad\text{a.e. in }\Omega \setminus\{0\}. 
\tag{3.22}\] Let \(b<2s-\tau_{+}\), then \(u_{l}\in L^{1}(\Omega;|x|^{-b})\) and \(u_{l}\) satisfies the weak formulation \[\int_{\Omega}u_{l}(-\Delta)^{s}_{\tau_{+}}\psi dx+\int_{\Omega}g(u_{l})\psi|x |^{\tau_{+}}dx=\int_{\Omega\setminus\{0\}}\psi|x|^{\tau_{+}}d(\nu_{1,l}-\nu_{ 2,l}) \tag{3.23}\] for all \(0\leq\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\). We know from Step 1 that \(u_{l}\) is the limit of the sequence of \(\{u_{l,\delta}\}_{\delta>0}\), where \(u_{l,\delta}\) is the weak solution of \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g(u)=\nu_{1,l,\delta}-\nu_{2,l,\delta}& \text{ in }\Omega,\\ u_{\delta}=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases}\] where \(\nu_{i,l,\delta}:=\zeta_{\delta}*\nu_{i,l}\), \(i=1,2\). For \(l>l^{\prime}\), since \(\nu_{i,l}\geq\nu_{i,l^{\prime}}\), it follows that \(\nu_{i,l,\delta}\geq\nu_{i,l^{\prime},\delta}\) for \(i=1,2\) and any \(\delta>0\). Let \(\xi_{b}\) be the solution of \[\begin{cases}\mathcal{L}_{\mu}^{s}\xi=|x|^{-b}&\text{ in }\Omega,\\ \quad\xi=0&\text{ in }\mathbb{R}^{N}\setminus\Omega.\end{cases} \tag{3.24}\] Then \(\xi_{b}\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\) and \(|\xi_{b}|\leq Cd^{s}\) a.e. in \(\Omega\) (see [9, estimate (1.16)]). By using Kato type inequality (2.9) for \(u_{l,\delta}-u_{l^{\prime},\delta}\) and \(\psi=\xi_{b}\) as the test function, we obtain that \[\begin{split}&\int_{\Omega}|u_{l,\delta}-u_{l^{\prime},\delta}||x |^{-b}dx+\int_{\Omega}|g(u_{l,\delta})-g(u_{l^{\prime},\delta})|\xi_{b}|x|^{ \tau_{+}}dx\\ &\leq\int_{\Omega\setminus\{0\}}\xi_{b}\text{sign}(u_{l,\delta}-u _{l^{\prime},\delta})|x|^{\tau_{+}}[(\nu_{1,l,\delta}-\nu_{1,l^{\prime},\delta}) -(\nu_{2,l,\delta}-\nu_{2,l^{\prime},\delta})]dx\\ &\leq C\int_{\Omega\setminus\{0\}}|x|^{\tau_{+}}(\nu_{1,l,\delta}-\nu _{1,l^{\prime},\delta})dx+C\int_{\Omega\setminus\{0\}}|x|^{\tau_{+}}(\nu_{2,l, \delta}-\nu_{2,l^{\prime},\delta})dx.\end{split} \tag{3.25}\] We note that \(u_{l,\delta}-u_{l^{\prime},\delta}\to u_{l}-u_{l^{\prime}}\) in \(L^{1}(\Omega;|x|^{-b})\), \(g(u_{l,\delta})-g(u_{l^{\prime},\delta})\to g(u_{l})-g(u_{l^{\prime}})\) in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(\delta\to 0\) and for any \(l>0\), \[\int_{\Omega\setminus\{0\}}|x|^{\tau_{+}}\nu_{i,l,\delta}dx\to\int_{\Omega \setminus\{0\}}|x|^{\tau_{+}}d\nu_{i,l}\quad\text{as $\delta\to 0$.}\] Therefore, by letting \(\delta\to 0\) in (3.25), we find \[\|u_{l}-u_{l^{\prime}}\|_{L^{1}(\Omega;|x|^{-b})}\leq C(\|\nu_{1,l}-\nu_{1,l^ {\prime}}\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}+\|\nu_{2,l}- \nu_{2,l^{\prime}}\|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}). \tag{3.26}\] Since \(\nu_{i,l}\uparrow\nu_{i}\) as \(l\to\infty\), we infer from (3.26) that \(\{u_{l}\}_{l\in\mathbb{N}}\) is a Cauchy sequence in \(L^{1}(\Omega;|x|^{-b})\). Therefore there exists \(u\in L^{1}(\Omega;|x|^{-b})\) such that \(u_{l}\to u\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\). Since \(g\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\), by the dominated convergence theorem, we deduce that \(g(u_{l})\to g(u)\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{\tau_{+}})\). Thus by letting \(l\to\infty\) in (3.23), we derive that \(u\) is a weak solution of (3.11). 
By a similar argument, we can show that \(\{u_{i,l}\}_{l\in\mathbb{N}}\) and \(\{v_{i,l}\}_{l\in\mathbb{N}}\) converge to \(u_{i}\) and \(v_{i}\) in \(L^{1}(\Omega;|x|^{-b})\) respectively as \(l\to\infty\) and \(\{g(u_{i,l})\}_{l\in\mathbb{N}}\) converges to \(g(u_{i})\), \(i=1,2\), in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(l\to\infty\). Moreover, \(u_{1},u_{2},v_{i}\) are solutions of problems (3.12), (3.13) and (3.14). Estimate (3.15) follows from (3.22). Uniqueness. The uniqueness for problem (3.14) was established in Theorem 2.3. The uniqueness for problems (3.11)-(3.13) follows from Kato's inequality (2.8) and the monotonicity of \(g\). **Lemma 3.6**.: _Assume_ \[\int_{1}^{\infty}t^{-q-1}(\ln t)^{m}(g(t)-g(-t))dt<\infty \tag{3.27}\] _for \(q,m\in\mathbb{R}\), \(q>0\) and \(m\geq 0\). Let \(v\) be a function defined in \(\Omega\setminus\Sigma\). For \(t>0\) and \(G\subseteq\Omega\setminus\{0\}\), set_ \[E_{G}(t):=\{x\in\Omega\setminus\{0\}:|v(x)|>t\}\quad\text{and}\quad e_{G}(t):= \int_{E_{G}(t)}|x|^{\tau_{+}}dx.\] _Assume that there exists a positive constant \(C_{0}\) such that_ \[e_{G}(t)\leq C_{0}t^{-q}(\ln s)^{m},\quad\forall t>e^{\frac{2m}{q}}. \tag{3.28}\] _Then for any \(t_{0}>e^{\frac{2m}{q}}\,,\) there holds_ \[\int_{G}g(|v|)|x|^{\tau_{+}}dx \leq g(t_{0})\int_{G}|x|^{\tau_{+}}dx+C_{0}q\int_{t_{0}}^{\infty} t^{-q-1}(\ln t)^{m}g(t)dt, \tag{3.29}\] \[-\int_{G}g(-|v|)|x|^{\tau_{+}}dx \leq-g(-t_{0})\int_{G}|x|^{\tau_{+}}dx-C_{0}q\int_{t_{0}}^{\infty }t^{-q-1}(\ln t)^{m}g(-t)dt. \tag{3.30}\] Proof.: We note that \(g(|v|)\geq g(0)=0\). Let \(G\subseteq\Omega\setminus\{0\}\) and \(t_{0}>1\) to be determined later on. Using the fact that \(g\) is nondecreasing, we obtain \[\int_{G}(|v|)|x|^{\tau_{+}}dx \leq\int_{G\setminus E_{t_{0}}(v)}g(|v|)|x|^{\tau_{+}}dx+\int_{E _{t_{0}}(v)}g(|v|)|x|^{\tau_{+}}dx\] \[\leq g(t_{0})e(t)-\int_{t_{0}}^{\infty}g(t)de(t).\] From (3.27), we deduce that there exists an increasing sequence \(\{T_{n}\}\) such that \[\lim_{T_{n}\to\infty}T_{n}^{-q}(\ln T_{n})^{m}g(T_{n})=0. \tag{3.31}\] For \(T_{n}>t_{0}\), we have \[-\int_{t_{0}}^{T_{n}}g(t)de(t) =-g(T_{n})e(T_{n})+g(t_{0})e(t_{0})+\int_{t_{0}}^{T_{n}}e(t)dg(t)\] \[\leq-g(T_{n})e(T_{n})+g(t_{0})e(t_{0})+C_{0}\int_{t_{0}}^{T_{n}}t^ {-q}(\ln t)^{m}dg(t)\] \[\leq(CT_{n}^{-q}(\ln T_{n})^{m}-e(T_{n}))g(T_{n})-C_{0}\int_{t_{0 }}^{T_{n}}(t^{-q}(\ln t)^{m})^{\prime}g(t)dt.\] Here in the last estimate, we have used (3.28). Note that if we choose \(t_{0}>e^{\frac{2m}{q}}\) then \[-qt^{-q-1}(\ln t)^{m}<(t^{-q}(\ln t)^{m})^{\prime}<-\frac{q}{2}t^{-q-1}(\ln t)^ {m}\quad\forall t\geq t_{0}. \tag{3.32}\] Combining (3.31)-(3.32) and then letting \(n\to\infty\), we obtain \[-\int_{t_{0}}^{\infty}g(t)de(t)<C_{0}q\int_{t_{0}}^{\infty}t^{-q-1}(\ln t)^{m }g(t)dt.\] Thus we have proved estimate (3.29). By applying estimate (3.29) with \(g\) replaced by \(\tilde{g}(t)=-g(-t)\), we obtain (3.30). **Lemma 3.7**.: _Assume that \(\{g_{n}\}_{n\in\mathbb{N}}\) is a sequence of nondecreasing continuous functions vanishing at \(0\) such that_ \[\begin{split}& g_{n}(t)-g_{n}(-t)\leq C_{1}(t),\quad\forall t\geq 1,\,\forall n\in\mathbb{N},\\ &\int_{1}^{\infty}(g_{n}(t)-g_{n}(-t))t^{-q-1}dt<C_{2},\quad \forall n\in\mathbb{N},\end{split} \tag{3.33}\] _for some \(q>1\) and positive constants \(C_{1},C_{2}\) independent of \(n\). Let \(\{v_{n}\}_{n\in\mathbb{N}}\) be a uniformly bounded in \(L^{q}_{w}(\Omega;|x|^{\tau_{+}})\). 
Then the sequence \(\{g_{n}(v_{n})\}_{n\in\mathbb{N}}\) is uniformly bounded and equi-integrable in \(L^{1}(\Omega;|x|^{\tau_{+}})\)._ _Proof_. By the assumption, there exists a positive constant \(C\) such that \[\|v_{n}\|_{L^{q}_{\omega}(\Omega\setminus\{0\};|x|^{\tau_{+}})}\leq C,\quad \forall n\in\mathbb{N},\] which implies that \[\int_{\{x\in\Omega\setminus\{0\}:|v_{n}(x)|>t\}}|x|^{\tau_{+}}dx\leq t^{-q}\|v _{n}\|_{L^{q}_{\omega}(\Omega\setminus\{0\};|x|^{\tau_{+}})}\leq Ct^{-q},\quad \forall t>0.\] Take arbitrarily \(t_{0}\geq 1\) and \(G\subseteq\Omega\setminus\{0\}\). By Lemma 3.6, we obtain \[\begin{split}\int_{G}|g_{n}(v_{n})||x|^{\tau_{+}}dx& \leq\int_{\Omega\setminus\{0\}}(g_{n}(|v_{n}|)-g_{n}(-|v_{n}|))|x|^ {\tau_{+}}dx\\ &\leq(g_{n}(t_{0})-g_{n}(-t_{0}))\int_{G}|x|^{\tau_{+}}dx+Cq\int_{ t_{0}}^{\infty}t^{-q-1}(g_{n}(t)-g_{n}(-t))dt\\ &\leq C(N,\Omega,s,\mu,q,C_{1}(t_{0}),C_{2}).\end{split} \tag{3.34}\] Here in the last estimate we have used the assumption (3.33). Therefore, by taking \(t_{0}=1\) and \(G=\Omega\setminus\{0\}\) in (3.34), we derive that \(\{g_{n}(v_{n})\}\) is uniformly bounded in \(L^{1}(\Omega;|x|^{\tau_{+}})\). From (3.34), we can also deduce that \(\{g_{n}(v_{n})\}\) is equi-integrable in \(L^{1}(\Omega;|x|^{\tau_{+}})\). _Proof of Theorem 1.3_. Set \(g_{n}:=\max\{-n,\min\{g,n\}\}\) with \(n\in\mathbb{N}\) and let \(\{\zeta_{\delta}\}_{\delta>0}\) be the sequence of standard mollifiers. Let \(r_{0}>0\) small such that \(B_{4r_{0}}(0)\subset\Omega\). For \(\delta<\frac{1}{8}r<\frac{1}{8}r_{0}\), put \[\nu_{r,\delta}=\zeta_{\delta}*(\mathbf{1}_{\Omega\setminus B_{r}(0)}\nu), \quad\nu_{1,r,\delta}=\nu_{r,\delta}^{+},\quad\nu_{2,r,\delta}=\nu_{r,\delta} ^{-}.\] Then \(\{\nu_{r,\delta}\}\), \(\{\nu_{1,r,\delta}\}\), \(\{\nu_{2,r,\delta}\}\) converge weakly to \(\mathbf{1}_{\Omega\setminus B_{r}(0)}\nu\), \(\mathbf{1}_{\Omega\setminus B_{r}(0)}\nu^{+}\), \(\mathbf{1}_{\Omega\setminus B_{r}(0)}\nu^{-}\) in \(\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) respectively. By Theorem 3.5, there exist weak solutions \(u_{r,\delta,n}\), \(u_{1,r,\delta,n}\), \(u_{2,r,\delta,n}\), \(v_{1,r,\delta}\), \(v_{2,r,\delta}\) of the following problems respectively \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g_{n}(u)=\nu_{r,\delta}&\text{ in }\Omega,\\ u=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases}\] \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g_{n}(u)=\nu_{1,r,\delta}&\text{ in }\Omega,\\ u=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases} \tag{3.34}\] \[\begin{cases}\mathcal{L}_{\mu}^{s}u-g_{n}(-u)=\nu_{2,r,\delta}&\text{ in }\Omega,\\ u=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases} \tag{3.35}\] and \[\begin{cases}\mathcal{L}_{\mu}^{s}v=\nu_{i,r,\delta}&\text{ in }\Omega,\\ v=0&\text{ in }\mathbb{R}^{N}\setminus\Omega.\end{cases}\] Moreover, for any \(b<2s-\tau_{+}\), \(u_{r,\delta,n}\), \(u_{1,r,\delta,n}\), \(u_{2,r,\delta,n}\), \(v_{1,r,\delta}\), \(v_{2,r,\delta}\in L^{1}(\Omega;|x|^{-b})\). We also have \(u_{i,r,\delta,n}\geq 0\), \(v_{i,r,\delta}\geq 0\) and \[-v_{2,r,\delta}\leq-u_{2,r,\delta,n}\leq u_{r,\delta,n}\leq u_{1,r,\delta,n} \leq v_{1,r,\delta}. \tag{3.36}\] Moreover, since \(g_{n}\) is nondecreasing with \(g_{n}(0)=0\), we deduce that \(\{u_{i,r,\delta,n}\}\), \(i=1,2\), is nonincreasing with respect to \(n\). Therefore, there exist positive functions \(u_{i,r,\delta}\), \(i=1,2\), such that \(u_{i,r,\delta,n}\downarrow u_{i,r,\delta}\) a.e. 
in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\) as \(n\to+\infty\) for any \(b<2s-\tau_{+}\). Using Dini's lemma with respect to the sequence \(\{g_{n}\}_{n\in\mathbb{N}}\) in compact sets of \((0,+\infty)\), we deduce that \(g_{n}(u_{i,r,\delta,n})\to g(u_{i,r,\delta})\) a.e. in \(\Omega\) as \(n\to+\infty\). Since \(0\leq u_{i,r,\delta,n}\leq v_{i,r,\delta}\in L_{w^{-2s}}^{\frac{N}{N-2s}}( \Omega\setminus\{0\};|x|^{\tau_{+}})\) (due to Proposition 3.1), it follows that \(\{u_{i,r,\delta,n}\}\) is uniform bounded in \(L_{w}^{\frac{N}{N-2s}}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) with respect to \(n\). From Lemma 3.7, we infer that the sequences \(\{g_{n}(u_{i,r,\delta,n})\}_{n\in\mathbb{N}}\), \(i=1,2\), are uniformly bounded and equi-integrable in \(L^{1}(\Omega;|x|^{\tau_{+}})\) with respect to \(n\). By Vitali's convergence theorem, \(g_{n}(u_{i,r,\delta,n})\to g(u_{i,r,\delta})\) in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(n\to+\infty\). By letting \(n\to+\infty\) in the weak formulation for problem (3.34) and (3.35), we derive that \(u_{1,r,\delta}\) and \(u_{2,r,\delta}\) are respectively weak solutions to the problems \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g(u)=\nu_{1,r,\delta}&\text{ in }\Omega,\\ u=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases} \tag{3.37}\] \[\begin{cases}\mathcal{L}_{\mu}^{s}u-g(-u)=\nu_{2,r,\delta}&\text{ in }\Omega,\\ u=0&\text{ in }\mathbb{R}^{N}\setminus\Omega.\end{cases} \tag{3.38}\] From (3.36), we deduce that \[\begin{split}|u_{r,\delta,n}|&\leq u_{1,r,\delta,n}+u_{2,r, \delta,n}\leq v_{1,r,\delta}+v_{2,r,\delta},\\ |g_{n}(u_{r,\delta,n})|&\leq g_{n}(u_{1,r,\delta,n})-g_{n}(-u_{2,r,\delta,n}),\end{split} \tag{3.39}\] which in turn implies that \(\{u_{r,\delta}\}_{n\in\mathbb{N}}\) is uniformly bounded in \(L^{1}(\Omega;|x|^{-b})\) for any \(b<2s-\tau_{+}\), as well as \(\{g_{n}(u_{i,r,\delta,n})\}_{n\in\mathbb{N}}\) is uniformly bounded in \(L^{1}(\Omega;|x|^{\tau_{+}})\). By using a similar argument as in the proof of Theorem 3.5, we derive that \(\{u_{r,\delta,n}\}\) converges to a function \(u_{r,\delta}\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\) as \(n\to+\infty\). We note that \(g_{n}^{+}=\min\{g^{+},n\}\) and \(g_{n}^{-}=\min\{g^{-},n\}\), hence \(\{g_{n}^{+}\}\) and \(\{g_{n}^{-}\}\) are nondecreasing sequences of bounded continuous nonnegative functions and \(g_{n}^{\pm}\uparrow g^{\pm}\) as \(n\to\infty\). Using Dini's lemma with respect to \(\{g_{n}^{\pm}\}\) in compact sets of \((0,+\infty)\), we deduce that \(g_{n}^{\pm}(u_{r,\delta,n})\to g^{\pm}(u_{r,\delta})\) a.e. in \(\Omega\setminus\{0\}\) as \(n\to+\infty\), consequently, \(g_{n}(u_{r,\delta,n})\to g(u_{r,\delta})\) a.e. in \(\Omega\). By (3.40) and the generalized dominated convergence theorem, we deduce that \(g_{n}(u_{r,\delta,n})\to g(u_{r,\delta})\) in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(n\to+\infty\). 
Passing to the limit, we deduce that \(u_{r,\delta}\) is a weak solution of \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g(u)=\nu_{r,\delta}&\text{ in }\Omega,\\ \hskip 14.226378ptu=0&\text{ in }\mathbb{R}^{N}\setminus\Omega.\end{cases} \tag{3.41}\] From (3.37) and (3.40), we deduce that \(u_{i,r,\delta}\geq 0\) and \[-v_{2,r,\delta}\leq-u_{2,r,\delta}\leq u_{r,\delta}\leq u_{1,r, \delta}\leq v_{1,r,\delta},\] \[|u_{r,\delta}|\leq u_{1,r,\delta}+u_{2,r,\delta}\leq v_{1,r, \delta}+v_{2,r,\delta},\] \[|g(u_{r,\delta})|\leq g(u_{1,r,\delta})-g(-u_{2,r,\delta}).\] By Theorem 2.3, for any \(b<2s-\tau_{+}\), we have \[\|v_{i,r,\delta}\|_{L^{1}(\Omega;|x|^{-b})}\leq C(N,\Omega,s,\mu,b)\|\nu_{i,r, \delta}\|_{L^{1}(\Omega;|x|^{\tau_{+}})}\leq C(N,\Omega,s,\mu,b,r)\|\nu_{i,r} \|_{\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}.\] Therefore \[\|u_{r,\delta}\|_{L^{1}(\Omega;|x|^{-b})}+\sum_{i=1,2}\|u_{i,r,\delta}\|_{L^{1 }(\Omega;|x|^{-b})}\leq C(N,\Omega,s,\mu,b,r)\|\nu_{i,r}\|_{\mathfrak{M}( \Omega\setminus\{0\};|x|^{\tau_{+}})}. \tag{3.42}\] As above, we can show that the sequences \(\{g(u_{r,\delta})\}_{\delta}\), \(\{g(u_{i,r,\delta})\}_{\delta}\), \(i=1,2\), are uniformly bounded and equi-integrable in \(L^{1}(\Omega;|x|^{\tau_{+}})\) with respect to \(\delta\). Again, by a similar argument as in the proof of Theorem 3.5, we deduce that there exist functions \(u_{r}\), \(u_{1,r}\), \(u_{2,r}\) such that \(\{u_{r,\delta}\}_{\delta>0}\), \(\{u_{1,r,\delta}\}_{\delta>0}\), \(\{u_{2,r,\delta}\}_{\delta>0}\) converge to \(u_{r}\), \(u_{1,r}\) and \(u_{2,r}\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\) for any \(b<2s-\tau_{+}\) respectively as \(\delta\to 0\). This implies that \(\{g(u_{r,\delta})\}_{\delta>0}\), \(\{g(u_{1,r,\delta})\}_{\delta>0}\), \(\{g(u_{2,r,\delta})\}_{\delta>0}\) converge to \(g(u_{r})\), \(g(u_{1,r})\) and \(g(u_{2,r})\) a.e. in \(\Omega\setminus\{0\}\) as \(\delta\to 0\) respectively. Employing an analogous argument as above, we can show that \(\{g(u_{2,r,\delta})\}_{\delta>0}\) converge to \(g(u_{r})\), \(g(u_{1,r})\) and \(g(u_{2,r})\) in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(\delta\to 0\). By passing to the limit in the weak formulation for problems (3.41), (3.38), (3.39), we deduce that \(u_{r}\), \(u_{1,r}\), \(u_{2,r}\) are respectively weak solutions to the following problems \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g(u)=\nu_{r}&\text{ in }\Omega,\\ \hskip 14.226378ptu=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases}\] \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g(u)=\nu_{1,r}&\text{ in }\Omega,\\ \hskip 14.226378ptu=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases}\] \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g(u)=\nu_{2,r}&\text{ in }\Omega,\\ \hskip 14.226378ptu=0&\text{ in }\mathbb{R}^{N}\setminus\Omega.\end{cases}\] Proceeding as in _Step 2_ of the proof of Theorem 3.5, we can show that there exist functions \(u,u_{i},v_{i}\), \(i=1,2\), such that \(\{u_{r}\}\), \(\{u_{i,r}\}\), \(\{v_{i,r}\}\) converge to \(u\), \(u_{i}\), \(v_{i}\), \(i=1,2\). Moreover, \(u\), \(u_{1}\), \(u_{2}\), \(v_{i}\) are the unique weak solutions to problems (3.11), (3.12), (3.13), (3.14) respectively. Finally, estimate (1.13) follows from (3.42) by letting \(\delta\to 0\) and \(r\to 0\) successively. ## 4. 
Semilinear problem with Dirac measures concentrated at \(0\)

Let \(\ell\in\mathbb{R}\), let \(\Omega^{\prime}\) be an open bounded domain such that \(\Omega\Subset\Omega^{\prime}\), and let \(0<\varepsilon\leq\frac{d(0)}{16}.\) Set \(H(x):=\ell\Phi_{s,\mu}^{\Omega}(x)|x|^{-\tau_{+}}\) for \(x\neq 0\), where \(\Phi_{s,\mu}^{\Omega}\) is given in (1.5). Inspired by [26], we will prove the existence by means of an associated monotone operator. Set \(\Omega^{\prime}_{\varepsilon}=\Omega^{\prime}\setminus B_{\varepsilon}(0)\) and \(\Omega_{\varepsilon}=\Omega\setminus B_{\varepsilon}(0).\) For any \(u,v\in H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}}),\) we define the operators \[\mathcal{A}_{1}u(v):=\frac{C_{N,s}}{2}\int_{\Omega^{\prime}_{\frac{\varepsilon}{2}}}\int_{\Omega^{\prime}_{\frac{\varepsilon}{2}}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\] and \[\mathcal{A}_{2}u(v):=C_{N,s}\int_{\mathbb{R}^{N}\setminus\Omega^{\prime}_{\frac{\varepsilon}{2}}}\int_{\Omega_{\varepsilon}}\frac{(u(y)-H(x))v(y)}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx+\int_{\Omega_{\varepsilon}}g(|x|^{\tau_{+}}u)v|x|^{\tau_{+}}dx,\] where \(g\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\) is a nondecreasing function such that \(g(0)=0.\) We set \[\mathcal{A}u(v):=\mathcal{A}_{1}u(v)+\mathcal{A}_{2}u(v)\quad\text{for }u,v\in H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}}). \tag{4.1}\] Finally, set \(M=\sup_{t\in\mathbb{R}}|g(t)|.\) **Lemma 4.1**.: _Let \(u\in H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})\). Then \(\mathcal{A}u\in(H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}}))^{*}.\)_ Proof.: Let \(v\in H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}}),\) then using the Holder inequality, we can easily show that \[|\mathcal{A}_{1}u(v)|\leq C(N,s,\Omega^{\prime},\varepsilon)\left\|u\right\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})}\left\|v\right\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})}.\] By the Holder inequality and the definition of \(H\), we have \[\bigg{|}\int_{\mathbb{R}^{N}\setminus\Omega^{\prime}_{\frac{\varepsilon}{2}}}\int_{\Omega_{\varepsilon}}\frac{(u(y)-H(x))v(y)}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\bigg{|}\] \[\leq C(\Omega,\Omega^{\prime},\varepsilon,N,s)\bigg{(}\left\|u\right\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})}+\int_{\mathbb{R}^{N}\setminus B_{d(0)}(0)}\frac{H(x)|x|^{\tau_{+}}}{1+|x|^{N+2s}}dx\bigg{)}\left\|v\right\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})}\] \[\leq C(\Omega,\Omega^{\prime},\varepsilon,N,s,\ell)(\left\|u\right\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})}+1)\left\|v\right\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})}.\] Again, by the Holder inequality, we also obtain \[\bigg{|}\int_{\Omega_{\varepsilon}}g(|x|^{\tau_{+}}u)v|x|^{\tau_{+}}dx\bigg{|}\leq C(\Omega,M,\varepsilon)\left\|v\right\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})}.\] From the preceding estimates and (4.1), we obtain \[|\mathcal{A}u(v)|\leq C(N,s,\Omega,\Omega^{\prime},\varepsilon,\ell,M,\|u\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})})\left\|v\right\|_{H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})},\] which implies the desired result. Set \[\mathcal{K}_{\varepsilon}:=\big{\{}v\in H^{s}(\Omega^{\prime}_{\frac{\varepsilon}{2}}):\;v=H\text{ a.e. in }\;\mathbb{R}^{N}\setminus\Omega_{\varepsilon}\big{\}}. \tag{4.2}\] We note that \(\mathcal{K}_{\varepsilon}\) is nonempty, since \(H\in\mathcal{K}_{\varepsilon}\), and that it is convex and closed. 
**Lemma 4.2**.: _The operator \(\mathcal{A}\) is monotone, coercive and weakly continuous on the set \(\mathcal{K}_{\varepsilon}.\)_ Proof.: We split the proof into three steps. **Step 1**.: _We claim that the operator \(\mathcal{A}\) is monotone on the set \(\mathcal{K}_{\varepsilon},\) namely_ \[\mathcal{A}u(u-v)-\mathcal{A}v(u-v)\geq 0,\quad\forall u,v\in\mathcal{K}_{\varepsilon}. \tag{4.3}\] Indeed, we have \[\mathcal{A}_{1}u(u-v)-\mathcal{A}_{1}v(u-v)=\frac{C_{N,s}}{2}\int_{\Omega^{ \prime}_{\frac{\varepsilon}{2}}}\int_{\Omega^{\prime}_{\frac{\varepsilon}{2}}} \frac{|u(x)-v(x)-(u(y)-v(y))|^{2}}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\geq 0 \tag{4.4}\] and \[\begin{split}\mathcal{A}_{2}u(u-v)-\mathcal{A}_{2}v(u-v)& =C_{N,s}\int_{\mathbb{R}^{N}\setminus\Omega_{\frac{\sigma}{2}}^{ \prime}}\int_{\Omega_{\varepsilon}}\frac{|u(y)-v(y)|^{2}}{|x-y|^{N+2s}}|y|^{ \tau_{+}}dy|x|^{\tau_{+}}dx\\ &\quad+\int_{\Omega_{\varepsilon}}(g(|x|^{\tau_{+}}u)-g(|x|^{ \tau_{+}}v))(u-v)|x|^{\tau_{+}}dx\\ &\geq C_{N,s}\int_{\mathbb{R}^{N}\setminus\Omega_{\frac{\sigma}{ 2}}^{\prime}}\int_{\Omega_{\varepsilon}}\frac{|u(y)-v(y)|^{2}}{|x-y|^{N+2s}}| y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\geq 0,\end{split} \tag{4.5}\] since \(g\) is nondecreasing. Combining (4.4) and (4.5) leads to (4.3). **Step 2**.: _We claim that the operator \(\mathcal{A}\) is coercive on the set \(\mathcal{K}_{\varepsilon},\) namely, there exists \(v\in\mathcal{K}_{\varepsilon}\) such that_ \[\frac{\mathcal{A}u_{j}(u_{j}-v)-\mathcal{A}v(u_{j}-v)}{\left\|u_{j}-v\right\| _{H^{s}(\Omega_{\frac{\sigma}{2}}^{\prime})}}\rightarrow+\infty\quad\text{ as}\ \ \left\|u_{j}-v\right\|_{H^{s}(\Omega_{\frac{\sigma}{2}}^{\prime})}\rightarrow+\infty.\] _We fix a function \(v\in\mathcal{K}_{\varepsilon}.\) In view of the proof of (4.4) and (4.5), we have_ \[\begin{split}\mathcal{A}u_{j}(u_{j}-v)-\mathcal{A}v(u_{j}-v)& \geq\frac{C_{N,s}}{2}\int_{\Omega_{\frac{\sigma}{2}}^{\prime}} \int_{\Omega_{\frac{\sigma}{2}}^{\prime}}\frac{|u_{j}(x)-v(x)+v(y)-u_{j}(y)|^{ 2}}{|x-y|^{n+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\geq C(N,s,\Omega,\Omega^{\prime},\varepsilon)\left\|u_{j}-v \right\|_{H^{s}(\Omega_{\frac{\sigma}{2}}^{\prime})}^{2},\end{split}\] _where in the last inequality we have used the standard fractional Sobolev embedding since \(u_{j}-v\in W_{0}^{s,2}(\Omega_{\frac{\sigma}{2}}^{\prime}).\)_ **Step 3**.: _We claim that the operator \(\mathcal{A}\) is weakly continuous on the set \(\mathcal{K}_{\varepsilon}\). 
To this purpose, we need to show that if \(\{u_{j}\}_{j=1}^{\infty}\subset\mathcal{K}_{\varepsilon}\) such that \(u_{j}\to u\in\mathcal{K}_{\varepsilon}\) in \(H^{s}(\Omega_{\frac{\sigma}{2}}^{\prime})\) then \(\mathcal{A}u_{j}(v)-\mathcal{A}u(v)\to 0\) for any \(v\in H^{s}(\Omega_{\frac{\sigma}{2}}^{\prime}).\)_ First we note that, for any \(v\in H^{s}(\Omega_{\frac{\sigma}{2}}^{\prime}),\) \[\begin{split}&\quad|\mathcal{A}_{1}u_{j}(v)-\mathcal{A}_{1}u(v)| \\ &\leq\frac{C_{N,s}}{2}\int_{\Omega_{\frac{\sigma}{2}}^{\prime}} \int_{\Omega_{\frac{\sigma}{2}}^{\prime}}\frac{|u_{j}(x)-u(x)-(u_{j}(y)-u(y))||v (x)-v(y)|}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\leq\frac{C_{N,s}}{2}\|u_{j}-u\|_{H^{s}(\Omega_{\frac{\sigma}{2} }^{\prime})}\|v\|_{H^{s}(\Omega_{\frac{\sigma}{2}}^{\prime})}\to 0\quad\text{as $j \rightarrow+\infty.$}\end{split} \tag{4.6}\] Next we have \[\begin{split}|\mathcal{A}_{2}u_{j}(v)-\mathcal{A}_{2}u(v)|& \leq C_{N,s}\int_{\mathbb{R}^{N}\setminus\Omega_{\frac{\sigma}{2} }^{\prime}}\int_{\Omega_{\varepsilon}}\frac{|u_{j}(y)-u(y)||v(y)|}{|x-y|^{N+2 s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\quad+\int_{\Omega_{\varepsilon}}(g(|x|^{\tau_{+}}u_{j}(x))-g(|x |^{\tau_{+}}u(x)))v(x)|x|^{\tau_{+}}dx.\end{split}\] Since \(u_{j}\to u\in\mathcal{K}_{\varepsilon}\) in \(H^{s}(\Omega_{\frac{\sigma}{2}}^{\prime}),\) we have that \(u_{j}\to u\) in \(L^{2}(\Omega_{\varepsilon}).\) By the Holder inequality and the estimate (due to the fact \(\frac{2s-N}{2}\leq\tau_{+}<2s<N\)) \[\int_{\mathbb{R}^{N}\setminus\Omega_{\frac{\sigma}{2}}^{\prime}}\frac{|x|^{ \tau_{+}}}{|x-y|^{N+2s}}dx<C(N,\Omega,\Omega^{\prime},s,\mu,\varepsilon), \quad\forall y\in\Omega_{\varepsilon},\] we deduce that \[\begin{split}&\int_{\mathbb{R}^{N}\setminus\Omega^{\prime}_{\frac{ 2}{2}}}\int_{\Omega_{\varepsilon}}\frac{|u_{j}(y)-u(y)||v(y)|}{|x-y|^{N+2s}}|y|^ {\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\leq C(s,\mu,\varepsilon)\int_{\Omega_{\varepsilon}}|u_{j}(y)-u (y)||v(y)|\int_{\mathbb{R}^{N}\setminus\Omega^{\prime}_{\frac{2}{2}}}\frac{|x |^{\tau_{+}}}{|x-y|^{N+2s}}dxdy\\ &\leq C(N,\Omega,\Omega^{\prime},s,\mu,\varepsilon)\|u_{j}-u\|_{ L^{2}(\Omega_{\varepsilon})}\|v\|_{L^{2}(\Omega_{\varepsilon})}\to 0\quad\text{as $j\to\infty$}.\end{split}\] Since \(g\) is bounded and uniformly continuous in \(\mathbb{R}\), \(\varepsilon<|x|<\text{diam}(\Omega)\) for any \(x\in\Omega_{\varepsilon}\) and \(u_{\varepsilon}\to u\) in measure in \(\Omega_{\varepsilon}\), we may deduce that \[\int_{\Omega_{\varepsilon}}|g(|x|^{\tau_{+}}u_{j})-g(|x|^{\tau_{+}}u)|^{2}dx \to 0\quad\text{as $j\to+\infty$}.\] Therefore, by the Holder inequality, we obtain \[\begin{split}&\bigg{|}\int_{\Omega_{\varepsilon}}(g(|x|^{\tau_{+}} u_{j})-g(|x|^{\tau_{+}}u))v|x|^{\tau_{+}}dx\bigg{|}\\ &\leq C(s,\mu,\varepsilon,\Omega)\left(\int_{\Omega_{\varepsilon }}|g(|x|^{\tau_{+}}u_{j})-g(|x|^{\tau_{+}}u)|^{2}dx\right)^{\frac{1}{2}}\|v\|_ {L^{2}(\Omega_{\varepsilon})}\to 0\quad\text{as $j\to+\infty$}.\end{split} \tag{4.7}\] Combining (4.6)-(4.7), we conclude that \(\mathcal{A}u_{j}(v)-\mathcal{A}u(v)\to 0\) as \(j\to+\infty\). The proof is complete. **Lemma 4.3**.: _Assume \(\ell>0\) and \(g\in C(\mathbb{R})\cap L^{\infty}(\Omega)\) is a nondecreasing function such that \(g(0)=0\). 
There exists a function \(u_{\varepsilon}\in W^{s,2}(\Omega^{\prime}_{\frac{s}{2}})\) such that \(u_{\varepsilon}=\ell\Phi^{\Omega}_{s,\mu}\) in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}\) and_ \[\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{(u_{ \varepsilon}(x)-u_{\varepsilon}(y))(\phi(x)-\phi(y))}{|x-y|^{N+2s}}dydx+\mu \int_{\Omega_{\varepsilon}}\frac{u_{\varepsilon}(x)\phi(x)}{|x|^{2s}}dx+\int _{\Omega_{\varepsilon}}g(u_{\varepsilon})\phi dx=0 \tag{4.8}\] _for all \(\phi\in C^{\infty}_{0}(\Omega_{\varepsilon})\). Furthermore there holds_ \[(\ell\Phi^{\Omega}_{s,\mu}-\psi_{M})^{+}\leq u_{\varepsilon}\leq\ell\Phi^{ \Omega}_{s,\mu}\quad\text{in $\Omega_{\varepsilon}$}, \tag{4.9}\] _where \(M=\sup_{t\in\mathbb{R}}|g(t)|\), \(\psi_{M}\) is the nonnegative weak solution of_ \[\begin{cases}\mathcal{L}^{s}_{\mu}\psi=M&\text{in $\Omega$},\\ \quad\quad u=0&\text{in $\mathbb{R}^{N}\setminus\Omega$}.\end{cases} \tag{4.10}\] Proof.: By Lemma 4.2 and the standard theory of monotone operators (see, e.g., [24, Proposition 17.2]), there exists \(v_{\varepsilon}\in\mathcal{K}_{\varepsilon}\) such that \[\mathcal{A}v_{\varepsilon}(\zeta-v_{\varepsilon})\geq 0,\] for any \(\zeta\in\mathcal{K}_{\varepsilon}\). Set \(\zeta_{\pm}=\pm\tilde{\phi}+v_{\varepsilon}\) for \(\tilde{\phi}\in C^{\infty}_{0}(\Omega_{\varepsilon})\), then \(\zeta_{\pm}\in\mathcal{K}_{\varepsilon}\) and by the above inequality we can easily show that \[\begin{split} 0&=\mathcal{A}v_{\varepsilon}(\tilde{\phi})\\ &=\frac{C_{N,s}}{2}\int_{\Omega^{\prime}_{\frac{s}{2}}}\int_{ \Omega^{\prime}_{\frac{s}{2}}}\frac{(v_{\varepsilon}(x)-v_{\varepsilon}(y))( \tilde{\phi}(x)-\tilde{\phi}(y))}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx \\ &\quad+C_{N,s}\int_{\mathbb{R}^{N}\setminus\Omega^{\prime}_{ \frac{s}{2}}}\int_{\Omega_{\varepsilon}}\frac{(v_{\varepsilon}(y)-H(x))\tilde {\phi}(y)}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx+\int_{\Omega_{ \varepsilon}}g(|x|^{\tau_{+}(s,\mu)}v_{\varepsilon})\tilde{\phi}|x|^{\tau_{+ }}dx\\ &=\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}} \frac{(v_{\varepsilon}(x)-v_{\varepsilon}(y))(\tilde{\phi}(x)-\tilde{\phi}( y))}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx+\int_{\Omega_{ \varepsilon}}g(|x|^{\tau_{+}}v_{\varepsilon})\tilde{\phi}|x|^{\tau_{+}}dx. \end{split} \tag{4.11}\] Setting \(u_{\varepsilon}=|x|^{\tau_{+}}v_{\varepsilon}\) and \(\phi=|x|^{\tau_{+}}\tilde{\phi}\), we obtain (4.8). We have \[\langle H,\tilde{\phi}\rangle_{s,\tau_{+}}+\int_{\Omega_{\varepsilon}}g(|x|^{\tau_ {+}}H)\tilde{\phi}|x|^{\tau_{+}}dx\geq 0,\quad\forall\;0\leq\tilde{\phi}\in C_{0}^{ \infty}(\Omega_{\varepsilon}). \tag{4.12}\] Put \(w_{\varepsilon}=v_{\varepsilon}-H\) then from (4.11) and (4.12), we have \[\langle w_{\varepsilon},\tilde{\phi}\rangle_{s,\tau_{+}}+\int_{\Omega_{ \varepsilon}}(g(|x|^{\tau_{+}}v_{\varepsilon})-g(|x|^{\tau_{+}}H))\tilde{\phi }|x|^{\tau_{+}}dx\leq 0,\quad\forall\;0\leq\tilde{\phi}\in C_{0}^{\infty}( \Omega_{\varepsilon}). 
\tag{4.13}\] Note that \(w_{\varepsilon}^{+}\in W^{s,2}(\Omega_{\frac{\varepsilon}{2}}^{\prime})\) and \(w_{\varepsilon}^{+}=0\) in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}.\) Since \(\Omega_{\varepsilon}\) is smooth, we deduce that \(w_{\varepsilon}^{+}\in W_{0}^{s,2}(\Omega_{\varepsilon})\), hence we may use it as a test function in (4.13) and the standard density argument together with the monotonicity assumption on \(g\) to obtain that \[0\leq\langle w_{\varepsilon},w_{\varepsilon}^{+}\rangle_{s,\tau_{+}}+\int_{ \Omega_{\varepsilon}}(g(|x|^{\tau_{+}}v_{\varepsilon})-g(|x|^{\tau_{+}}H))w_ {\varepsilon}^{+}|x|^{\tau_{+}}dx\leq 0.\] This implies \(w_{\varepsilon}^{+}=0\) a.e. in \(\mathbb{R}^{N}\setminus\{0\}\), hence \(v_{\varepsilon}\leq H\) a.e. in \(\Omega_{\varepsilon}\). This implies the upper bound in (4.9). As for the lower bound in (4.9), by taking \(\tilde{\phi}=v_{\varepsilon}^{-}\in W_{0}^{s,2}(\Omega_{\varepsilon})\) in (4.11), the standard density argument and the assumption that \(g\) is nondecreasing and \(g(0)=0\), we have \[0=\langle v_{\varepsilon},v_{\varepsilon}^{-}\rangle_{s,\tau_{+}}+\int_{ \Omega_{\varepsilon}}g(|x|^{\tau_{+}}v_{\varepsilon})v_{\varepsilon}^{-}|x|^ {\tau_{+}}dx\leq 0,\] which implies \(v_{\varepsilon}^{-}=0\) a.e. in \(\mathbb{R}^{N}\setminus\{0\}\). Therefore \(v_{\varepsilon}\geq 0\) in \(\Omega_{\varepsilon}\), and hence \(u_{\varepsilon}\geq 0\) a.e. in \(\Omega_{\varepsilon}\). Since \(\psi_{M}\) is the nonnegative solution of (4.10), we have \[\mathcal{L}_{\mu}^{s}(\ell\Phi_{s,\mu}^{\Omega}-\psi_{M})+g(\ell\Phi_{s,\mu}^ {\Omega}-\psi_{M})\leq 0\] in the sense of distribution in \(\Omega_{\varepsilon}.\) By a similar argument as above, we may show that \(u_{\varepsilon}\geq\ell\Phi_{s,\mu}^{\Omega}-\psi_{M}\) a.e. in \(\Omega_{\varepsilon}\). Thus \(u\geq(\ell\Phi_{s,\mu}^{\Omega}-\psi_{M})^{+}\) a.e. in \(\Omega_{\varepsilon}\). **Definition 4.4**.: _Assume \(\ell\in\mathbb{R}\) and \(g:\mathbb{R}\to\mathbb{R}\) is a nondecreasing continuous function such that \(g(0)=0\). A function \(u\) is called a weak solution to problem_ \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g(u)=\ell\delta_{0}&\text{ in }\Omega,\\ \hskip 14.226378ptu=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases} \tag{4.14}\] _if for any \(b<2s-\tau_{+}\), \(u\in L^{1}(\Omega;|x|^{-b}),\)\(g(u)\in L^{1}(\Omega;|x|^{\tau_{+}})\) and_ \[\int_{\Omega}u(-\Delta)_{\tau_{+}}^{s}\psi dx+\int_{\Omega}g(u)\psi|x|^{\tau_{+ }}dx=\ell\int_{\Omega}\Phi_{s,\mu}^{\Omega}(-\Delta)_{\tau_{+}}^{s}\psi dx, \quad\forall\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b}), \tag{4.15}\] _where \(\mathbf{X}_{\mu}(\Omega;|x|^{-b})\) is defined in Definition 1.1._ **Theorem 4.5**.: _Assume \(\ell>0\) and \(g\in C(\mathbb{R})\cap L^{\infty}(\Omega)\) is a nondecreasing function such that \(g(0)=0\). Then there exists a unique weak solution \(u\in W^{s,2}_{\rm loc}(\Omega^{\prime}\setminus\{0\})\cap C(\Omega\setminus\{0\})\) of (4.14). The solution \(u\) satisfies_ \[(\ell\Phi_{s,\mu}^{\Omega}-\psi_{M})^{+}\leq u\leq\ell\Phi_{s,\mu}^{\Omega} \quad\text{a.e. in }\Omega\setminus\{0\} \tag{4.16}\] _and_ \[\lim_{\Omega\ni x-0}\frac{u(x)}{\Phi_{s,\mu}^{\Omega}(x)}=\ell. \tag{4.17}\] Proof.: Uniqueness. Suppose that \(u_{1},u_{2}\) are two weak solutions of (4.14). 
Then for any \(b<2s-\tau_{+}\), by Theorem 2.3, for any nonnegative \(\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\), \[\int_{\Omega}(u_{1}-u_{2})^{+}(-\Delta)_{\tau_{+}}^{s}\psi dx+\int_{\Omega}(g(u _{1})-g(u_{2}))({\rm sign}^{+}(u_{1}-u_{2}))\psi|x|^{\tau_{+}}dx\leq 0. \tag{4.18}\] Taking \(\psi=\xi_{b}\), the solution of (3.24), in (4.18) and noting that \(g\) is nondecreasing, we deduce \((u_{1}-u_{2})^{+}=0\) in \(\Omega\). This implies \(u_{1}\leq u_{2}\) a.e. in \(\Omega\). Similarly, \(u_{2}\leq u_{1}\) a.e. in \(\Omega\). Thus \(u_{1}=u_{2}\) a.e. in \(\Omega\). Existence. For \(\varepsilon>0\), let \(u_{\varepsilon}\) be the solution in Lemma 4.3 and \(\eta\in C^{\infty}(\mathbb{R})\) such that \(0\leq\eta\leq 1\), \(\eta(t)=0\) for any \(|t|\leq 1\) and \(\eta(t)=1\) for any \(|t|\geq 2\). For \(\varepsilon>0\), set \(\eta_{\varepsilon}(x)=\eta(\varepsilon^{-1}|x|)\). Consider \(\varepsilon\) small enough such that \(B_{16\varepsilon}(0)\subset\Omega.\) Using \(\eta_{\varepsilon}^{2}(x)u_{\varepsilon}(x)\) as a test function in (4.8), we obtain \[\begin{split}&\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{ \mathbb{R}^{N}}\frac{(u_{\varepsilon}(x)-u_{\varepsilon}(y))(\eta_{ \varepsilon}^{2}(x)u_{\varepsilon}(x)-\eta_{\varepsilon}^{2}(y)u_{ \varepsilon}(y))}{|x-y|^{N+2s}}dy\\ =&-\int_{\Omega_{\varepsilon}}g(u_{\varepsilon})u_ {\varepsilon}\eta_{\varepsilon}^{2}dx-\mu\int_{\Omega_{\varepsilon}}\frac{u_{ \varepsilon}^{2}(x)\eta_{\varepsilon}^{2}(x)}{|x|^{2s}}dx.\end{split} \tag{4.19}\] By (4.9), it is easy to see that \[\left|\int_{\Omega_{\varepsilon}}g(u_{\varepsilon})u_{\varepsilon}\eta_{ \varepsilon}^{2}dx\right|+\left|\mu\int_{\Omega_{\varepsilon}}\frac{u_{ \varepsilon}^{2}(x)\eta_{\varepsilon}^{2}(x)}{|x|^{2s}}dx\right|\leq C(N,s, \mu,\Omega,\varepsilon,\ell,M),\] where \(M=\sup_{t\in\mathbb{R}}|g(t)|\). 
We write \[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{(u_{\varepsilon}( x)-u_{\varepsilon}(y))(\eta_{\varepsilon}^{2}(x)u_{\varepsilon}(x)-\eta_{ \varepsilon}^{2}(y)u_{\varepsilon}(y))}{|x-y|^{N+2s}}dydx\] \[= \int_{\Omega_{\varepsilon}^{\prime}}\eta_{\varepsilon}^{2}(x) \int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\frac{|u_{\varepsilon}(x)-u_{ \varepsilon}(y)|^{2}}{|x-y|^{N+2s}}dydx\] \[+\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\int_{\Omega_{ \frac{\varepsilon}{2}}^{\prime}}u_{\varepsilon}(y)\frac{(u_{\varepsilon}(x)-u _{\varepsilon}(y))(\eta_{\varepsilon}^{2}(x)-\eta_{\varepsilon}^{2}(y))}{|x-y |^{N+2s}}dydx+2\int_{\mathbb{R}^{N}\setminus\Omega_{\frac{\varepsilon}{2}}^{ \prime}}\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\frac{\eta_{\varepsilon }^{2}(y)u_{\varepsilon}^{2}(y)}{|x-y|^{N+2s}}dydx.\] By Young's inequality, we have \[\left|\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\int_{\Omega _{\frac{\varepsilon}{2}}^{\prime}}u_{\varepsilon}(y)\frac{(u_{\varepsilon}(x)- u_{\varepsilon}(y))(\eta_{\varepsilon}^{2}(x)-\eta_{\varepsilon}^{2}(y))}{|x-y|^{N+2s}} dydx\right|\] \[\leq \frac{1}{8}\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\int_{ \Omega_{\frac{\varepsilon}{2}}^{\prime}}\frac{(u_{\varepsilon}(x)-u_{ \varepsilon}(y))^{2}(\eta_{\varepsilon}(x)+\eta_{\varepsilon}(y))^{2}}{|x-y| ^{N+2s}}dydx+8\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\int_{\Omega_{ \frac{\varepsilon}{2}}^{\prime}}u_{\varepsilon}^{2}(y)\frac{(\eta_{\varepsilon }(x)-\eta_{\varepsilon}(y))^{2}}{|x-y|^{N+2s}}dydx\] \[\leq \frac{1}{2}\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\eta_{ \varepsilon}^{2}(x)\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\frac{(u_{ \varepsilon}(x)-u_{\varepsilon}(y))^{2}}{|x-y|^{N+2s}}dydx+8\int_{\Omega_{ \frac{\varepsilon}{2}}^{\prime}}\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime} }u_{\varepsilon}^{2}(y)\frac{(\eta_{\varepsilon}(x)-\eta_{\varepsilon}(y))^{2 }}{|x-y|^{N+2s}}dydx\] and by Lemma 4.3 and the regularity of \(\eta_{\varepsilon}\), we deduce \[\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\int_{\Omega_{\frac{\varepsilon }{2}}^{\prime}}u_{\varepsilon}^{2}(y)\frac{(\eta_{\varepsilon}(x)-\eta_{ \varepsilon}(y))^{2}}{|x-y|^{N+2s}}dydx\leq C(N,\Omega,s,\mu,\varepsilon,\ell). \tag{4.20}\] Moreover, by using the fact that \(\eta_{\varepsilon}=0\) in \(B_{\varepsilon}(0)\) and \(u_{\varepsilon}=0\) in \(\mathbb{R}^{N}\setminus\Omega\), we have \[\int_{\mathbb{R}^{N}\setminus\Omega_{\frac{\varepsilon}{2}}^{\prime}}\int_{ \Omega_{\frac{\varepsilon}{2}}^{\prime}}\frac{\eta_{\varepsilon}^{2}(y)u_{ \varepsilon}^{2}(y)}{|x-y|^{N+2s}}dydx=\int_{\mathbb{R}^{N}\setminus\Omega_{ \frac{\varepsilon}{2}}^{\prime}}\int_{\Omega_{\varepsilon}}\frac{\eta_{ \varepsilon}^{2}(y)u_{\varepsilon}^{2}(y)}{|x-y|^{N+2s}}dydx\leq C(N,s,\mu, \Omega,\Omega^{\prime},\varepsilon,\ell). \tag{4.21}\] Combining (4.19)-(4.21) yields \[\int_{\Omega_{\frac{\varepsilon}{2}}^{\prime}}\eta_{\varepsilon}^{2}(x)\int_{ \Omega_{\frac{\varepsilon}{2}}^{\prime}}\frac{(u_{\varepsilon}(x)-u_{ \varepsilon}(y))^{2}}{|x-y|^{N+2s}}dydx\leq C(N,s,\mu,\Omega,\Omega^{\prime}, \varepsilon,\ell,M). 
\tag{4.22}\] Now, let us estimate the following term \[\begin{split}&\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|\eta_{ \varepsilon}(x)u_{\varepsilon}(x)-\eta_{\varepsilon}(y)u_{\varepsilon}(y)|^{2} }{|x-y|^{N+2s}}dydx\\ =&\int_{\Omega^{\prime}_{\frac{\pi}{2}}}\int_{ \Omega^{\prime}_{\frac{\pi}{2}}}\frac{|\eta_{\varepsilon}(x)u_{\varepsilon}( x)-\eta_{\varepsilon}(y)u_{\varepsilon}(y)|^{2}}{|x-y|^{N+2s}}dydx+2\int_{\mathbb{R}^{N} \setminus\Omega^{\prime}_{\frac{\pi}{2}}}\int_{\Omega^{\prime}_{\frac{\pi}{2}} }\frac{\eta^{2}_{\varepsilon}(y)u^{2}_{\varepsilon}(y)}{|x-y|^{N+2s}}dydx. \end{split} \tag{4.23}\] The first term on the right of (4.23) is bounded from above by using (4.22) and (4.20) \[\begin{split}&\int_{\Omega^{\prime}_{\frac{\pi}{2}}}\int_{ \Omega^{\prime}_{\frac{\pi}{2}}}\frac{|\eta_{\varepsilon}(x)u_{\varepsilon}( x)-\eta_{\varepsilon}(y)u_{\varepsilon}(y)|^{2}}{|x-y|^{N+2s}}dydx\\ &\leq 2\int_{\Omega^{\prime}_{\frac{\pi}{2}}}\eta^{2}_{ \varepsilon}(x)\int_{\Omega^{\prime}_{\frac{\pi}{2}}}\frac{(u_{\varepsilon}( x)-u_{\varepsilon}(y))^{2}}{|x-y|^{N+2s}}dydx+2\int_{\Omega^{\prime}_{\frac{\pi}{2}}} \int_{\Omega^{\prime}_{\frac{\pi}{2}}}u^{2}_{\varepsilon}(y)\frac{(\eta_{ \varepsilon}(x)-\eta_{\varepsilon}(y))^{2}}{|x-y|^{N+2s}}dydx\\ &\leq C(N,s,\mu,\Omega,\Omega^{\prime},\varepsilon,\ell,M).\end{split} \tag{4.24}\] Plugging (4.24) and (4.21) into (4.23) leads to \[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|\eta_{\varepsilon}(x)u_{ \varepsilon}(x)-\eta_{\varepsilon}(y)u_{\varepsilon}(y)|^{2}}{|x-y|^{N+2s}}dydx \leq C(N,s,\mu,\Omega,\Omega^{\prime},\varepsilon,\ell,M). \tag{4.25}\] By the standard fractional compact Sobolev embedding and a diagonal argument, there exists a subsequence \(\{u_{\varepsilon_{k}}\}_{k\in\mathbb{N}}\) such that \(u_{\varepsilon_{k}}\to u\) a.e. in \(\Omega\setminus\{0\}\). We find that \(u\in W^{s,2}_{\rm loc}(\mathbb{R}^{N}\setminus\{0\})\cap C(\Omega\setminus\{ 0\})\) and \(u=0\) in \(\mathbb{R}^{N}\setminus\Omega\). Moreover, for any \(\phi\in C^{\infty}_{0}(\Omega\setminus\{0\})\), there exists \(\bar{\varepsilon}>0\) such that \(\phi\in C^{\infty}_{0}(\Omega_{\bar{\varepsilon}})\). Thus, for \(2\varepsilon_{k}\leq\bar{\varepsilon}\), \(\phi\in C^{\infty}_{0}(\Omega_{\varepsilon_{k}})\). Therefore, by the dominated convergence theorem, we obtain \[\begin{split}\lim_{k\to\infty}\int_{\mathbb{R}^{N}}\int_{ \mathbb{R}^{N}}\frac{(u_{\varepsilon_{k}}(x)-u_{\varepsilon_{k}}(y))(\phi(x)- \phi(y))}{|x-y|^{N+2s}}dydx&=\int_{\mathbb{R}^{N}}\int_{\mathbb{ R}^{N}}\frac{(u(x)-u(y))(\phi(x)-\phi(y))}{|x-y|^{N+2s}}dydx,\\ \lim_{n\to\infty}\int_{\Omega_{\varepsilon_{k}}}\frac{u_{ \varepsilon_{k}}(x)\phi(x)}{|x|^{2s}}dx&=\int_{\Omega}\frac{u(x) \phi(x)}{|x|^{2s}}dx,\\ \lim_{k\to\infty}\int_{\Omega_{\varepsilon_{k}}}g(u_{\varepsilon _{k}})\phi dx&=\int_{\Omega}g(u)\phi dx.\end{split}\] Thus by passing \(k\to\infty\) in (4.8) with \(\varepsilon\) replaced by \(\varepsilon_{k}\), we obtain \[\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{(u(x)-u(y))( \phi(x)-\phi(y))}{|x-y|^{N+2s}}dydx+\mu\int_{\Omega}\frac{u(x)\phi(x)}{|x|^{2s} }dx+\int_{\Omega}g(u)\phi dx=0 \tag{4.26}\] for all \(\phi\in C^{\infty}_{0}(\Omega\setminus\{0\})\). We see that \(u\) satisfies (4.16) due to (4.9). 
Next by [9, Lemma 4.4,4.5], \(\psi_{M}\in C^{\beta}(\Omega\setminus\{0\})\) for any \(\beta\in(0,2s)\) and \[\psi_{M}(x)\leq C(M,\Omega,s,N,\mu)d(x)^{s}|x|^{\tau_{+}}\quad\forall x\in \Omega\setminus\{0\},\] which implies \[\lim_{\Omega\ni x\to 0}\frac{\psi_{M}(x)}{\Phi^{\Omega}_{s,\mu}(x)}=0. \tag{4.27}\] This and (4.16) lead to (4.17). Since \(g\in L^{\infty}(\mathbb{R})\), by Theorem 2.3, there exists a unique weak solution \(w\in L^{1}(\Omega;|x|^{-b})\) of \[\begin{cases}\mathcal{L}^{s}_{\mu}w=g(u)&\text{ in }\Omega,\\ \quad w=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases}\] namely \[\int_{\Omega}w(-\Delta)^{s}_{\tau_{+}}\psi dx=\int_{\Omega}g(u)\psi|x|^{\tau_{+}} dx,\quad\forall\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b}). \tag{4.28}\] Thus, together with (4.26), we obtain that for \(\phi\in C^{\infty}_{0}(\Omega\setminus\{0\})\) \[\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{(u(x)+w(x)-u( y)-w(y))(\phi(x)-\phi(y))}{|x-y|^{N+2s}}dydx+\mu\int_{\Omega}\frac{(u(x)+w(x)) \phi(x)}{|x|^{2s}}dx=0.\] By [9, Lemma 4.4, 4.5], \(w\in C^{\beta}(\Omega\setminus\{0\})\) for any \(\beta\in(0,2s)\) and \[w(x)\leq C(M,\Omega,s,N,\mu)d(x)^{s}|x|^{\tau_{+}},\quad\forall x\in\Omega \setminus\{0\}. \tag{4.29}\] Combining (4.17), (4.29), the definition of \(\Phi_{s,\mu}\) in (1.4) and the fact that \(\tau_{+}\geq\tau_{-}\), we derive \[\lim_{|x|\to 0}\frac{u(x)+w(x)}{\Phi_{s,\mu}(x)}=\ell.\] Hence by [13, Theorem 4.14], we have that \(u_{i}+w_{i}=\ell\Phi^{\Omega}_{s,\mu}\) a.e. in \(\Omega\setminus\{0\}\). This and (4.28) imply the desired result. _Remark 4.6_.: If \(\ell<0\) then we put \(\tilde{g}(t)=-g(-t)\) and consider the problem \[\left\{\begin{aligned} \mathcal{L}^{s}_{\mu}u+\tilde{g}(u)& =-\ell\delta_{0}&&\text{ in }\Omega,\\ u&=0&&\text{ in }\mathbb{R}^{N} \setminus\Omega.\end{aligned}\right. \tag{4.30}\] By Theorem 4.5, there exists a unique weak solution \(\tilde{u}\) of problem (4.30). Therefore, \(-\tilde{u}\) is the unique weak solution of problem (4.14). Proof of Theorem 1.4.: In view of Remark 4.6, we may assume that \(\ell>0\). The uniqueness follows from Kato inequality (2.9). Next we show the existence of a solution to problem (4.14). Let \(g_{n}=\max\{-n,\min\{g,n\}\}\) then \(g_{n}\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\). By Theorem 4.5, there exists a unique positive weak solution \(u_{n}\) of \[\left\{\begin{aligned} \mathcal{L}^{s}_{\mu}u_{n}+g_{n}(u)& =\ell\delta_{0}&&\text{ in }\Omega,\\ u&=0&&\text{ in }\mathbb{R}^{N} \setminus\Omega,\end{aligned}\right.\] namely, for any \(b<2s-\tau_{+}\), \(u_{n}\in L^{1}(\Omega;|x|^{-b})\) and \[\int_{\Omega}u_{n}(-\Delta)^{s}_{\tau_{+}}\psi dx+\int_{\Omega}g_{n}(u_{n}) \psi|x|^{\tau_{+}}dx=\ell\int_{\Omega}\Phi^{\Omega}_{s,\mu}(-\Delta)^{s}_{\tau _{+}}\psi dx,\quad\forall\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b}). \tag{4.31}\] From the above formula and (2.8), we deduce that \(u_{n+1}\leq u_{n}\) for any \(n\in\mathbb{N}\). Put \(u:=\lim\limits_{n\to\infty}u_{n}\). For any \(n\in\mathbb{N}\), by Theorem 4.5, we have \[u_{n}(x)\leq\ell\Phi^{\Omega}_{s,\mu}(x)\leq\ell|x|^{\tau_{-}},\quad\forall x \in\Omega\setminus\{0\}, \tag{4.32}\] which implies \(u(x)\leq\ell\Phi^{\Omega}_{\mu}(x)\) for all \(x\in\Omega\setminus\{0\}\). By the monotone convergence theorem, \(u_{n}\to u\) in \(L^{1}(\Omega;|x|^{-b})\). We notice that \(g_{n}(u_{n})\to g(u)\) a.e. 
in \(\Omega\setminus\{0\}\) and in view of (4.32) and assumption \(\Lambda_{g}<\infty\), \(0\leq g_{n}(u_{n})\leq g(\ell|x|^{\tau_{-}})\in L^{1}(\Omega;|x|^{\tau_{+}})\). Hence by the dominated convergence theorem, we have that \(g_{n}(u_{n})\to g(u)\) in \(L^{1}(\Omega;|x|^{\tau_{+}})\). Let \(\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\) then by [9, Lemma 4.4], \(|\psi|\leq Cd^{s}\) in \(\Omega\). By letting \(n\to\infty\) in (4.31), we conclude (4.15). Proof of Theorem 1.5.: The proof of this Theorem is similar to that of Theorem (1.4) with minor modification, hence we omit it. ## 5. Measures on \(\Omega\) ### Bounded absorption Let \(\ell\in\mathbb{R}\) and \(\Omega^{\prime}\) be an open bounded domain such that \(\Omega\Subset\Omega^{\prime}\) and \(\varepsilon\leq\frac{\min_{x\in\partial\Omega}|x|}{16}\). Recall that \(H(x)=\ell\Phi^{\Omega}_{s,\mu}(x)|x|^{-\tau_{+}}\) for \(x\neq 0\). Set \(\Omega^{\prime}_{\varepsilon}=\Omega^{\prime}\backslash B_{\varepsilon}(0)\) and \(\Omega_{\varepsilon}=\Omega\setminus B_{\varepsilon}(0)\). For any \(u,v\in W^{s,2}(\Omega^{\prime}_{\frac{\varepsilon}{2}})\), we define the operators \[\mathcal{B}u(v):=\int_{\Omega_{\varepsilon}}fv|x|^{\tau_{+}}dx\ \ \ \text{and}\ \ \ \mathcal{T}u(v):=\mathcal{A}u(v)-\mathcal{B}u(v),\] where \(g\in C(\mathbb{R})\cap L^{\infty}(\Omega)\) is a nondecreasing function such that \(g(0)=0\), \(f\in L^{\infty}(\Omega_{\varepsilon})\) and \(\mathcal{A}\) is defined in (4.1). Recall that \(M=\sup_{t\in\mathbb{R}}|g(t)|\) and \(\mathcal{K}_{\varepsilon}\) is the nonempty convex closed set of \(W^{s,2}(\Omega^{\prime}_{\frac{\varepsilon}{2}})\) defined in (4.2). Using an analogous argument as in the proof of Lemma 4.1, Lemma 4.2, we can show that \(\mathcal{T}\) is monotone, coercive and weakly continuous on \(\mathcal{K}_{\varepsilon}\). By proceeding as in the proof of Lemma 4.3, we can obtain the following result. **Lemma 5.1**.: _Assume \(\ell\geq 0\) and \(g\in C(\mathbb{R})\cap L^{\infty}(\Omega)\) is a nondecreasing function such that \(g(0)=0\) and \(0\leq f\in L^{\infty}(\Omega_{\varepsilon})\). There exists a function \(u_{\varepsilon}=u_{\varepsilon,f,\ell}\in W^{s,2}(\Omega^{\prime}_{\frac{ \varepsilon}{2}})\) such that \(u_{\varepsilon,f,\ell}=\ell\Phi^{\Omega}_{s,\mu}\) in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}\) and_ \[\begin{split}\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{ \mathbb{R}^{N}}&\frac{(u_{\varepsilon,f,\ell}(x)-u_{\varepsilon,f,\ell}(y))(\phi(x)-\phi(y))}{|x-y|^{N+2s}}dydx\\ &+\mu\int_{\Omega_{\varepsilon}}\frac{u_{\varepsilon,f,\ell}(x) \phi(x)}{|x|^{2s}}dx+\int_{\Omega_{\varepsilon}}g(u_{\varepsilon,f,\ell}) \phi dx=\int_{\Omega_{\varepsilon}}f\phi dx\end{split} \tag{5.1}\] _for all \(\phi\in C^{\infty}_{0}(\Omega_{\varepsilon})\). Furthermore there holds_ \[\max\{(\ell\Phi^{\Omega}_{s,\mu}-\psi_{M})^{+},u_{\varepsilon,f,0}\}\leq u_{ \varepsilon,f,\ell}\leq\ell\Phi^{\Omega}_{s,\mu}+u_{\varepsilon,f,0}\ \ \ \text{in}\ \Omega_{\varepsilon}, \tag{5.2}\] _where \(\psi_{M}\) is the nonnegative solution of (4.10) and \(u_{\varepsilon,f,0}\) satisfies (5.1) and \(u_{\varepsilon,f,0}=0\) in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}\)._ _Proof_. 
By the standard theory of monotone operators (see, e.g., [24, Proposition 17.2]), there exists \(v_{\varepsilon,f,\ell}\in\mathcal{K}_{\varepsilon}\) such that \[\mathcal{A}v_{\varepsilon,f,\ell}(\zeta-v_{\varepsilon,f,\ell})\geq 0,\] for any \(\zeta\in\mathcal{K}_{\varepsilon}.\) Set \(\zeta_{\pm}=\pm\tilde{\phi}+v_{\varepsilon,f,\ell}\) for \(\tilde{\phi}\in C^{\infty}_{0}(\Omega_{\varepsilon})\), then \(\zeta_{\pm}\in\mathcal{K}_{\varepsilon}\) and by the above inequality we can easily show that \[\begin{split} 0&=\mathcal{A}v_{\varepsilon,f,\ell}(\tilde{ \phi})\\ &=\frac{C_{N,s}}{2}\int_{\Omega^{\prime}_{\frac{\varepsilon}{2}}} \int_{\Omega^{\prime}_{\frac{\varepsilon}{2}}}\frac{(v_{\varepsilon,f,\ell}(x) -v_{\varepsilon,f,\ell}(y))(\tilde{\phi}(x)-\tilde{\phi}(y))}{|x-y|^{N+2s}}|y |^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\quad+C_{N,s}\int_{\mathbb{R}^{N}\setminus\Omega^{\prime}_{\frac {\varepsilon}{2}}}\int_{\Omega_{\varepsilon}}\frac{(v_{\varepsilon,f,\ell}(y) -H(x))\tilde{\phi}(y)}{|x-y|^{N+2s}}|y|^{\tau_{+}}dy|x|^{\tau_{+}}dx\\ &\quad+\int_{\Omega_{\varepsilon}}g(|x|^{\tau_{+}}v_{\varepsilon,f,\ell})\tilde{\phi}|x|^{\tau_{+}}dx-\int_{\Omega_{\varepsilon}}f\tilde{\phi} |x|^{\tau_{+}}dx\\ &=\langle v_{\varepsilon,f,\ell},\tilde{\phi}\rangle_{s,\tau_{+} }+\int_{\Omega_{\varepsilon}}g(|x|^{\tau_{+}}v_{\varepsilon,f,\ell})\tilde{ \phi}|x|^{\tau_{+}}dx-\int_{\Omega_{\varepsilon}}f\tilde{\phi}|x|^{\tau_{+}}dx. \end{split} \tag{5.3}\] Setting \(u_{\varepsilon,f,\ell}=|x|^{\tau_{+}}v_{\varepsilon,f,\ell}\) and \(\phi=|x|^{\tau_{+}}\tilde{\phi}\), we obtain (5.1). Moreover, since \(v_{\varepsilon,f,\ell}\in\mathcal{K}_{\varepsilon}\), \(v_{\varepsilon,f,\ell}=H\) a.e. in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}\), hence \(u_{\varepsilon,f,\ell}=\ell\Phi^{\Omega}_{s,\mu}\) a.e. in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}\). By taking \(\tilde{\phi}=(v_{\varepsilon,f,\ell})^{-}\in W^{s,2}_{0}(\Omega_{\varepsilon})\) in (5.3), the standard density argument and the assumption that \(g\) is nondecreasing and \(g(0)=0\), we have \[0=\langle v_{\varepsilon,f,\ell},(v_{\varepsilon,f,\ell})^{-}\rangle_{s,\tau_ {+}}+\int_{\Omega_{\varepsilon}}g(|x|^{\tau_{+}}v_{\varepsilon,f,\ell})(v_{ \varepsilon,f,\ell})^{-}|x|^{\tau_{+}}dx-\int_{\Omega_{\varepsilon}}f(v_{ \varepsilon,f,\ell})^{-}|x|^{\tau_{+}}dx\leq 0,\] which implies \((v_{\varepsilon,f,\ell})^{-}=0\) a.e. in \(\mathbb{R}^{N}\). Therefore \(v_{\varepsilon,f,\ell}\geq 0\), and hence \(u_{\varepsilon,f,\ell}\geq 0\) a.e. in \(\Omega_{\varepsilon}\). In particular, \(v_{\varepsilon,f,0}\) satisfies (5.3) with \(\ell=0\), namely \[\langle v_{\varepsilon,f,0},\tilde{\phi}\rangle_{s,\tau_{+}}+\int_{\Omega_{ \varepsilon}}g(|x|^{\tau_{+}}v_{\varepsilon,f,0})\tilde{\phi}|x|^{\tau_{+}} dx-\int_{\Omega_{\varepsilon}}f\tilde{\phi}|x|^{\tau_{+}}dx=0,\quad\forall\tilde{ \phi}\in C^{\infty}_{0}(\Omega_{\varepsilon}). \tag{5.4}\] Moreover, \(0\leq v_{\varepsilon,f,0}\in W^{2,s}(\Omega^{\prime}_{\frac{\varepsilon}{2}})\) and \(v_{\varepsilon,f,0}=0\) a.e. in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}\), which in turn implies \(v_{\varepsilon,f,0}\in W^{2,s}_{0}(\Omega_{\varepsilon})\). 
Therefore, by a density argument, we can take \(v_{\varepsilon,f,0}\) as a test function in (5.4) and use estimate \(g(t)t\geq 0\) for any \(t\in\mathbb{R}\), and by the embedding inequalities (2.19)-(2.20), we obtain \[\|v_{\varepsilon,f,0}\|^{2}_{s,\tau_{+}}\leq C(N,s,\Omega,\mu)\|f\|_{L^{ \infty}(\Omega_{\varepsilon})}.\] Put \(u_{\varepsilon,f,0}=|x|^{\tau_{+}}v_{\varepsilon,f,0}\) then \(u_{\varepsilon,f,0}=0\) a.e. in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}\), and \[\|u_{\varepsilon,f,0}\|^{2}_{\mu}\leq C(N,s,\Omega,\mu)\|f\|_{L^{\infty}( \Omega_{\varepsilon})}. \tag{5.5}\] We have \[\langle H,\tilde{\phi}\rangle_{s,\tau_{+}}=0,\quad\forall\;\tilde{\phi}\in C^ {\infty}_{0}(\Omega_{\varepsilon}). \tag{5.6}\] Therefore, \[\langle v_{\varepsilon,f,0}+H,\tilde{\phi}\rangle_{s,\tau_{+}}+\int_{\Omega_{ \varepsilon}}g(|x|^{\tau_{+}}(\tilde{v}_{\varepsilon,f,0}+H))\tilde{\phi}|x|^{ \tau_{+}}dx\geq\int_{\Omega_{\varepsilon}}f\tilde{\phi}|x|^{\tau_{+}}dx,\quad \forall 0\leq\tilde{\phi}\in C^{\infty}_{0}(\Omega_{\varepsilon}).\] Put \(w_{\varepsilon}=v_{\varepsilon,f,\ell}-v_{\varepsilon,f,0}-H\) then from (5.3) and (5.6), we have \[\langle w_{\varepsilon},\tilde{\phi}\rangle_{s,\tau_{+}}+\int_{\Omega_{ \varepsilon}}(g(|x|^{\tau_{+}}v_{\varepsilon,f,\ell})-g(|x|^{\tau_{+}}(\tilde{ v}_{\varepsilon,f,0}+H)))\tilde{\phi}|x|^{\tau_{+}}dx\leq 0,\;\forall\;0\leq\tilde{\phi}\in C ^{\infty}_{0}(\Omega_{\varepsilon}). \tag{5.7}\] Note that \(w_{\varepsilon}^{+}\in W^{s,2}(\Omega^{\prime}_{\frac{\varepsilon}{2}})\) and \(w_{\varepsilon}^{+}=0\) in \(\mathbb{R}^{N}\setminus\Omega_{\varepsilon}.\) Since \(\Omega_{\varepsilon}\) is smooth, we deduce that \(w_{\varepsilon}^{+}\in W^{s,2}_{0}(\Omega_{\varepsilon})\), hence, by the density argument, we may use it as test function in (5.7) together with the monotonicity assumption on \(g\) to obtain that \[0\leq\langle w_{\varepsilon},w_{\varepsilon}^{+}\rangle_{s,\tau_{+}}+\int_{ \Omega_{\varepsilon}}(g(|x|^{\tau_{+}}v_{\varepsilon,f,\ell})-g(|x|^{\tau_{+} }(v_{\varepsilon,f,0}+H)))w_{\varepsilon}^{+}|x|^{\tau_{+}}dx\leq 0. \tag{5.8}\] This implies \((w_{\varepsilon})^{+}=0\) a.e. in \(\mathbb{R}^{N}\), hence \(v_{\varepsilon,f,\ell}\leq H+v_{\varepsilon,f,0}\) a.e. in \(\Omega_{\varepsilon}\). This implies \[u_{\varepsilon,f,\ell}\leq\ell\Phi^{\Omega}_{s,\mu}+u_{\varepsilon,f,0}\quad \text{a.e. in }\Omega_{\varepsilon}.\] Next we show the lower bound in (5.2). Since \(M=\sup\limits_{t\in\mathbb{R}}|g(t)|\) and \(\psi_{M}\) is the nonnegative solution of (4.10), we have \[\mathcal{L}^{s}_{\mu}(\ell\Phi^{\Omega}_{s,\mu}-\psi_{M})+g(\ell\Phi^{\Omega}_{ s,\mu}-\psi_{M})\leq 0\] in the sense of distribution in \(\Omega_{\varepsilon}.\) By a similar argument as above, we may show that \(u_{\varepsilon,f,\ell}\geq\ell\Phi^{\Omega}_{s,\mu}-\psi_{M}\) a.e. in \(\Omega_{\varepsilon}\). Thus \(u_{\varepsilon,f,\ell}\geq(\ell\Phi^{\Omega}_{s,\mu}-\psi_{M})^{+}\) a.e. in \(\Omega_{\varepsilon}\). From (5.3) and (5.4), we see that \[\langle v_{\varepsilon,f,0}-v_{\varepsilon,f,\ell},\tilde{\phi}\rangle_{s,\tau _{+}}+\int_{\Omega_{\varepsilon}}(g(|x|^{\tau_{+}}v_{\varepsilon,f,0})-g(|x|^{ \tau_{+}}v_{\varepsilon,f,\ell}))\tilde{\phi}|x|^{\tau_{+}}dx=0,\quad\forall \tilde{\phi}\in C^{\infty}_{0}(\Omega_{\varepsilon}). \tag{5.9}\] We note that \((v_{\varepsilon,f,0}-v_{\varepsilon,f,\ell})^{+}\in W^{s,2}_{0}(\Omega_{ \varepsilon})\). 
By density argument, we can take \((v_{\varepsilon,f,0}-v_{\varepsilon,f,\ell})^{+}\) as a test function in (5.9) to deduce \((v_{\varepsilon,f,0}-v_{\varepsilon,f,\ell})^{+}=0\) in \(\Omega_{\varepsilon}\). Therefore \(v_{\varepsilon,f,0}\leq v_{\varepsilon,f,\ell}\) a.e. in \(\Omega_{\varepsilon}\). The proof is complete. **Theorem 5.2**.: _Assume \(\ell>0\) and \(g\in C(\mathbb{R})\cap L^{\infty}(\Omega)\) is a nondecreasing function such that \(g(0)=0\) and \(\nu\in\mathfrak{M}^{+}(\Omega\setminus\{0\};|x|^{\tau_{+}})\). Then there exists a unique weak solution \(u_{\nu,\ell}\in W^{s,2}_{\rm loc}(\Omega^{\prime}\setminus\{0\})\cap C(\Omega \setminus\{0\})\) of (1.1). The solution \(u\) satisfies_ \[\max\{(\ell\Phi^{\Omega}_{s,\mu}-\psi_{M})^{+},u_{\nu,0}\}\leq u_{\nu,\ell} \leq u_{\nu,0}+\ell\Phi^{\Omega}_{s,\mu}\quad\text{a.e. in }\;\Omega\setminus\{0\}, \tag{5.10}\] _where \(u_{\nu,0}\) is the weak solution to_ \[\begin{cases}\mathcal{L}^{s}_{\mu}u+g(u)=\nu&\text{in }\Omega,\\ \quad\quad\quad\quad u=0&\text{in }\mathbb{R}^{N}\setminus\Omega.\end{cases}\] Proof.: Uniqueness. Suppose that \(u_{1},u_{2}\) are two weak solutions of (1.1). Then by Theorem 2.3, \[\int_{\Omega}(u_{1}-u_{2})^{+}(-\Delta)^{s}_{\tau_{+}}\psi dx+\int_{\Omega}(g( u_{1})-g(u_{2}))\psi|x|^{\tau_{+}}dx\leq 0,\quad\forall\,0\leq\psi\in\mathbf{X}_{ \mu}(\Omega;|x|^{-b}). \tag{5.11}\] Taking \(\psi=\xi_{b}\), the solution of (3.24), in (5.11) and noting that \(g\) is nondecreasing, we deduce \((u_{1}-u_{2})^{+}=0\) in \(\Omega\). This implies \(u_{1}\leq u_{2}\) a.e. in \(\Omega\). Similarly, \(u_{2}\leq u_{1}\) a.e. in \(\Omega\). Thus \(u_{1}=u_{2}\) a.e. in \(\Omega\). Existence. The proof of the existence is divided into three steps. **Step 1.** First we consider the case \(\nu=f\in L^{\infty}(\Omega)\). For \(\varepsilon>0\), let \(u_{\varepsilon,f,\ell}\) be the solution in Lemma 5.1 and \(\eta\in C^{\infty}(\mathbb{R})\) such that \(0\leq\eta\leq 1\), \(\eta(t)=0\) for any \(|t|\leq 1\) and \(\eta(t)=1\) for any \(|t|\geq 2\). For \(\varepsilon>0\), set \(\eta_{\varepsilon}(x)=\eta(\varepsilon^{-1}|x|)\). 
Consider \(\varepsilon\) small enough such that \(B_{16\varepsilon}(0)\subset\Omega.\) Using \(\eta_{\varepsilon}^{2}(x)u_{\varepsilon}(x)\) as a test function in (5.1), we obtain \[\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{ (u_{\varepsilon,f,\ell}(x)-u_{\varepsilon,f,\ell}(y))(\eta_{\varepsilon}^{2} (x)u_{\varepsilon,f,\ell}(x)-\eta_{\varepsilon}^{2}(y)u_{\varepsilon,f,\ell} (y))}{|x-y|^{N+2s}}dy\] \[=-\int_{\Omega_{\varepsilon}}g(u_{\varepsilon,f,\ell})u_{ \varepsilon,f,\ell}\eta_{\varepsilon}^{2}dx-\mu\int_{\Omega_{\varepsilon}} \frac{u_{\varepsilon,f,\ell}^{2}(x)\eta_{\varepsilon}^{2}(x)}{|x|^{2s}}dx+\int _{\Omega_{\varepsilon}}fu_{\varepsilon,f,\ell}\eta_{\varepsilon}^{2}dx.\] By Lemma 5.1, it is easy to see that \[\Big{|}\int_{\Omega_{\varepsilon}}g(u_{\varepsilon,f,\ell})u_{ \varepsilon,f,\ell}\eta_{\varepsilon}^{2}dx\Big{|}+\Big{|}\mu\int_{\Omega_{ \varepsilon}}\frac{u_{\varepsilon,f,\ell}^{2}(x)\eta_{\varepsilon}^{2}(x)}{|x |^{2s}}dx\Big{|}+\Big{|}\int_{\Omega_{\varepsilon}}fu_{\varepsilon,f,\ell}\eta_ {\varepsilon}^{2}dx\Big{|}\] \[\leq C(N,s,\mu,\Omega,\varepsilon,\ell,M,\|f\|_{L^{\infty}(\Omega )}).\] By (4.25), we have that \[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|\eta_{\varepsilon}(x)u_{ \varepsilon,f,\ell}(x)-\eta_{\varepsilon}(y)u_{\varepsilon,f,\ell}(y)|^{2}}{|x -y|^{N+2s}}dydx\leq C(N,s,\mu,\Omega,\Omega^{\prime},\varepsilon,\ell,M,\|f\|_{L ^{\infty}(\Omega)}).\] By the standard fractional compact Sobolev embedding and a diagonal argument, there exists a subsequence \(\{u_{\varepsilon_{n},f,\ell}\}_{n\in\mathbb{N}}\) such that \(u_{\varepsilon_{n},f,\ell}\to u_{f,\ell}\) a.e. in \(\Omega\setminus\{0\}\) as \(\varepsilon_{n}\to 0\). We find that \(u_{f,\ell}\in W^{s,2}_{\rm loc}(\mathbb{R}^{N}\setminus\{0\})\cap C(\Omega \setminus\{0\})\) and \(u=0\) in \(\mathbb{R}^{N}\setminus\Omega\). Moreover, for any \(\phi\in C^{\infty}_{0}(\Omega\setminus\{0\})\), there exists \(\bar{\varepsilon}>0\) such that \(\phi\in C^{\infty}_{0}(\Omega_{\bar{\varepsilon}})\). Thus, for \(2\varepsilon_{n}\leq\bar{\varepsilon}\), \(\phi\in C^{\infty}_{0}(\Omega_{\varepsilon_{n}})\). 
Therefore, by dominated convergence theorem, we obtain \[\lim_{n\to\infty}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{(u_{ \varepsilon_{n},f,\ell}(x)-u_{\varepsilon_{n},f,\ell}(y))(\phi(x)-\phi(y))}{|x-y |^{N+2s}}dydx\] \[\qquad=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{(u_{f,\ell }(x)-u_{f,\ell}(y))(\phi(x)-\phi(y))}{|x-y|^{N+2s}}dydx,\] \[\lim_{n\to\infty}\int_{\Omega_{\varepsilon_{n}}}\frac{u_{ \varepsilon_{n},f,\ell}(x)\phi(x)}{|x|^{2s}}dx=\int_{\Omega}\frac{u_{f,\ell}(x )\phi(x)}{|x|^{2s}}dx,\] \[\lim_{n\to\infty}\int_{\Omega_{\varepsilon_{n}}}g(u_{\varepsilon _{n},f,\ell})\phi dx=\int_{\Omega}g(u_{f,\ell})\phi dx,\] \[\lim_{n\to\infty}\int_{\Omega_{\varepsilon_{n}}}f\phi dx=\int_{ \Omega}f\phi dx.\] Thus by passing \(n\to\infty\) in (5.1) with \(\varepsilon\) replaced by \(\varepsilon_{n}\), we obtain \[\begin{split}\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R }^{N}}\frac{(u_{f,\ell}(x)-u_{f,\ell}(y))(\phi(x)-\phi(y))}{|x-y|^{N+2s}}dydx \\ +\mu\int_{\Omega}\frac{u_{f,\ell}(x)\phi(x)}{|x|^{2s}}dx+\int_{ \Omega}g(u_{f,\ell})\phi dx=\int_{\Omega}f\phi dx,\quad\forall\phi\in C_{0}^{ \infty}(\Omega\setminus\{0\}).\end{split} \tag{5.12}\] Employing estimate (5.2) with \(\varepsilon\) replaced by \(\varepsilon_{n}\) and letting \(\varepsilon_{n}\to 0\), we find that \[\max\{(\ell\Phi_{s,\mu}^{\Omega}-\psi_{M})^{+},u_{f,0}\}\leq u_{f,\ell}\leq u_ {f,0}+\ell\Phi_{s,\mu}^{\Omega}\quad\text{a.e. in }\;\Omega\setminus\{0\}. \tag{5.13}\] Next, by (5.5), \(u_{f,0}\in\mathbf{H}_{\mu,0}^{s}(\Omega)\) and by [9, Lemma 4.4,4.5], we deduce that \(u_{f,0}\in C^{\beta}(\Omega\setminus\{0\})\) for any \(\beta\in(0,2s)\) and \[u_{f,0}(x)\leq C(N,\Omega,s,\mu)d(x)^{s}|x|^{\tau_{+}}\|f\|_{L^{\infty}(\Omega )}\quad\forall x\in\Omega\setminus\{0\},\] which implies \[\lim_{|x|\to 0}\frac{u_{f,0}(x)}{\Phi_{\mu}^{\Omega}(x)}=0. \tag{5.14}\] Combining (5.13), (4.27) and (5.14) lead to \[\lim_{|x|\to 0}\frac{u_{f,\ell}(x)}{\Phi_{\mu}(x)}=\ell.\] Since \(g\in L^{\infty}(\mathbb{R})\), by Theorem 2.3, there exists a unique weak solution \(w_{f,\ell}\) of \[\begin{cases}\mathcal{L}_{\mu}^{s}w=g(u_{f,\ell})+f&\text{in}\;\Omega,\\ w=0&\text{in}\;\mathbb{R}^{N}\setminus\Omega,\end{cases}\] namely for any \(b<2s-\tau_{+}\), \(w_{f,\ell}\in L^{1}(\Omega;|x|^{-b})\) and \[\int_{\Omega}w_{f,\ell}(-\Delta)^{s}_{\tau_{+}}\psi dx=\int_{\Omega}g(u_{f, \ell})\psi|x|^{\tau_{+}}dx+\int_{\Omega}f\psi|x|^{\tau_{+}}dx,\quad\forall \psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b}). \tag{5.15}\] This and (5.12) imply \[\frac{C_{N,s}}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac {(u_{f,\ell}(x)+w_{f,\ell}(x)-u_{f,\ell}(y)-w_{f,\ell}(y))(\phi(x)-\phi(y))}{|x- y|^{N+2s}}dydx\] \[\qquad+\mu\int_{\Omega}\frac{(u_{f,\ell}(x)+w_{f,\ell}(x))\phi(x) }{|x|^{2s}}dx=0,\quad\forall\phi\in C_{0}^{\infty}(\Omega\setminus\{0\}).\] Since \(g\in L^{\infty}(\mathbb{R})\) and \(f\in L^{\infty}(\Omega)\), in view of the proof of Theorem 2.3, \(w_{f,\ell}\in\mathbf{H}_{\mu,0}^{s}(\Omega)\). It follows by [9, Lemma 4.4,4.5] that \(w_{f,\ell}\in C^{\beta}(\Omega\setminus\{0\})\) for any \(\beta\in(0,2s)\) and \[w_{f,\ell}(x)\leq C(M,\Omega,s,N,\mu)d(x)^{s}|x|^{\tau_{+}},\quad\forall x\in \Omega\setminus\{0\}. \tag{5.16}\] Combining (4.17), (5.16), the definition of \(\Phi_{s,\mu}\) in (1.4) and the fact that \(\tau_{+}\geq\tau_{-}\), we derive \[\lim_{|x|\to 0}\frac{u_{f,\ell}(x)+w_{f,\ell}(x)}{\Phi_{s,\mu}(x)}=\ell.\] Hence by [13, Theorem 4.14], we have that \(u_{f,\ell}+w_{f,\ell}=\ell\Phi_{s,\mu}^{\Omega}\) a.e. in \(\Omega\setminus\{0\}\). 
Plugging it into (5.15) leads to (1.10), namely \(u_{f,\ell}\) is a weak solution of (1.1). **Step 2.** We assume that \(\nu\in\mathfrak{M}^{+}(\Omega\setminus\{0\};|x|^{\tau_{+}})\) has compact support in \(\Omega\setminus\{0\}\). Let \(\{\zeta_{\delta}\}\) be the sequence of standard mollifiers. Put \(\nu_{\delta}=\zeta_{\delta}*\nu\) then \(0\leq\nu_{\delta}\in C_{0}^{\infty}(D)\) where \(D\Subset\Omega\setminus\{0\}\), then (3.16) and (3.17) hold. Let \(u_{\nu_{\delta},\ell}\) be the unique weak solution of (1.1) with \(\nu\) replaced by \(\nu_{\delta}\), namely for any \(b<2s-\tau_{+}\), \(u_{\nu_{\delta},\ell}\in L^{1}(\Omega;|x|^{-b})\), \(g(u_{\nu_{\delta},\ell})\in L^{1}(\Omega;|x|^{\tau_{+}})\) and \[\int_{\Omega}u_{\nu_{\delta},\ell}(-\Delta)^{s}_{\tau_{+}}\psi dx+\int_{\Omega }g(u_{\nu_{\delta},\ell})\psi|x|^{\tau_{+}}dx=\int_{\Omega\setminus\{0\}} \psi|x|^{\tau_{+}}\nu_{\delta}dx+\ell\int_{\Omega}\Phi_{s,\mu}^{\Omega}(- \Delta)^{s}_{\tau_{+}}\psi dx, \tag{5.17}\] for any \(\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\). Since \(g\in L^{\infty}(\mathbb{R})\), by Theorem 2.3, there exists a unique weak solution \(w_{\nu_{\delta},\ell}\) of \[\begin{cases}\mathcal{L}_{\mu}^{s}w=g(u_{\nu_{\delta},\ell})+\nu_{\delta}& \text{ in }\Omega,\\ w=0&\text{ in }\mathbb{R}^{N}\setminus\Omega.\end{cases}\] As in step 1, \(u_{\nu_{\delta},\ell}+w_{\nu_{\delta},\ell}=\ell\Phi_{s,\mu}^{\Omega}\) in \(\Omega\setminus\{0\}\) for any \(n\in\mathbb{N}\). By proceeding analogously as in the proof of Theorem 1.3 and employing (3.17) and the fact that \(g\in L^{\infty}(\mathbb{R})\), we may show that there exists \(w_{\nu,\ell}\) such that, up to a subsequence, \(w_{\nu_{\delta},\ell}\to w_{\nu,\ell}\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\) as \(\delta\to 0\) for any \(b<2s-\tau_{+}\). Put \(u_{\nu,\ell}=\ell\Phi_{s,\mu}^{\Omega}-w_{\nu,\ell}\) then \(u_{\nu_{\delta},\ell}\to u_{\nu,\ell}\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\) as \(\delta\to 0\) for any \(b<2s-\tau_{+}\). Since \(g\in L^{\infty}(\Omega)\cap C(\mathbb{R})\), by the dominated convergence theorem, \(g(u_{\nu_{\delta},\ell})\to g(u_{\nu,\ell})\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(\delta\to 0\). Therefore, letting \(\delta\to 0\) in (5.17) leads to \[\int_{\Omega}u_{\nu,\ell}(-\Delta)^{s}_{\tau_{+}}\psi dx+\int_{\Omega}g(u_{\nu,\ell})\psi|x|^{\tau_{+}}dx=\int_{\Omega\setminus\{0\}}\psi|x|^{\tau_{+}}\nu dx +\ell\int_{\Omega}\Phi_{s,\mu}^{\Omega}(-\Delta)^{s}_{\tau_{+}}\psi dx\] for any \(\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\). It means \(u_{\nu,\ell}\) is a weak solution of (1.1). **Step 3.** We consider \(\nu\in\mathfrak{M}^{+}(\Omega\setminus\{0\};|x|^{\tau_{+}})\). Put \(\nu_{r}=\mathbf{1}_{\Omega\setminus B_{r}(0)}\nu\) and \(\nu_{r,\delta}=\zeta_{\delta}*(\mathbf{1}_{\Omega\setminus B_{r}(0)}\nu)\). Denote by \(u_{\nu_{r},\ell}\) and \(u_{\nu_{r,\delta},\ell}\) the nonnegative weak solutions of (1.1) with \(\nu\) replaced by \(\nu_{r}\) and by \(\nu_{r,\delta}\) respectively. By step 2, \(u_{\nu_{r,\delta},\ell}\to u_{\nu_{r},\ell}\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\) as \(\delta\to 0\) for any \(b<2s-\tau_{+}\). Since \(\nu_{r}\geq\nu_{r^{\prime}}\geq 0\) for \(0<r\leq r^{\prime}\), it follows that \(\nu_{r,\delta}\geq\nu_{r^{\prime},\delta}\). 
By (2.8) and the monotonicity of \(g\), we deduce that \(u_{\nu_{r,\delta},\ell}\geq u_{\nu_{r^{\prime},\delta},\ell}\geq 0\) for any \(0<r<r^{\prime}\) and \(\delta>0\). Letting \(\delta\to 0\) yields \(u_{\nu_{r},\ell}\geq u_{\nu_{r^{\prime}},\ell}\geq 0\) for any \(0<r<r^{\prime}\). Employing the monotonicity convergence theorem, we derive that \(u_{\nu_{r},\ell}\uparrow u_{\nu,\ell}\) a.e. in \(\Omega\) and in \(L^{1}(\Omega;|x|^{-b})\) as \(r\to 0\) for any \(b<2s-\tau_{+}\). Consequently, \(g(u_{\nu_{r},\ell})\uparrow g(u_{\nu,\ell})\) a.e. in \(\Omega\) and in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(r\to 0\). Passing to the limit, we conclude that \(u_{\nu,\ell}\) is a weak solution to (1.1). ### Unbounded absorption Proof of Theorem 1.6.: The uniqueness follows from Kato inequality (2.9). Next we show the existence of a solution to problem (1.1). Let \(g_{n}=\max\{-n,\min\{g,n\}\}\) then \(g_{n}\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\). For \(r>0\) and \(0<\delta<\frac{r}{4}\), put \(\nu_{r}=\mathbf{1}_{\Omega\setminus B_{r}(0)}\nu\) and \(\nu_{r,\delta}=\zeta_{\delta}*(\mathbf{1}_{\Omega\setminus B_{r}(0)}\nu)\). By Theorem 5.2, there exists a unique positive weak solution \(u_{\nu_{r,\delta},\ell,n}\) of \[\begin{cases}\mathcal{L}_{\mu}^{s}u+g_{n}(u)=\nu_{r,\delta}+\ell\delta_{0}& \text{ in }\Omega,\\ \hskip 14.226378ptu=0&\text{ in }\mathbb{R}^{N}\setminus\Omega,\end{cases}\] namely, for any \(b<2s-\tau_{+}\), \(u_{\nu_{r,\delta},\ell,n}\in L^{1}(\Omega;|x|^{-b})\) and \[\int_{\Omega}u_{\nu_{r,\delta},\ell,n}(-\Delta)_{\tau_{+}}^{s}\psi dx+\int_{ \Omega}g_{n}(u_{\nu_{r,\delta},\ell,n})\psi|x|^{\tau_{+}}dx=\int_{\Omega \setminus\{0\}}\psi|x|^{\tau_{+}}\nu_{r,\delta}dx+\ell\int_{\Omega}\Phi_{s, \mu}^{\Omega}(-\Delta)_{\tau_{+}}^{s}\psi dx \tag{5.18}\] for all \(\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\). From the above formula and (2.8), we deduce that \(u_{\nu_{r,\delta},\ell,n+1}\leq u_{\nu_{r,\delta},\ell,n}\) for any \(n\in\mathbb{N}\). Put \(u_{\nu_{r,\delta},\ell}:=\lim_{n\to\infty}u_{\nu_{r,\delta},\ell,n}\). For any \(n\in\mathbb{N}\), by Theorem 5.2, we have \[\max\{(\ell\Phi_{s,\mu}^{\Omega}-\psi_{M})^{+},u_{\nu_{r,\delta},0,n}\}\leq u_ {\nu_{r,\delta},\ell,n}\leq u_{\nu_{r,\delta},0,n}+\ell\Phi_{s,\mu}^{\Omega} \quad\text{a.e. in }\Omega\setminus\{0\}, \tag{5.19}\] which implies (5.10). By Theorem 3.5, \(u_{\nu_{r,\delta},0,n}\leq v_{\nu_{r,\delta}}\) a.e. in \(\Omega\setminus\{0\}\), where \(v_{\nu_{r,\delta}}\) is the unique solution of \[\begin{cases}\mathcal{L}_{\mu}^{s}v=\nu_{r,\delta}&\text{ in }\Omega,\\ \hskip 14.226378ptv=0&\text{ in }\mathbb{R}^{N}\setminus\Omega.\end{cases}\] The existence and uniqueness of \(v_{\nu_{r,\varepsilon}}\) is guaranteed by Theorem 2.3. Moreover \(v_{\nu_{r},\delta}\geq 0\) and for any \(b<2s-\tau_{+}\), \[\|v_{\nu_{r,\delta}}\|_{L^{1}(\Omega;|x|^{-b})}\leq C(N,\Omega,s,\mu,b)\|\nu_ {r,\delta}\|_{L^{1}(\Omega;|x|^{\tau_{+}})}\leq C(N,\Omega,s,\mu,b,r)\|\nu\|_ {\mathfrak{M}(\Omega\setminus\{0\};|x|^{\tau_{+}})}.\] We also note that \(\Phi_{s,\mu}^{\Omega}\in L^{1}(\Omega;|x|^{-b})\). Therefore, by the monotone convergence theorem, \(u_{\nu_{r,\delta},\ell,n}\to u_{\nu_{r,\delta},\ell}\) in \(L^{1}(\Omega;|x|^{-b})\) as \(n\to\infty\). 
By Lemma 3.1, \[\|v_{\nu_{r,\delta}}\|_{L^{\frac{N}{N-2s}}_{w}(\Omega\setminus\{0\};|x|^{\tau _{+}})}\leq C(N,\Omega,s,\mu,r)\|\nu_{r,\delta}\|_{L^{1}(\Omega\setminus\{0\} ;|x|^{\tau_{+}})},\] which implies \[\|u_{\nu_{r,\delta},0,n}\|_{L^{\frac{N}{N-2s}}_{w}(\Omega\setminus\{0\};|x|^{ \tau_{+}})}\leq C(N,\Omega,s,\mu,r)\|\nu_{r,\delta}\|_{L^{1}(\Omega\setminus\{ 0\};|x|^{\tau_{+}})},\quad\forall n\in\mathbb{N}. \tag{5.20}\] We can also check that \[\|\Phi_{s,\mu}^{\Omega}\|_{L^{\frac{N+\tau_{+}}{w-}}_{w-}(\Omega\setminus\{0 \};|x|^{\tau_{+}})}\leq C(N,s,\mu). \tag{5.21}\] Combining (5.19), (5.20) and (5.21) yields \[\|u_{\nu_{r,\delta},\ell,n}\|_{L^{p_{r,\mu}^{\tau_{+}}}_{w}(\Omega\setminus\{0 \};|x|^{\tau_{+}})}\leq C(N,\Omega,s,\mu,r)\|\nu_{r,\delta}\|_{L^{1}(\Omega \setminus\{0\};|x|^{\tau_{+}})}+C(N,s,\mu,\ell),\quad\forall n\in\mathbb{N}.\] Then Lemma 3.7 ensures that the sequence \(\{g_{n}(u_{\nu_{r,\delta},\ell,n})\}\) is uniformly bounded and equi-integrable in \(L^{1}(\Omega;|x|^{\tau_{+}})\). On the other hand, we derive that \(g_{n}(u_{\nu_{r,\delta},\ell,n})\to g(u_{\nu_{r,\delta},\ell})\) a.e. in \(\Omega\setminus\{0\}\) as \(n\to\infty\). By the Vitali convergence theorem, we conclude that \(g_{n}(u_{\nu_{r,\delta},\ell,n})\to g(u_{\nu_{r,\delta},\ell})\) in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(n\to\infty\). Therefore, by sending \(n\to\infty\) in (5.18), we derive \[\int_{\Omega}u_{\nu_{r,\delta},\ell}(-\Delta)_{\tau_{+}}^{s}\psi dx+\int_{ \Omega}g(u_{\nu_{r,\delta},\ell})\psi|x|^{\tau_{+}}dx=\int_{\Omega\setminus\{0 \}}\psi|x|^{\tau_{+}}\nu_{r,\delta}dx+\ell\int_{\Omega}\Phi_{s,\mu}^{\Omega}(- \Delta)_{\tau_{+}}^{s}\psi dx \tag{5.22}\] for all \(\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\). By a similar argument as above, we can show that \(u_{\nu_{r},\delta,\ell}\to u_{\nu_{r},\ell}\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{-b})\) for any \(b<2s-\tau_{+}\) and \(g(u_{\nu_{r},\delta,\ell})\to g(u_{\nu_{r},\ell})\) a.e. in \(\Omega\setminus\{0\}\) and in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(\delta\to 0\). By letting \(\delta\to 0\) in (5.22), we obtain \[\int_{\Omega}u_{\nu_{r},\ell}(-\Delta)^{s}_{\tau_{+}}\psi dx+\int_{\Omega}g(u_ {\nu_{r},\ell})\psi|x|^{\tau_{+}}dx=\int_{\Omega\setminus\{0\}}\psi|x|^{\tau_{ +}}d\nu_{r}+\ell\int_{\Omega}\Phi^{\Omega}_{s,\mu}(-\Delta)^{s}_{\tau_{+}}\psi dx \tag{5.23}\] for all \(\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\). Next we see that \(\nu_{r^{\prime},\delta}\geq\nu_{r,\delta}\) for any \(0<r<r^{\prime}\). By the Kato type inequality (2.8) and the monotonicity of \(g\), we deduce that \(u_{\nu_{r^{\prime}},\delta},\ell\leq u_{\nu_{r},\delta,\ell}\). It follows that \(u_{\nu_{r^{\prime}},\ell}\leq u_{\nu_{r},\ell}\) for any \(0<r<r^{\prime}\). Put \(u_{\nu,\ell}=\lim_{r\to 0}u_{\nu_{r},\ell}\). Next by taking \(\psi=\xi_{b}\), the solution of problem (3.24), we deduce that for any \(b<2s-\tau_{+}\), \[\|u_{\nu_{r},\ell}\|_{L^{1}(\Omega;|x|^{-b})}+\|g(u_{\nu_{r},\ell})\|_{L^{1}( \Omega;|x|^{\tau_{+}})}\leq\|\nu\|_{\mathfrak{R}(\Omega\setminus\{0\};|x|^{ \tau_{+}})}+C(N,s,\mu,\ell).\] Therefore, from the monotone convergence theorem and the monotonicity of \(g\), we deduce that \(u_{\nu_{r},\ell}\to u_{\nu,\ell}\) in \(L^{1}(\Omega;|x|^{-b})\) and \(g(u_{\nu_{r},\ell})\to g(u_{\nu,\ell})\) in \(L^{1}(\Omega;|x|^{\tau_{+}})\) as \(r\to 0\). 
Thus, letting \(r\to 0\) in (5.23), we obtain \[\int_{\Omega}u_{\nu,\ell}(-\Delta)^{s}_{\tau_{+}}\psi dx+\int_{\Omega}g(u_{\nu,\ell})\psi|x|^{\tau_{+}}dx=\int_{\Omega\setminus\{0\}}\psi|x|^{\tau_{+}}d\nu+\ell\int_{\Omega}\Phi^{\Omega}_{s,\mu}(-\Delta)^{s}_{\tau_{+}}\psi dx\] for all \(\psi\in\mathbf{X}_{\mu}(\Omega;|x|^{-b})\). It means \(u_{\nu,\ell}\) is a weak solution to problem (1.1). Proof of Theorem 1.7.: The proof of this theorem can be carried out similarly to that of Theorem 1.6.
2304.06171
Potential Major Improvement in Superconductors for High-Field Magnets
Fusion reactors are limited by the magnetic field available to confine their plasma. The commercial fusion industry uses the larger magnetic field and higher operating temperature of the cuprate superconductor $\mathbf{YBa_{2}Cu_{3}O_{7-\delta}}$ (YBCO) in order to confine their plasma into a dense volume. A superconductor is a macroscopic quantum state that is protected from the metallic (resistive) state by an energy gap. Unfortunately, YBCO has an anisotropic gap, known as D-wave because it has the shape of a $\mathbf{d_{x^2-y^2}}$ chemical orbital. This D-wave gap means that poly-crystalline wire cannot be made because a few degree misalignment between grains in the wire leads to a drastic loss in its supercurrent carrying ability, and thereby its magnetic field limit. The superconductor industry has responded by growing nearly-single-crystal superconducting YBCO films on carefully prepared substrate tapes kilometers in length. Heroic development programs have made such tapes commercially available, but they are very expensive and delicate. MRI magnet superconductors, such as $\mathbf{NbTi}$ and $\mathbf{Nb_{3}Sn}$, are formed into poly-crystalline wires because they have an isotropic gap in the shape of an s chemical orbital (called S-wave) that makes them insensitive to grain misalignment. However, these materials are limited to lower magnetic fields and liquid-He temperatures. Here, we modified YBCO by doping the Y site with Ca and Ce atoms to form $\mathbf{(Y_{1-x-y}Ca_{x}Ce_{y})Ba_{2}Cu_{3}O_{7-\delta}}$, and show evidence that it changes to an S-wave gap. Its superconducting transition temperature, $\mathbf{T_c}$, of $\mathbf{\sim 70K}$, while lower than that of D-wave YBCO at $\mathbf{\sim 90K}$, is easily maintained using common, economic cryogenic equipment.
Jamil Tahir-Kheli, Tomas Hlasek, Michal Lojka, Michael S. Osofsky, Carver A. Mead
2023-04-12T22:07:15Z
http://arxiv.org/abs/2304.06171v1
# Potential Major Improvement in Superconductors for High-Field Magnets ###### Abstract **Fusion reactors are limited by the magnetic field available to confine their plasma. The commercial fusion industry uses the larger magnetic field and higher operating temperature of the cuprate superconductor YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\) (YBCO) in order to confine their plasma into a dense volume. A superconductor is a macroscopic quantum state that is protected from the metallic (resistive) state by an energy gap. Unfortunately, YBCO has an anisotropic gap, known as D-wave because it has the shape of a \(\mathrm{d_{x^{2}-y^{2}}}\) chemical orbital. This D-wave gap means that poly-crystalline wire cannot be made because a few degree misalignment between grains in the wire leads to a drastic loss in its supercurrent carrying ability, and thereby its magnetic field limit. The superconductor industry has responded by growing nearly-single-crystal superconducting YBCO films on carefully prepared substrate tapes kilometers in length. Heroic development programs have made such tapes commercially available, but they are very expensive and delicate. MRI magnet superconductors, such as NbTi and Nb\({}_{3}\)Sn, are formed into poly-crystalline wires because they have an isotropic gap in the shape of an s chemical orbital (called S-wave) that makes them insensitive to grain misalignment. However, these materials are limited to lower magnetic fields and liquid-He temperatures. Here, we modified YBCO by doping the Y site with Ca and Ce atoms to form (Y\({}_{1-x-y}\)Ca\({}_{x}\)Ce\({}_{y}\))Ba\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\), and show evidence that it changes to an S-wave gap. Its superconducting transition temperature, T\({}_{\rm c}\), of \(\sim 70\)K, while lower than that of D-wave YBCO at \(\sim 90\)K, is easily maintained using common, economic cryogenic equipment.**
## Three experiments to distinguish between D-wave and S-wave gap symmetries

The superconducting gap is represented by a complex number that is a function of direction with respect to the crystal axes of YBCO. It has magnitude and phase in every direction. The gap is invariant to an overall phase change. The T\({}_{\rm c}\) is proportional to the maximum magnitude of the gap. For S-wave and D-wave gap symmetries, the gap function can be taken to be real in every direction. An S-wave gap is positive, while a D-wave gap is positive and negative with a zero in-between. The D-wave gap in YBCO has zeros, or nodes, along directions 45\({}^{\circ}\) from the axes along the Cu-O bond directions in the CuO\({}_{2}\) planes and opposite signs along the two perpendicular Cu-O bond directions.
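For orientation only (this parametrization is the conventional textbook form and is not spelled out in the text above), the two symmetries can be written as

\[\Delta_{\rm S}(\phi)=\Delta_{0},\qquad\Delta_{\rm D}(\phi)=\Delta_{0}\cos(2\phi),\]

where \(\phi\) is the in-plane angle measured from a Cu-O bond direction: the D-wave form vanishes at \(\phi=45^{\circ}\) (the nodes) and takes opposite signs along the two perpendicular bond directions, while the S-wave form is nodeless and sign-definite.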
As described above, the T\({}_{\rm c}\) of D-wave YBCO should fall with increasing non-magnetic impurities while the T\({}_{\rm c}\) of an S-wave YBCO phase should remain approximately constant with varying impurity concentrations. The first experiment measures the T\({}_{\rm c}\) evolution with impurity concentration. The results of the experiment are shown in Figure 2 with additional details of the experiment below. The second experiment looks for the existence of a sign change in the gap in order to distinguish a D-wave gap from an S-wave gap. A D-wave gap leads to a zero-bias conductance peak (ZBCP) in Point-Contact-Andreev-Reflection (PCAR) tunneling current, while an S-wave gap has no ZBCP. The results of this experiment are shown in Figure 3 with additional details below. The third experiment searches for nodes in the superconducting gap by measuring the evolution of the superconducting penetration depth, \(\lambda\), as a function of temperature, T, at low temperatures. A D-wave gap has nodes and will lead to \(\lambda\sim\) T, and an S-wave gap leads to \(\lambda\sim\) T\({}^{2}\) because it has no nodes [6]. The results of this experiment are shown in Figure 4 with additional details below. The non-magnetic impurities used in this paper are Ca atoms that substitute at the Y sites in a +2 oxidation state, and Ce atoms that substitute at the Y site in a +4 oxidation state. Both of these atoms are non-magnetic. Since the oxidation state of Y in YBCO is +3, Ca and Ce have a \(-1\) and +1 charge relative to the Y atoms, respectively. Also, the ionic radii of Ca (+2), Ce (+4), and Y (+3) are very close. They are 1.00 Å, 1.11 Å, and 1.02 Å, respectively. Thus Ca and Ce atoms do not strain YBCO. This Ca and Ce charge "counter-doping" has the benefit of permitting very large amounts of non-magnetic dopants to substitute at the Y site. Large doping is desirable since we want to push the D-wave T\({}_{\rm c}\) down as far as possible in order to increase our chances of unearthing the S-wave gap phase. There were many other potential non-magnetic atoms. We chose Ca and Ce because they both substitute at the Y site of YBCO, Ca-doped YBCO is a superconductor [7; 8], and Ce-doped YBCO is also a superconductor [9]. For both Ca and Ce doping, the T\({}_{\rm c}\) is only modestly lower than that of pure YBCO. Samples of (Y\({}_{1-x-y}\)Ca\({}_{x}\)Ce\({}_{y}\))Ba\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\) [hereinafter, denoted by (X,Y)] for (X,Y) = (0.0, 0.0), (0.13, 0.0), (0.13, 0.13), (0.26, 0.13), (0.32, 0.16), (0.36, 0.16), (0.26, 0.26), (0.29, 0.29), and (0.32, 0.32) were synthesized. Since pure YBCO, \((0,0)\) in our notation, has a superconducting T\({}_{\rm c}\) dome that rises from zero, peaks, and then decreases as the number of Oxygen atoms in the CuO chains increases, we expect that all (X,Y) samples will have similar T\({}_{\rm c}\) domes [10]. Many low-temperature anneals were done on each sample to change the Oxygen content and thereby obtain the maximum T\({}_{\rm c}\) (T\({}_{\rm c,max}\)) value for each (X,Y) for all samples reported in this paper. Extended Data Figures 1, 2, 3, and Extended Data Tables II and III are materials characterization data that show the samples are single-phase with the stated composition. **Fig. 2 \(|\) Evolution of the maximum superconducting T\({}_{\rm c,max}\)_vs_ Ca and Ce counter-doping in YBCO compared to D-wave theory predictions._ Blue points are T\({}_{\rm c,max}\) which initially drops with counter-doping and then "saturates" at \(\sim 72\) K.
Saturation of T\({}_{\rm c,max}\) suggests the known D-wave of pure YBCO has changed to S-wave because the T\({}_{\rm c}\) of S-wave superconductors is only weakly dependent on non-magnetic counter-doping [4] (Anderson’s Theorem). T\({}_{\rm c,max}\) is the maximum T\({}_{\rm c}\) for each (X,Y) after low-temperature annealing that changes the Oxygen content of the sample. Red and green points are the predicted T\({}_{\rm c,max}\) results. Red points assume simple pair-breaking, where Ca and Ce atoms lead to identical pair-breaking strengths. Green points assume that Ca and Ce atoms are close together in the material because Ca and Ce have +1 and \(-1\) charges relative to Y, respectively (hence, the name "dipole pair-breaking"). Details are given in the Supplement [11]. The red and green points do not explain the experiment (blue points) suggesting that highly counter-doped YBCO is an S-wave superconductor. ### Evolution of the maximum T\({}_{c}\) with impurity concentration Figure 2 shows the maximum superconducting T\({}_{c,\max}\) as a function of (X,Y) counter-doping. It shows that T\({}_{c,\max}\) initially falls, as expected for a D-wave superconductor, and then "saturates" at higher doping. The red and green plots are the results of two different theoretical models, using Abrikosov-Gorkov theory [2; 3], for the drop in T\({}_{c}\) versus counter-doping if all the samples remained D-wave [11]. The saturation of the measured T\({}_{c,\max}\) (blue points) is not compatible with D-wave superconductivity predictions and suggests that highly counter-doped YBCO is an S-wave superconductor. A very important detail of this experiment is that the maximum T\({}_{c}\) for each counter-doping was used. Tallon et al. [12; 13] showed that the T\({}_{c}\) dome of cuprate superconductors corresponds to the hole doping in the CuO\({}_{2}\) planes and that this hole doping leads to a unique room-temperature thermopower. From these two relations, T\({}_{c,\max}\) is predicted to occur when the room-temperature thermopower is \(\approx+2\)\(\mu\)V/K. For all (X,Y) values, this relation was found to be true. We conclude that for each (X,Y), the number of holes in the CuO\({}_{2}\) planes is the same when the transition temperature is maximum. Therefore, any change in T\({}_{c,\max}\) as a function of (X,Y) is not due to changes in the Fermi level, Fermi surface, or hole doping in the CuO\({}_{2}\) planes. ### Evolution of a zero-bias-conductance peak in Point-Contact-Andreev-Reflection The second experiment looked for a sign change in the superconducting gap using Point-Contact-Andreev-Reflection [14] (PCAR). PCAR measures the tunneling current from a normal metal (in this case Cu) point contact into the superconductor. The normal metal has a continuum of states in the neighborhood of the Fermi level, whereas the superconductor has its energy gap centered on its Fermi level. Thus no normal current can tunnel from the normal level into the superconductor for bias voltages less than half the gap. However, there are normal states in a D-wave superconductor in the regions where its gap changes sign, and electrons from the normal metal tip can tunnel into these states, thereby showing a sharp peak near zero bias voltage, known as the zero-bias conductance peak (ZBCP) [15; 16]. An S-wave gap has no sign change and hence no ZBCP in PCAR. See Figure 3. The figures shows representative conductance plots from many spectra for \((0,0)\) and \((0.32,0.32)\). 
Since \((0,0)\), pure YBCO, is D-wave, a ZBCP is seen, as expected. For heavily counter-doped \((0.32,0.32)\), no ZBCP was found. We conclude that PCAR suggests highly counter-doped YBCO is an S-wave superconductor. ### Evolution of the low-temperature penetration depth The third experiment searched for nodes in the superconducting gap by the measuring the evolution of the superconducting penetration depth, \(\lambda\), as a function of temperature, T. A D-wave gap has nodes leading to \(\lambda\sim\) T. An S-wave superconductor in the London limit (short coherence length) has \(\lambda\sim\) T\({}^{2}\) because it does not have nodes [6]. Figure 4 shows the changes in inductance, \(L\), of a pancake coil placed on top of \((\text{X},\text{Y})\) samples for \((\text{X},\text{Y})=(0,0),\ (0.13,0),\)\((0.36,0.16)\), and \((0.32,0.32)\). Changes in \(L\) are proportional to changes in the superconducting penetration depth, \(\lambda\)[11]. The \((0,0)\) and \((0.13,0)\) are known D-wave gap materials. Hence, we expect that their \(L\) changes linearly with \(T,\) as observed. We find \(L\sim T^{2}\) for \((0.36,0.16)\) and \((0.26,0.26)\) leading to \(\lambda\sim T^{2}\) for both samples, as expected for an S-wave gap superconductor. Figure 3: **Point-Contact-Andreev-Reflection [20] (PCAR) on pure YBCO, or \((\text{X},\text{Y})=(\mathbf{0},\mathbf{0})\), and \((\text{X},\text{Y})=(\mathbf{0}.\mathbf{32},\mathbf{0}.\mathbf{32})\).** A ZBCP in PCAR is a signature of a D-wave phase. No ZBCP is expected for an S-wave phase. This experiment searches for a sign change in the superconducting gap. (a) A ZBCP is seen for pure YBCO, as expected, since it has a D-wave gap. (b) No ZBCP is seen for \((X,Y)=(0.32,0.32)\), suggesting it has an S-wave gap. These curves are representative of several measurements on each sample. The changes in \(L\) in Figure 4 are several 10s of nanoHenrys. The magnitude of the extrapolated \(L\) at \(T=0\) is \(\approx 20\)\(\mu\)H for the four curves. A 1 nH change is a relative \(L\) change of \(\sim 5\times 10^{-5}\), making this experiment the most difficult of the three experiments in Figures 2, 3, and 4. A detailed description of this experiment and estimates of many potential errors is in the Supplement [11]. A possible D-wave gap explanation for \(\lambda\sim T^{2}\) exists. Hirschfeld et al. [2] showed that a D-wave superconductor with non-magnetic impurities can lead to \(\lambda\sim T^{2}\) for \(T<T^{*}\), where \(T^{*}\) depends on the magnitude of single impurity scattering (in our case, a single Ca or Ce atom) and the ratio of \(\rm{T_{c,max}}\) for \(\rm{(X,Y)}\) to \(\rm{T_{c,max}}\) for \(\rm{(0,0)}\) (pure YBCO). For \(T>T^{*}\), the theory predicts \(\lambda\sim T\). Since Ca and Ce impurities reside at the Y site in YBCO and this site is not in the \(\rm{CuO_{2}}\) planes, where most of the density of the metallic band is located, the magnitude of single impurity scattering is small [17]. Extended Data Figure 4 shows that the theory prediction [2] for weak (Born) scattering plus the \(\rm{T_{c,max}}\) values measured in Figure 2 lead to the conclusion that \(T^{*}\ll 1\) K for our experiment. Hence, a D-wave superconductor with impurity scattering does not explain the observed \(\lambda\sim T^{2}\) up to 26 K as seen for \(\rm{(0.36,0.16)}\) and \(\rm{(0.26,0.26)}\). ### Summary of the three experiments Table 1 summarizes the findings from the three experiments in Figures 2, 3, and 4. 
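As a purely illustrative aside on the penetration-depth analysis above, the linear-versus-quadratic comparison used for Figure 4 amounts to a simple power-law fit. The sketch below shows one way to set up such a fit; the temperature and inductance values are invented for the illustration and none of the variable or function names come from the paper or its supplement.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented low-temperature trace: temperature in K, inductance change in nH.
# It merely stands in for a curve like those in Fig. 4; it is not measured data.
T = np.array([4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26], dtype=float)
dL = 0.025 * T**2  # constructed to follow a quadratic law for this example

def power_law(temp, a, n):
    # Delta L(T) = a * T**n: n = 1 mimics nodal (D-wave-like) behaviour,
    # n = 2 the quadratic form discussed above for a nodeless gap.
    return a * temp**n

for n_fixed in (1.0, 2.0):
    # Fit only the prefactor a while holding the exponent fixed at 1 or 2.
    popt, _ = curve_fit(lambda temp, a: power_law(temp, a, n_fixed), T, dL)
    resid = dL - power_law(T, popt[0], n_fixed)
    print(f"n = {n_fixed:.0f}: a = {popt[0]:.4g}, "
          f"rms residual = {np.sqrt(np.mean(resid**2)):.3g} nH")
```

In the paper itself the discrimination is of course made against the measured inductance data with error bars, as shown in Figure 4.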
In all three experiments, the results favored a crossover from a D-wave gap at low counter-doping to an S-wave gap at high counter-doping. These results imply that Figure 1b is correct--a technologically useful S-wave superconducting gap YBCO phase resides at \(\sim 70\) K in YBCO heavily counter-doped with Ca and Ce impurities. We found two papers in the literature where a crossover from D-wave superconductivity to S-wave was seen. First, in 2001, Yeh et al. [18] observed a \(d+s\) superconducting gap symmetry for highly doped (\(\rm{Y_{0.7}Ca_{0.3}}\))Ba\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\) with \(\rm{T_{c}=78\pm 2\ K}\). In our notation, this sample has \(\rm{(X,Y)=(0.3,0.0)}\). From Figure 2, the \(\rm{T_{c}}\) measured by Yeh et al., falls on the blue line for \(\rm{T_{c,max}}\) and is in the crossover region between a D-wave gap to an S-wave gap. Second, in 2012, Reid et al. [19] found a crossover from a D-wave superconducting gap in the iron-pnictide KFe\({}_{2}\)As\({}_{2}\) to an S-wave gap in \(\rm{(Ba_{0.6}K_{0.4})Fe_{2}As_{2}}\). The authors of this paper attribute the gap symmetry change to a change in the Fermi surface and Fermi level between the two pnictide samples. In our samples, we believe the gap symmetry has changed without altering the Fermi surface or the Fermi level. Figure 4: **Evolution of the measured superconducting penetration depth change as a function of the normalized temperature, as seen by the change in inductance.** The figure shows the change in inductance for two known D-wave gap phases, pure YBCO, \(\rm{(X,Y)=(0,0)}\) (green), \(\rm{(X,Y)=(0.13,0.0)}\) (blue), and two phases with \(\rm{(X,Y)=(0.36,0.16)}\) (black) and \(\rm{(0.26,0.26)}\) (red). For clarity, the black data are shifted downward by 3 nH and the red data are shifted downward by 5 nH. The data points are the measured inductance with \(\pm 3\sigma\) error bars. The x-axis is the ratio \(\rm{T/T_{c,max}}\) where \(\rm{T_{c,max}}\) is the maximum superconducting temperature from Figure 2. For all curves, the change in inductance was measured from 4 K to 26 K. In the Supplement [11], we show that the measured change in inductance is proportional to the change in penetration depth of the superconducting sample, and also estimate many sources of errors in this measurement. The extrapolated \(T=0\) K inductance values are 19.276 \(\mu\)H, 20.623 \(\mu\)H, 20.349 \(\mu\)H, and 21.243 \(\mu\)H for \(\rm{(0,0)}\), \(\rm{(0.3,0.36,0.16)}\), and \(\rm{(0.26,0.26)}\), respectively. The solid blue and green curves are linear in \(T\) fits and the black and red curves are \(T^{2}\) fits to the data. A linear \(T\) evolution of the penetration depth is expected for a D-wave gap and \(T^{2}\) is expected for an S-wave gap. The green and blue curves show that YBCO with \(\rm{(X,Y)=(0.0,0.0)}\) and \(\rm{(0.13,0.0)}\) are D-wave, as expected. The black and red curves suggest that \(\rm{(X,Y)=(0.36,0.16)}\) and \(\rm{(0.26,0.26)}\) are S-wave gap phases of YBCO. ## II Conclusions The intent of this paper was to search for an S-wave gap symmetry YBCO phase beneath the known D-wave gap YBCO phase at \(\sim 90\) K and determine its superconducting transition temperature. Our conjecture was that an S-wave YBCO phase was at \(\sim 70\) K, and if true, will have huge implications for making high-field magnets using poly-crystalline superconducting wires. 
To test this conjecture, we counter-doped YBCO with Ca and Ce impurities, performed three experiments to study the superconducting gap symmetry, and found evidence suggesting that an S-wave gap symmetry phase does exist at \(\sim 70\) K in all three experiments (see Table 1). A potential S-wave gap symmetry crossover in Ca and Ce counter-doped cuprate, YBa\({}_{2}\)Cu\({}_{4}\)O\({}_{8}\) (Y124) should be studied. Y124 is a stoichiometric crystal that is much more three-dimensional than YBCO (vastly improved conduction normal to its CuO\({}_{2}\) plane compared to YBCO). Y124 is intrinsically underdoped. Doping with 0.1 Ca brings Y124 up to optimal doping (highest T\({}_{c}\)). Hence, counter-doping with 0.1 more Ca than Ce is desired for the highest T\({}_{c}\). Poly-crystalline counter-doped wires of Y124 will be mechanically strong. Using the metallic precursor method [21] to form polycrystalline wires of Y124 is already known to lead to grain alignments \(<10^{o}\) normal to the CuO\({}_{2}\) planes and grain alignments \(<15^{o}\) in the planes. While these grain mis-alignments made D-wave poly-crystalline Y124 impractical, a counter-doped Y124 that becomes S-wave may have a very large supercurrent density using this mature manufacturing process. The results in this paper were obtained on poly-crystalline counter-doped samples. Ideally, one would like to repeat these experiments and additional experiments on single-crystals. The change in critical current density as a function of grain misalignment should also be measured to determine how much supercurrent can be transported in poly-crystalline wires. Poly-crystalline wires should be synthesized and characterized. Two methods exist for making poly-crystalline wires from cuprates: Powder-in-Tube [22; 23; 24] and metallic precursors [25; 26; 21]. If there is a large improvement in the supercurrent carrying ability of these wires, then an enormous opportunity exists for creating new and useful practical wires for high-magnetic field applications on a short timescale.
2305.06258
The protein dynamical transition is independent of hydration
Terahertz time-domain spectroscopy and differential scanning calorimetry were used to study the role of the dynamics of biomolecules decoupled from solvent effects. Lyophilised sucrose exhibited steadily increasing absorption with temperature as anharmonic excitations commence as the system emerges from a deep minimum of the potential energy landscape where harmonic vibrations dominate. The polypeptide bacitracin and two globular proteins, lysozyme and human serum albumin, showed a more complex temperature dependence. Further analysis focused on the spectral signature below and above the boson peak. We found evidence for the onset of anharmonic motions that are characteristic of partial unfolding and molecular jamming in the dry biomolecules. The activation of modes of the protein molecules at temperatures comparable to the protein dynamical transition temperature was observed in the absence of hydration. No evidence for Fröhlich coherence, postulated to facilitate biological function, was found in our experiments.
Johanna Kölbel, Moritz L. Anuschek, Ivonne Stelzl, Supawan Santitewagun, Wolfgang Frieß, J. Axel Zeitler
2023-05-10T15:43:40Z
http://arxiv.org/abs/2305.06258v1
# The protein dynamical transition is independent of hydration ###### Abstract Terahertz time-domain spectroscopy and differential scanning calorimetry were used to study the role of the dynamics of biomolecules decoupled from solvent effects. Lyophilised sucrose exhibited steadily increasing absorption with temperature as anharmonic excitations commence as the system emerges from a deep minimum of the potential energy landscape where harmonic vibrations dominate. The polypeptide backtracian and two globular proteins, lysozyme and human serum albumin, showed a more complex temperature dependence. Further analysis focused on the spectral signature below and above the boson peak. We found evidence for the onset of anharmonic motions that are characteristic for partial unfolding and molecular jamming in the dry biomolecules. The activation of modes of the protein molecules at temperatures comparable to the protein dynamical transition temperature was observed in the absence of hydration. No evidence for Frohlich coherence, postulated to facilitate biological function, was found in our experiments. ## I Introduction The number of protein drugs in therapy is steadily increasing. These biopharmaceuticals typically need to be administered by injection. Due to the intrinsically limited stability of the molecules in aqueous solution, the protein-drug products are frequently freeze-dried into an amorphous solid matrix for storage and reconstituted immediately before use. The molecules surrounding the protein molecules in solution constitute their solvation shell (sometimes also called hydration shell); and the molecular mobility of this shell affects the rates of conformational change, catalysis, and protein/DNA-protein interactions [1; 2]. The misfolding propensity and the pathways of protein aggregation depend on the protein's local environment, which is influenced by the solvent, water, as well as sugars, salts, metal ions, and lipids that are part of the physiological environment or the reconstituted formulation. Molecular crowding at high protein concentration, pH and buffer also play a role [3; 4; 5; 6; 7; 8]. The predominant intermolecular interactions that proteins form with their surrounding environment are hydrogen bonds with water. A recent paper highlighted the importance that the mobility of water plays in protein dynamics and, ultimately, aggregation. Simulations in combination with various experimental techniques focusing on the intrinsically disordered model protein \(\alpha\)-synuclein (aSyn) showed that water mobility and aSyn mobility are inextricably linked. Enhancing the water mobility reduces the propensity of aSyn to aggregate [9]. The timescales of solvent motions and conformational changes in proteins differ significantly. Solvent motions are rapid, occurring on the femtosecond to picosecond timescale, whereas conformational changes in proteins happen on the nanosecond to millisecond timescale. Still, solvent mobility affects protein motions [10]. The coupling of water motions, the presence of ions, and the protein dynamics are protein specific due to differing charge distribution, hydrophobicity, and surface roughness [11]. Thus, protein unfolding in solution occurs via a complex pathway and the properties of the hydration shell are of critical importance [12]. These interactions are restrained in lyophilised samples with strongly reduced water content. 
Conversely, complete unfolding in the solid state is usually not observed as thermal decomposition occurs before sufficiently high temperatures for unfolding are reached. Instead of studying solvated biomolecules, here we removed all possible water molecules from protein samples. This results in the molecules coming into close contact but without aggregating, as was the subject of a previous study by Stephens et al. [9]. We can thus study protein dynamics decoupled from solvent effects. This is of fundamental interest, as well as of practical importance, for lyophiliseds of protein drugs [13]. Lyophilisates typically comprise cryo- and lyoprotectants, surfactants, and the active biomolecule. The design of the products and the lyophilisation process require good understanding of the underlying stabilisation mechanisms of the formulation components [13; 14]. In this context, the residual water content in dried protein samples plays an essential role at low temperatures. Lyophilised lysozyme samples with residual hydration larger than 27 %(m/m) water content [15], well above the typical water content of lyophilisates of ap proximately 1 %(w/w) exhibit the so-called protein dynamical transition (PDT), an increase in the mean square displacement of molecules at a temperature of 180 K to 220 K upon heating. The PDT that is usually measured with neutron scattering is not to be confused with the glass transition \(T_{\mathrm{g}}\) that occurs in amorphous samples, which is characterised by a sudden change in relaxation times and heat capacity at \(T_{\mathrm{g}}\) and is usually measured with DSC. [16] The PDT has been observed at temperatures at around 200 K for different lightly hydrated proteins and is thought to be due to the onset of motions involving interactions of charged side chains with surrounding water molecules. [17] The PDT and the glass transition can both be described in terms of Goldstein's potential energy landscape (or surface, PES) containing many small minima within larger basins [18; 19]. Moving between minima requires overcoming the local, shallow activation energy barriers. With lower temperature, the configurational entropy decreases, i.e. the number of available minima decreases. Moving from one basin to the next requires a large amount of activation energy and cooperative rearrangement of molecules. The protein dynamical and the glass transitions can be understood as corresponding to lowering energy barriers on that landscape. In the absence of hydration, a temperature increase activates a different set of motions of activation energies similar to the thermal energies supplied. This effect can be very subtle. Given the propensity of hydrogen and van der Waals bonding interactions in solvated and lyophilised biopharmaceuticals, an ideal experimental method to study such systems is terahertz time-domain spectroscopy, THz-TDS, due to the match in photon and bonding energies [20]. In the terahertz range, the boson peak (BP) and the coupling of dipoles to the vibrational density of states dominate the absorption mechanisms [21]. The motions at terahertz frequencies play an essential role in understanding solid-state protein dynamics. At storage temperature (room temperature), a considerable number of vibrational modes in the terahertz frequency range are active and contribute to the formulation stability. 
In the 1970s, Frohlich suggested the existence of coherent vibrations at around \(10^{11}\) Hz and postulated that such motions enable biological functions, so-called biological control through selective long-range interactions. Motions active at terahertz frequencies are long-range motions, and the possible existence of a so-called Frohlich condensate in lyophilised protein formulations can hence be investigated. Thus far no experimental evidence for such coherent states was found for biomolecules in solution. Vibrational confinement can dramatically reduce molecular mobility in lyophilisates at temperatures close to room temperature and depends heavily on the interaction between protein and excipients in the formulation [22]. Upon increasing the temperature, the molecular mobility in the sample increases due to the ability to access more of the local minima in the PES until the free volume is taken up entirely and the molecule becomes "jammed". Any further increase in mobility, and hence terahertz absorption, is no longer possible until a higher energy barrier to another basin in the PES is overcome that is associated with additional degrees of freedom for molecular motions. In the spectral region accessible with most terahertz spectrometers, between 0.3 THz and 3.0 THz, the frequency-dependent infrared absorption coefficient \(\alpha(\omega)\) is theoretically related to the reduced density of states \(g(\omega)\) via \(\frac{g(\omega)}{\omega^{2}}\propto\frac{\alpha(\omega)}{\omega^{3}}\) [21]. The excess density of states, apparent in the reduced density of states, is referred to as the boson peak (BP). The BP is a harmonic phenomenon due to inherent disorder that anharmonic effects can obscure [23]. Utilising THz-TDS, the onset temperature of molecular mobility was found to correlate with anharmonic effects in the model glass-former glycerol. These anharmonic effects resulted in an apparent shift of the BP centre frequency and could thus be separated from purely harmonic contributions [21]. Markelz et al. [24] observed that in an amorphous sample, an increase in absorption with temperature at a single frequency as measured with THz-TDS is due to anharmonic effects, even at very low temperatures. While no actual PES is perfectly harmonic, these effects are comparatively small at low temperatures, e.g. the BP is not yet obscured nor its apparent centre frequency affected. We will hence refer to the temperature region below \(T^{*}\), i.e. at temperatures at which the BP is unaffected by anharmonic effects, as the harmonic regime, and to the temperature region at which the BP is affected as the anharmonic regime. This is expected to conceptually also apply to protein samples. In the present work, we investigated terahertz protein dynamics decoupled from solvent effects in four different one-component lyophilised products, namely sucrose (a widely used bulking agent, molecular weight 0.34 kDa), bacitracin (a polypeptide antibiotic, molecular weight 1.4 kDa), lysozyme (a globular protein, molecular weight 14.5 kDa), and human serum albumin (HSA, a globular protein and bulking agent, molecular weight 66.5 kDa), which are shown in Figure 1. A possible Frohlich condensate in lyophilised protein formulations would become apparent by distinct spectral features emerging in the terahertz spectrum. 
## II Materials and Methods ### Materials Sucrose (a commonly used cryo- and lyoprotectant, molecular weight 0.34 kDa) and HSA (globular model protein, molecular weight 66.5 kDa) were purchased from Merck GmbH (Steinheim, Germany). Bacitracin (a polypeptide antibiotic, molecular weight 1.4 kDa) and lysozyme (a globular protein, molecular weight 14.5 kDa) were purchased from Carl Roth GmbH (Karlsruhe, Germany). The samples were prepared with highly purified water (HPW; Sartorius Arium Pro, Sartorius, Gottingen, Germany) to reach a total solid content of 10 % (m/m) prior to lyophilisation. ### Lyophilisation Lyophilisation stoppers (B2-TR coating, West) and DIN 10R vials (Fiolax, Schott, Germany) were cleaned with highly purified water and dried at 333 K for 8 h. The vials were filled with 3 mL solution and subsequently semi-stoppered. The product temperature in vials at different positions on the shelf was recorded with a thermocouple. Formulations were freeze-dried according to the protocol in Table 1 using an FTS LyoStar 3 freeze dryer (SP Scientific, Warminster, Pennsylvania, USA). The end of primary drying was controlled by comparative pressure measurement between a Pirani and an MKS sensor. The vials were stoppered after secondary drying under nitrogen atmosphere at 800 mbar and crimped with flip-off seals. ### Differential Scanning Calorimetry (DSC) The \(T_{\mathrm{g}}\) of the lyophilisates was determined with a DSC 821\({}^{\mathrm{e}}\) (Mettler Toledo, Giessen, Germany). 5 to 10 mg of crushed lyo cake were filled into 40 µL aluminium crucibles (Mettler Toledo, Giessen, Germany) under controlled humidity conditions (\(\leq\)10 % relative humidity) and sealed hermetically. The samples were heated from 280 K to 415 K at a ramp rate of 2 K min\({}^{-1}\). The \(T_{\mathrm{g}}\) was determined as the midpoint of the phase transition. ### Terahertz Time-Domain Spectroscopy (THz-TDS) The lyophilised cake was broken up under a dry nitrogen atmosphere contained within a glove bag (Atmos-Bag, Merck UK, Gillingham, UK), the powder was gently mixed using an agate mortar and pestle and then pressed into a thin pellet (thickness less than 800 µm, diameter 13 mm) using a manual press (load 3 t, Specac Ltd, Orpington, UK). The pellet was sealed between two z-cut quartz windows of 2 mm thickness each and fixed to the cold finger of a cryostat (ST-100, Janis, Wilmington, MA, USA). Samples were analysed with a Terapulse 4000 spectrometer (Teraview Ltd, Cambridge, UK) in transmission under vacuum (pressure \(<\) 20 mbar). Each sample and reference spectrum was calculated from the co-average of 1000 waveforms which were acquired with a resolution of 0.94 cm\({}^{-1}\) and transformed to the frequency domain via fast Fourier transform. The absorption coefficient was calculated following the method by Duvillaret et al. [25] At the beginning of each measurement, the sample was cooled down from room temperature to 80 K and left to equilibrate for at least 30 min. The temperature was subsequently increased in steps of 10 K up to a maximum temperature of 440 K. The system was allowed to equilibrate for 8 min at each temperature increment before reference (two z-cut quartz windows with no sample in between) and sample measurement. ## III Results and Discussion ### Thermal analysis of \(T_{\mathrm{g}}\) by DSC In the DSC data a clear \(T_{\mathrm{g}}\) was only found for sucrose (at 340 K, shown in Figure 2). 
The peptide and proteins did not show a clear step in heat capacity corresponding to \(T_{\mathrm{g}}\); instead, a gradual decrease in heat flow was observed, potentially linked to structural changes at elevated temperatures. The time scales of protein unfolding are strongly temperature-dependent and it is possible that the mobility was insufficient to maintain equilibrium between folded and unfolded states during the DSC measurements [26]. In all three pure macromolecule samples, an inflection point occurred, namely at 384 K (bactiracin), 330 K (lysozyme), and 337 K (HSA). ### THz-TDS provides insights into molecular mobility The THz-TDS spectra of the four materials show a similar profile (Figure 3). The change in absorption coefficient with temperature was most pronounced for sucrose. Figure 4 shows the absorption coefficient extracted at a frequency of 1 THz for different samples. The absorption coefficient measures the change in dipole moments caused by inter- and intramolecular motions in the sample of \begin{table} \begin{tabular}{l c c c c} \hline Step & Range (K/ min) & Shelf temperature (K) & Pressure (plate) & Hold time (h) \\ \hline Freeing & 1.0 & 223 & latm & 3 \\ Primary drying & 0.5 & 253 & 60 & 50 \\ Secondary drying & 0.4 & 323 & 60 & 5 \\ \hline \end{tabular} \end{table} Table 1: Lyophilisation protocol. Figure 1: Visualisation of the four different key molecules of this study. interest. As the temperature increases, larger-scale motions become available, resulting in changes of the dipole moments. Each sample is characterised by a distribution of states, each with its own onset temperature. In smaller systems, like sucrose, the distribution of states is narrower in temperature, and the total number of states is lower than in more complex systems. The rate of change in absorption with temperature increased at both transition temperatures (see Figure 4). \(T_{\mathrm{g}}^{*}\) was found at 340 K, which agreed very well with the \(T_{\mathrm{g}}\) measured by DSC and literature values [27]. \(T^{*}\), which cannot be measured by DSC, was found at 230 K. In contrast, the changes in the absorption coefficient of the protein samples can be very gradual with temperature. The PDT commonly occurs around 200 K. We therefore analysed the average rate of absorption change in three temperature intervals: \(T<200\) K, 200 K \(<T<300\) K, and \(T>300\) K (Table 2). It is known that the \(T_{\mathrm{g}}\) of sucrose decreases with increasing water content [27]. The agreement between the \(T_{\mathrm{g}}^{*}\) we measured with THz-TDS as well as DSC with literature values for dry sucrose matrices indicates that the experimental setup and procedures ensure a very low water content. Additionally in THz-TDS, any water molecules that could have potentially adsorbed to the sample surface during preparation may be removed by the vacuum (of \(<\)20 mbar) in the measurement chamber. Generally, the rate of change in absorption with temperature decreased at temperatures above 300 K for the larger molecular weight systems. 
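As a minimal sketch of how interval-averaged rates of change like those in Table 2 can be obtained, the snippet below performs a least-squares fit of the 1 THz absorption coefficient against temperature separately in the three windows \(T<200\) K, 200 K \(<T<300\) K, and \(T>300\) K. The temperature grid and the absorption values are made up for illustration; they are not the measured data.

```python
import numpy as np

def interval_slopes(T, alpha, windows=((0, 200), (200, 300), (300, np.inf))):
    """Least-squares slope d(alpha)/dT within each temperature window (in K)."""
    slopes = {}
    for lo, hi in windows:
        mask = (T >= lo) & (T < hi)
        if mask.sum() < 2:
            continue
        slope, _ = np.polyfit(T[mask], alpha[mask], deg=1)
        slopes[(lo, hi)] = slope
    return slopes

# Hypothetical absorption-at-1-THz curve (cm^-1) on an 80-440 K grid.
T = np.arange(80, 441, 10, dtype=float)
alpha = 20 + 0.03 * T + 0.04 * np.clip(T - 200, 0, None)   # toy piecewise trend

for window, slope in interval_slopes(T, alpha).items():
    print(f"{window}: d(alpha)/dT = {slope * 100:.1f} x 10^-2 cm^-1 K^-1")
```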
This phenomenon was previously observed in other (more complex) lyophilised \begin{table} \begin{tabular}{l|c|c|c|c} \multicolumn{4}{c}{d\(\alpha\)/d\(T\) (\(10^{-2}\) cm\({}^{-1}\) K\({}^{-1}\))} \\ \hline & Sucrose & Bacitracin & Lysozyme & HSA \\ \hline \(T<200\) K & \(2.9\pm 0.4\) & \(2.7\pm 0.2\) & \(3.5\pm 0.8\) & \(2.9\pm 0.2\) \\ 200 K \(<T<300\) K & \(7.1\pm 0.6\) & \(4.6\pm 0.2\) & \(9.0\pm 0.8\) & \(5.5\pm 0.8\) \\ \(T>300\) K & \(15.2\pm 3.3\) & \(3.0\pm 0.3\) & \(5.3\pm 0.8\) & \(1.9\pm 0.6\) \\ \end{tabular} \end{table} Table 2: Rate of absorption change with temperature of sucrose, bacitracin, lysozyme, and HSA lyophilises. Figure 3: Terahertz spectra for sucrose, bacitracin, lysozyme, and HSA lyophilises. Absorption mostly increases with temperature. Blue: 80 K, red: 420 K. Figure 2: DSC of sucrose, bacitracin, lysozyme, and HSA lyophilises. A clear \(T_{\mathrm{g}}\) was only found for sucrose. Inflection points for the other samples were found at 348 K (bacitracin), 330 K (lysozyme), and 337 K (HSA). Figure 4: Absorption at 1 THz for sucrose, bacitracin, lysozyme, and HSA lyophilises. The vertical line in the plot of sucrose marks the \(T_{\mathrm{g}}\) as determined with DSC. Error bars are standard error for \(n\) measurements (\(n=5\) for sucrose, \(n=4\) for bacitracin, \(n=3\) for lysozyme and HSA). formulations and attributed to high-temperature macromolecular confinement [22]. We observed that the confinement effect depends on the shape and size of the molecules. This effect cannot be observed in small organic molecular systems like sucrose where only few degrees of freedom of dihedral motion are available and hence it is not possible to reach a "jammed conformation". Bacitracin and lysozyme show a similar restriction of motions to that observed in HSA, i.e. a change of the PES. The effect is slightly more pronounced for lysozyme due to its increased size and hence higher number of internal degrees of freedom. Between 310 K to 330 K, the absorption coefficient does not increase and the overall absorption change becomes less above 300 K. At temperatures above 330 K, the absorption increases again with temperature, as is also the case for BSA formulations measured previously [22]. This temperature corresponds to an energy barrier of 2.7 kJ mol\({}^{-1}\) (equal to 0.65 kcal mol\({}^{-1}\)), which is significantly lower than the energy barrier of unfolding of BSA in solution, which is on the order of 64 kJ/mol to 267 kJ/mol [28]. The confinement is most pronounced in HSA, the largest macromolecule studied. Here, the absorption coefficient even decreases slightly at temperatures between 310 K to 360 K. This decrease in absorption could be because the molecules may lose some of the degrees of freedom that they already gained at lower temperatures as the conformational jamming increases once they become trapped in a steep minimum on the potential energy landscape, as shown schematically in Figure 5. Even after the jammed conformation is overcome, d\(\alpha\)/d\(T\) in HSA is less than half compared to lysozyme. #### iii.2.1 The low-frequency (boson peak) region The BP can be visualised by plotting the absorption coefficient \(\alpha\) divided by \(\nu^{3}\) over the frequency \(\nu\) (Figure 6), [21] and the maximum value of \(\alpha\)/\(\nu^{3}\) was found from smoothed data. If the BP occurs below 0.3 THz, the data are not extrapolated, and no maximum is reported to avoid extrapolation errors. 
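A minimal sketch of the boson-peak visualisation just described (dividing \(\alpha\) by \(\nu^{3}\), lightly smoothing, and only reporting a maximum if it lies above 0.3 THz) might look as follows. The synthetic spectrum and the simple moving-average smoothing are assumptions for illustration and not the exact procedure used to produce Figure 6.

```python
import numpy as np

def boson_peak_position(freq_thz, alpha, low_cutoff=0.3, window=5):
    """Return the frequency of the maximum of alpha/nu^3 after a moving-average
    smooth, or None if the maximum is pinned to the low-frequency cutoff."""
    reduced = alpha / freq_thz**3                       # alpha / nu^3 representation
    kernel = np.ones(window) / window
    smooth = np.convolve(reduced, kernel, mode="same")  # light smoothing
    valid = freq_thz > low_cutoff
    i = np.argmax(smooth[valid])
    nu_bp = freq_thz[valid][i]
    # Reject if the maximum sits at the cutoff (BP likely below 0.3 THz).
    return None if np.isclose(nu_bp, freq_thz[valid][0]) else nu_bp

# Synthetic spectrum whose reduced density of states peaks near 0.8 THz.
nu = np.linspace(0.2, 2.5, 200)
alpha = nu**3 * (1.0 + 4.0 * np.exp(-((nu - 0.8) / 0.3) ** 2))
print("estimated BP position (THz):", boson_peak_position(nu, alpha))
```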
In small molecular systems, for example, in glycerol, the dynamics of glasses upon heating usually fall into two regimes [29]. At very low temperatures, the terahertz spectra are dominated by harmonic excitations. The BP itself is harmonic and, therefore, a temperature-independent phenomenon. Its centre frequency is constant. Above \(T_{\mathrm{g}}^{*}\), the system leaves the harmonic minimum on the potential energy landscape. Shallower minima increase the amount of anharmonicity and absorption. Once anharmonic effects dominate at \(T^{*}\), they lead to an apparent frequency shift of the BP maximum and obscure it completely at temperatures close to \(T_{\mathrm{g}}^{*}\)[21]. The maximum frequency of the BP of the protein lyophilisates is preserved in the harmonic regime at temperatures below approximately 150 K to 200 K. However, it decreases in the anharmonic regime with increasing temperature before being obscured by anharmonic effects that appear to shift it outside the experimentally accessible region. The BP in sucrose is the least pronounced of the samples measured and appears to be masked by anharmonic effects already at very low temperatures. The BP occurs close to the Ioffe-Regel crossover frequency at which the mean-free path of transverse waves becomes equal to their wavelength, meaning that there is a crossover from wave-like to random-matrix-like physics. Once global mobility sets in above the glass transition temperature, anharmonicity and mobility increase with temperature until a "critical" mobility is reached, completely obscuring the BP. At very similar temperatures, the proteins reconfigure and get trapped in a different conformation, decreasing mobility overall. Interestingly, the inflection point observed in the DSC data coincides with the temperature regime just above the initial trapping. Therefore, it is hypothesised that the trapping and/or the conformational change inducing the trapping result in a subtle heat capacity change with increasing temperature and that the maximum in the DSC data corresponds to partial unfolding. #### iii.2.2 Anharmonicity parameter \(a\) The derivative parameter \(a\) can be utilisied to characterise terahertz spectra at frequencies above and below the BP more reliably. It reflects the slope of the absorption spectra averaged over a specific frequency range [21]. We chose three different frequency ranges: 0.35 THz to 0.55 THz (below the BP), 0.90 THz to 1.10 THz (above the BP at the spectrometer's highest SNR), and 1.45 THz to 1.65 THz (at even higher frequencies above the BP). The lyophilisates show a markedly different behaviour at frequencies below and above the BP (Figure 7). In the protein samples, the plateau just above room temperature, seen in the absorption coefficient at 1 THz (Figure Figure 5: Schematic of a possible potential energy landscape topology; with sufficient thermal energy, a sample can explore different hypersurface configurations and can become trapped in shallow minima. 4), can only be observed at frequencies above the BP. At lower frequencies, the increase of \(a\) with temperature is monotonous. For bicitracin, that increase is approximately linear with temperature, while for lysozyme and HSA, a subtle transition can be observed at around \(200\,\mathrm{K}\) and \(300\,\mathrm{K}\), respectively. 
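As a sketch of the extrapolation described above, the snippet below fits \(\alpha(\nu)=A\nu^{3}\exp(-\nu/\nu_{c})+C\) to the high-frequency part of a spectrum and reports \(\nu_{c}\) together with the implied maximum of the fitted curve at \(3\nu_{c}\). The synthetic data, the fitted frequency window, and the initial guesses are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def chumakov_model(nu, A, nu_c, C):
    """alpha(nu) = A * nu^3 * exp(-nu / nu_c) + C, fitted above the BP maximum;
    the model curve itself peaks at nu = 3 * nu_c."""
    return A * nu**3 * np.exp(-nu / nu_c) + C

# Synthetic high-frequency spectrum (THz, cm^-1) generated with nu_c = 1.2 THz.
nu = np.linspace(1.0, 2.3, 60)
alpha = chumakov_model(nu, A=40.0, nu_c=1.2, C=5.0)
alpha += np.random.default_rng(1).normal(scale=0.3, size=nu.size)

popt, _ = curve_fit(chumakov_model, nu, alpha, p0=(30.0, 1.0, 0.0))
A_fit, nu_c_fit, C_fit = popt
print(f"fitted nu_c = {nu_c_fit:.2f} THz; extrapolated peak at 3*nu_c = {3 * nu_c_fit:.2f} THz")
```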
While the protein dynamical transition has thus far only been observed in hydrated samples, these increases in \(a\) point to the existence of thermally activated modes that involve only the protein molecules themselves. Even without solvent molecules, parts of the proteins retain some flexibility. Given that the increase in \(a\) appears in a similar temperature range, for example, in lysozyme, it could be caused by side chain motions where the activation energy is not affected by the presence of the solvent [24]. Sucrose shows a more pronounced increase of \(a\) over temperature, and a distinct increase is observed at \(T_{\xi}^{\star}\). This discontinuity is expected as the glass transition temperature marks the onset of global mobility. #### iii.2.3 Molecular confinement is observed in the spectrum at frequencies beyond the BP While the absorption coefficient at \(1\,\mathrm{THz}\) is located relatively close to the BP maximum and is strongly influenced by anharmonic effects, a different behaviour may be observed at higher frequencies. In the following we use a model developed by Chumakov et al. [29] to investigate the effect of subtle spectral changes on the extrapolated vibrational density of states (VDOS). This model was previously utilised to investigate the model glass former glycerol and is now applied to more complex biomolecules. The exponential function \(\alpha=A\nu^{3}\exp\left(-\nu/\nu_{c}\right)+C\) is fitted to the higher frequency part of the experimentally accessible spectrum. If that function is plotted at even higher frequencies, a peak with a centre frequency of \(3\nu_{c}\) is observed. It has to be noted that this is a feature based on the fitting function and not an accurate prediction of the centre frequency of the VDOS. The experimental data available spans a frequency range from the Ioffe-Regel crossover up to approximately \(2.3\,\mathrm{THz}\), while the actual VDOS may exhibit an underlying multi-peak structure [30] and show more features beyond the peak itself [31]. Chumakov et al. found excellent agreement in the frequency range of \(1\,\mathrm{THz}\) to \(1.7\,\mathrm{THz}\) in glycerol, where an exponential decay in the reduced density of states was observed [29]. The reduced density of states and the absorption coefficient measured with THz-TDS are related by a factor of \(\nu^{3}\). Figure 8: Extrapolated spectra at higher frequencies, for sucrose, bicitracin, lysozyme, and HSA lyophilistates, fit to frequencies above the BP maximum. Figure 6: BP visualisation for sucrose, bicitracin, lysozyme, and HSA lyophilistates at different temperatures. Boson peaks have been highlighted with crosses. Figure 7: Anharmonicity parameter evaluated in different frequency ranges for sucrose, bicitracin, lysozyme, and HSA lyophilistates. Error bars are standard error for \(n\) measurements (\(n=5\) for sucrose, \(n=4\) for bicitracin, \(n=3\) for lysozyme and HSA). The peak described by this model is narrowest and least intense in sucrose and gets broader but more intense in the protein samples (Figure 8). Some temperature dependence of the centre frequency is also apparent (Figure 9). In all samples, the centre frequency of the VDOS decreases with increasing temperature, thereby shifting the VDOS to lower frequencies and increasing the absorption coefficient measured at the shoulder (e.g. at 1 THz). 
It is possible that the frequency shift may follow a Bose-Einstein distribution as previously observed for crystalline modes where thermal excitation was mediated by phonons populating an anharmonic potential. A redshift of a mode is observed when phonons are excited by sufficient thermal energy [32]. However, because the data are only extrapolated, we refrain from fitting a model and will simply discuss it in broader, qualitative terms. In future, it might be beneficial to measure similar samples on a spectrometer with higher spectral bandwidth to be able to extract more accurate data. For sucrose, the change in the centre frequency is pronounced above \(T_{\mathrm{g}}^{*}\), whereas in backtracin, the decrease is gradual over the entire temperature range but slightly increased between 180 K and 350 K. The centre frequency of lysozyme generally decreases with temperature below 300 K and is constant above. The centre frequency of HSA lyophilistates is constant upon heating to a temperature of 210 K, followed by a decrease in \(\nu_{\mathrm{BP}}\) and reaching a minimum at 340 K, followed by a slight increase. In all cases, the change in the centre frequency is less at temperatures at which confinement is observed, indicating that the VDOS does not shift. If the molecules are trapped in a conformation, one may assume that the vibrations are temperature independent. The higher frequencies seem to be more affected by the activation of modes at around 200 K. Modes at higher frequencies typically involve a lower reduced mass than low-frequency vibrations. The effect on the higher frequency modes, therefore, is an indication that the modes that become active at around 200 K may involve side chains or functional groups rather than the heavy protein backbone, which would influence the lower frequencies instead. In this respect, using a frequency of 1 THz to analyse the systems is beneficial as it provides insight into anharmonic effects, disappearance of the BP at low temperatures, and activation of modes as well as jamming at increased temperatures. ### Frohlich condensates Frohlich postulated the existence of coherent vibrations in the gigahertz to terahertz range that help facilitate the biological function of biomolecules [33; 34]. In terahertz spectra, such coherent modes appear as peaks in the absorption spectrum that sharpen when the temperature is decreased. Simultaneously, the absorption coefficient at neighbouring frequencies decreases [35]. No such features were found in any measurement (Figure 3) Our data hence show no evidence for Frohlich coherence or the existence of a Frohlich condensate or other quantum effects. In protein solution, terahertz vibrations are propagated by the solvent and coupled to dielectric relaxations. This may lead to the fast dissipation of any postulated coherence or localised states due to the similarity in their frequencies. In this case, only extraordinarily rigid or well-ordered molecular structures would be observable. However, we do not find such evidence for inherent coherent states for the four molecules under investigation. ## IV Conclusions The importance of solvents and the solvation shell surrounding proteins for their function is widely recognized. In the present work, we tried to obtain insights into protein-protein interactions in the dry state analysing lyophilistates of sucrose, bicitracin, lysozyme, and HSA by terahertz spectroscopy. 
The glass transition temperature, \(T_{\mathrm{g}}\), was identified in sucrose by DSC, whereas \(T_{\mathrm{g}}^{*}\) could not be identified for bicitracin, lysozyme, and HSA. However, THz-TDS demonstrated an increase in mobility with temperature. An increase in temperature led to the activation of modes involving only the protein molecules, resulting in an increase in the absolute absorption coefficient and the anharmonicity parameter. The anharmonicity parameter showed a markedly different behaviour below and above the BP centre frequency. Utilising the theoretical model by Chumakov et al., the higher frequencies were also evaluated, and a redshift of the VDOS was predicted. Anharmonicity began to influence the spectra in sucrose at \(T^{*}\) and resulted in an apparent shift of the centre frequency of the BP. A further increase in temperature Figure 9: Extrapolated centre frequency for sucrose, bicitracin, lysozyme, and HSA lyophilistates. Error bars are standard error for \(n\) measurements (\(n=5\) for sucrose, \(n=4\) for bactracin, \(n=3\) for HSA). For the lysozyme lyophilistate, two separate measurements are shown with error bars the respective 95 % confidence intervals of the fit. and, thereby, mobility, led to the dissipation of the BP. For the larger proteins lysozyme and HSA, jamming was observed at increased temperatures after the dissipation of the BP and could only be overcome by a further increase in temperature. Future experiments making use of a higher- bandwidth spectrometer can investigate the impact of temperature change on the VDOS. In the frequency range investigated, we could not find evidence for the occurrence of Frohlich coherence. ###### Acknowledgements. All authors would like to thank Walter Schirmacher for insightful discussions about the nature of the boson peak and theoretical aspects. JK thanks the EPSRC Cambridge Centre for Doctoral Training in Sensor Technologies and Applications (EP/L015889/1) and AstraZeneca for funding. MLA would like to thank Erasmus+ for funding. All authors would like to thank the Cambridge-LMU Strategic Partnership for funding. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
2304.09242
A Framework for Analyzing Cross-correlators using Price's Theorem and Piecewise-Linear Decomposition
Precise estimation of cross-correlation or similarity between two random variables lies at the heart of signal detection, hyperdimensional computing, associative memories, and neural networks. Although a vast literature exists on different methods for estimating cross-correlations, the question "what is the best and simplest method to estimate cross-correlations using finite samples?" is still unclear. In this paper, we first argue that the standard empirical approach might not be the optimal method even though the estimator exhibits uniform convergence to the true cross-correlation. Instead, we show that there exists a large class of simple non-linear functions that can be used to construct cross-correlators with a higher signal-to-noise ratio (SNR). To demonstrate this, we first present a general mathematical framework using Price's Theorem that allows us to analyze cross-correlators constructed using a mixture of piece-wise linear functions. Using this framework and high-dimensional embedding, we show that some of the most promising cross-correlators are based on Huber's loss functions, margin-propagation (MP) functions, and the log-sum-exp (LSE) functions.
Zhili Xiao, Shantanu Chakrabartty
2023-04-18T19:03:27Z
http://arxiv.org/abs/2304.09242v2
# A Framework for Analyzing Online ###### Abstract Precise estimation of cross-correlation or similarity between two random variables lies at the heart of signal detection, hyperdimensional computing, associative memories, and neural networks. Although a vast literature exists on different methods for estimating cross-correlations, the question _what is the best and simplest method to estimate cross-correlations using finite samples?_ is still not clear. In this paper, we first argue that the standard empirical approach might not be the optimal method even though the estimator exhibits uniform convergence to the true cross-correlation. Instead, we show that there exists a large class of simple non-linear functions that can be used to construct cross-correlators with a higher signal-to-noise ratio (SNR). To demonstrate this, we first present a general mathematical framework using Price's Theorem that allows us to analyze cross-correlators constructed using a mixture of piece-wise linear functions. Using this framework and high-dimensional embedding, we show that some of the most promising cross-correlators are based on Huber's loss functions, margin-propagation (MP) functions, and the log-sum-exp functions. ## 1 Introduction Estimating cross-correlations between random variables plays an important role in the field of statistics [1], machine learning [2; 3], and signal detection [4; 5]. This is because the cross-correlation metric measures some form of similarity between the random variables, and hence reveals how one might influence the another. With proper normalization, the metric becomes equivalent to cosine similarity and unitary transforms, both of which are extensively used in linear algebra [6], natural language processing [7], and computer vision [8; 9]. In the emerging field of Hyperdimensional Computing [10; 11] cross-correlations are generally associated with inner products used for information retrieval from sparse distributed memories. In its most general form cross-correlation \(R:\mathcal{R}\times\mathcal{R}\rightarrow\mathcal{R}\) is defined for a pair of zero-mean, unit-variance random variables \(x\in\mathcal{R}\), \(y\in\mathcal{R}\) as \[R:=\mathcal{E}[xy]=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}xy\;\;p(x,y) dxdy. \tag{1}\] where \(p:\mathcal{R}\times\mathcal{R}\rightarrow\mathcal{R}^{+}\) denotes the underlying joint probability distribution from which \(x\) and \(y\) are drawn from. The operator \(\mathcal{E}[.]\) denotes an expectation under the probability measure \(p\). In practice the distribution \(p\) is not known apriori, instead, one has access to \(N\) samples independently drawn from the distribution \(p\). If we denote the sample vectors as \(\mathbf{x}\in\mathcal{R}^{\mathbf{N}}\) and \(\mathbf{y}\in\mathcal{R}^{\mathbf{N}}\), then the empirical cross-correlation can be estimated as \[\hat{R}_{N}=\frac{1}{N}\sum_{n=1}^{N}x_{n}y_{n} \tag{2}\] where \(x_{n},y_{n}\) represent the elements of the vector \(\mathbf{x}\) and \(\mathbf{y}\). Then, by the law of large numbers (LLN), the empirical correlation converges uniformly to the true correlation \(R\) \[\left|\frac{1}{N}\sum_{n=1}^{N}x_{n}y_{n}-R\right|\leq\epsilon\stackrel{{ N\to\infty}}{{\longrightarrow}}0 \tag{3}\] and is depicted in Fig. 1. 
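To make the finite-sample behaviour of the empirical estimator in equation 2 concrete, the following minimal sketch draws \(N\) correlated Gaussian pairs and shows the estimate approaching the true \(R\) as \(N\) grows. The sample sizes and the target correlation are arbitrary choices made only for illustration.

```python
import numpy as np

def empirical_correlation(x, y):
    """Equation 2: (1/N) * sum_n x_n * y_n for zero-mean, unit-variance samples."""
    return float(np.mean(x * y))

rng = np.random.default_rng(0)
R_true = 0.6
cov = [[1.0, R_true], [R_true, 1.0]]

for N in (10, 100, 1_000, 10_000, 100_000):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=N).T
    R_hat = empirical_correlation(x, y)
    print(f"N={N:>6d}  R_hat={R_hat:+.4f}  |R_hat - R|={abs(R_hat - R_true):.4f}")
```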
Note that equation 2 admits an online approach for estimating \(\hat{R}_{N}\) according to \[\hat{R}_{N}=\left(1-\frac{1}{N}\right)\hat{R}_{N-1}+\frac{1}{N}x_{N}y_{N} \tag{4}\] In this paper, we explore an alternate approach towards estimating cross-correlations using a class of non-linear functions \(f:\mathcal{R}\times\mathcal{R}\to\mathcal{R}\) such that \[\frac{1}{N}\sum_{n=1}^{N}f(x_{n},y_{n})\stackrel{{ N\to\infty}}{{ \longrightarrow}}\mathcal{E}[f(x,y)]=g(R) \tag{5}\] where \(g:\mathcal{R}\to\mathcal{R}\) is a monotonic function. The uniform convergence of \(f\) is illustrated in Figure. 1 where \[\left|\frac{1}{N}\sum_{n=1}^{N}f(x_{n},y_{n})-g(R)\right|\leq\epsilon \stackrel{{ N\to\infty}}{{\longrightarrow}}0 \tag{6}\] The main premise of this paper is that when \(x\) and \(y\) are drawn from a joint Gaussian distribution, the function \(g\) is known apriori or can be estimated with high accuracy. As a result, and as illustrated in Figure 1, for a finite sample size \(N\), an estimate of the correlation using \(g^{-1}(\frac{1}{N}\sum_{n=1}^{N}f(x_{n},y_{n}))\) could be closer than \(\hat{R}\) to the true cross-correlation \(R\). Under what conditions this might be true will be the main topic of investigation for this paper. Note that the focus of this paper The paper is organized as follows: In section 2, we first propose a mathematical framework that can be used to analyze cross-correlators for a general class of non-linear function \(f\), and for jointly distributed Gaussian inputs. We use the framework to analyze different types of estimators which include the linear-rectifier cross-correlator, Margin Propagation (MP) correlators, Huber-type cross-correlators, and log-sum-exp (LSE) estimators. In section 3, we extend the framework to arbitrary input distributions based on Hyperdimensional mapping using Walsh-Hadamard transforms. In section 4, we show experiments evaluating different correlators and the transformation method, and discuss the advantages and disadvantages of their possible hardware implementations. Section 5 concludes the paper with a brief perspective on future directions. ## 2 Analysis Framework using Price's Theorem In this section we present an analysis framework that can be used to understand the behavior of different cross-correlators. A cross-correlator can be viewed as a difference between two functions and in Fig. 2(a) we illustrate this Figure 1: (a) Statistical definition of cross-correlation \(R\) between random variables \(x\) and \(y\) under a joint distribution \(p(x,y)\); (b) Empirical cross-correlation \(\hat{R}_{N}\) based on samples \((x_{n},y_{n})\); (c) Uniform convergence of \(\hat{R}_{N}\) to \(R\); (d) Uniform convergence of empirical non-linear cross-correlator \(\hat{g}_{N}\) to \(g(R)\); and (e) Estimation of \(R\) using \(g^{-1}\). for the empirical cross-correlator defined in equation 2 which can be expressed as \[\frac{1}{N}\sum_{n=1}^{N}x_{n}y_{n}=\frac{1}{4N}\sum_{n=1}^{N}(x_{n}+y_{n})^{2}-( x_{n}-y_{n})^{2} \tag{7}\] The symmetric quadratic functions \((x+y)^{2}\) and \(-(x-y)^{2}\) are shown in Fig. 2(a) which when summed together results in the product \(xy\). When extended to \(N\) dimensions, the quadratic functions in equation 2 become \(L_{2}\) distances \(||\mathbf{x}+\mathbf{y}||_{2}^{2}\) and \(-||\mathbf{x}-\mathbf{y}||_{2}^{2}\) and their sum is proportional to the empirical cross-correlation \(\frac{1}{N}\sum_{n=1}^{N}x_{n}y_{n}\). The concept can be generalized to other norms and in Fig. 
2(b) we show the equivalent construction for an \(L_{1}\) type cross-correlators using \(L_{1}\) distances \(|x+y|\) and \(-|x-y|\). Both these constructions can be viewed as special cases of mixtures of piece-wise linear functions as shown in Fig. 2(c) and can be expressed as \[f(x,y)=h(x+y)-h(x-y) \tag{8}\] where \[h(x)=\frac{1}{2}\sum_{l=1}^{L}w_{l}\left(|x-\alpha_{l}|+|x+\alpha_{l}|\right), \tag{9}\] with parameters \(\alpha_{l}\geq 0\), \(w_{l}\geq 0;\sum_{l}w_{l}=1\) and \(|\cdot|\) is an absolute-value function defined as \[|x|=\left\{\begin{array}{c c}x&;&x\geq 0\\ -x&;&x<0\end{array}\right. \tag{10}\] We now state the Lemma that can be used to compute the function \(g(R)=\mathcal{E}[f(x,y)]\). **Lemma 2.1**.: _If the function \(f(x,y)\) is given by equation 8 and 9 and if the joint probability distribution of \(x\in\mathbb{R}\) and \(y\in\mathbb{R}\) are is then given by_ \[p(x,y)=\frac{1}{2\pi\sqrt{1-R^{2}}}\exp\left[-\frac{x^{2}+y^{2}-2Rxy}{2\left(1 -R^{2}\right)}\right], \tag{11}\] _where \(R\in[-1,1]\) is the cross-correlation between \(x\) and \(y\), then_ \[g(R)=\frac{1}{2\sqrt{\pi}}\sum_{l=1}^{L}\int_{0}^{R}\left(\frac{w_{l}}{\sqrt{ (1+\rho)}}\exp\left[-\frac{\alpha_{l}^{2}}{4\left(1+\rho\right)}\right]+\frac{ w_{l}}{\sqrt{(1-\rho)}}\exp\left[-\frac{\alpha_{l}^{2}}{4\left(1-\rho\right)} \right]\right)d\rho \tag{12}\] Proof.: Since \(f\) is a memory-less function with a well-defined Fourier transform and \(x\) and \(y\) are zero-mean, unit variance, jointly distributed Gaussian random variables, we can apply Price's theorem [12, 13, 14] which states that \[\frac{\partial\mathcal{E}(f)}{\partial R}=\mathcal{E}\left[\frac{\partial^{2} f}{\partial x\partial y}\right] \tag{13}\] Figure 2: A conceptual scalar demonstration of a) empirical correlators, b) \(L_{1}\) type correlators, and c) an example of correlators using mixtures of piece-wise linear functions with \(L=1,w_{1}=1\), and \(\alpha_{l}=0.5\). All correlators \(f(x,y)\) can be viewed as a difference between \(h(x+y)\) and \(h(x-y)\). The upper three curves represent \(h(x+y)\), the lower three curves represent \(-h(x+y)\), and curves in the middle are \(f(x,y)\) for different values of \(y=y_{1},y_{2},y_{3}\). where the expectation operator \(\mathcal{E}\) is defined as \[\mathcal{E}\left(f\right)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x,y)p( x,y;\rho)dxdy=g(R). \tag{14}\] The partial derivatives of the sub-functions \(h(x+y),h(x-y)\) in Eqn.9 are \[\frac{\partial^{2}h(x+y)}{\partial x\partial y}=\frac{1}{2}\sum_{l=1}^{L}w_{l} \left[\delta\left(x+y-\alpha_{l}\right)+\delta\left(x+y+\alpha_{l}\right) \right], \tag{15a}\] \[-\frac{\partial^{2}h(x-y)}{\partial x\partial y}=\frac{1}{2}\sum_{l=1}^{L}w_{l} \left[\delta\left(x-y-\alpha_{l}\right)+\delta\left(x-y+\alpha_{l}\right) \right]. \tag{15b}\] where \(\delta(.)\) denotes the Dirac-delta function. Substituting in equation 13 leads to \[\mathcal{E}\left[\frac{\partial^{2}h(x+y)}{\partial x\partial y}\right] =\frac{1}{2}\sum_{l=1}^{L}w_{l}\int_{-\infty}^{\infty}p(x,-x+ \alpha_{l})+p(x,-x-\alpha_{l})dx, \tag{16a}\] \[=\frac{1}{\sqrt{\pi(1+R)}}\sum_{l=1}^{L}w_{l}\exp\left[-\frac{ \alpha_{l}^{2}}{4\left(1+R\right)}\right],\] (16b) \[-\mathcal{E}\left[\frac{\partial^{2}h(x-y)}{\partial x\partial y}\right] =\frac{1}{2}\sum_{l=1}^{L}w_{l}\int_{-\infty}^{\infty}p(x,x- \alpha_{l})+p(x,x+\alpha_{l})dx,\] (16c) \[=\frac{1}{\sqrt{\pi(1-R)}}\sum_{l=1}^{L}w_{l}\exp\left[-\frac{ \alpha_{l}^{2}}{4\left(1-R\right)}\right]. 
\tag{16d}\] Thus, using \(f=h(x+y)-h(x-y)\) and from 13 \[\frac{\partial g}{\partial R}=\frac{1}{\sqrt{\pi(1+R)}}\sum_{l=1}^{L}w_{l} \exp\left[-\frac{\alpha_{l}^{2}}{4\left(1+R\right)}\right]+\frac{1}{\sqrt{\pi (1-R)}}\sum_{l=1}^{L}w_{l}\exp\left[-\frac{\alpha_{l}^{2}}{4\left(1-R\right)}\right] \tag{17}\] leads to the expression for \(g(R)\), \[g(R)=\int_{0}^{R}\frac{1}{\sqrt{\pi(1+\rho)}}\sum_{l=1}^{L}w_{l}\exp\left[- \frac{\alpha_{l}^{2}}{4\left(1+\rho\right)}\right]+\frac{1}{\sqrt{\pi(1-\rho) }}\sum_{l=1}^{L}w_{l}\exp\left[-\frac{\alpha_{l}^{2}}{4\left(1-\rho\right)} \right]d\rho \tag{18}\] Even though the equation 12 may not be solved in closed-form, it is analytical and hence can be used to visualize the form of \(g(R)\) for specific choices of \(\alpha_{l}\) and \(w_{l}\). The partial derivative equation in 17 is also useful for choosing appropriate offsets to construct desired correlators, and the derivative when there is only one offset is visualized in Fig. 3 for different values of the offset. **Example 1:** When \(L=1,w_{1}=1,\alpha_{1}=0\), the function \(f\) is reduced to \[f(x,y)=|x+y|-|x-y|, \tag{19}\] which is the well-studied linear rectifier correlator [13]. In this case, the relation 12 can be evaluated in closed form and is given by \[g_{L1}(R)=\frac{2}{\sqrt{\pi}}(\sqrt{1+R}-\sqrt{1-R}). \tag{20}\] **Example 2:** When \(w_{l}=1/L,\alpha_{l}=c/L,l=1,..,L\) and \(c,L\rightarrow\infty\), the function \(f\) is reduced to \[f(x,y)=\frac{1}{2c}(\left(x+y\right)^{2}-\left(x-y\right)^{2}), \tag{21}\] where \(c>|x|\) is the range of inputs. So the function \(f\) becomes the empirical correlator. In this case, the summation in the relation 12 can be replaced by integrals in the limit \(L\rightarrow\infty\) in which case \[\frac{\partial g_{L2}}{\partial R} = \frac{1}{c\sqrt{\pi(1+R)}}\int_{0}^{\infty}\exp\left[-\frac{x^{2} }{4\left(1+R\right)}\right]dx\] \[+\frac{1}{c\sqrt{\pi(1-R)}}\int_{0}^{\infty}\exp\left[-\frac{x^{ 2}}{4\left(1-R\right)}\right]dx\] \[= \frac{2}{c}.\] Therefore, \(g_{L2}(R)=\frac{2}{c}R\) matches the result for a scaled empirical cross-correlation. The g(R) of both empirical and linear rectifier correlators are shown in Fig. 4(a). Let's denote \(\mathcal{E}\left(f\right)\) or \(g(R)\) by \(y\), we can derive the following from \(g_{L1}(R)\) in 20 \[R^{2}=\frac{\pi}{4}y^{2}-\frac{\pi^{2}}{64}y^{4}. \tag{23}\] This suggests that \(g^{-1}\) in Fig. 1 can be robustly estimated using a polynomial expansion with a relatively low degree. This is important since the closed-form solution for equation 18 can not be computed for different choices of \(w_{l},\alpha_{l}\). As a result, \(g^{-1}\) has to be learned/estimated by drawing samples with known apriori cross-correlation, which is then used to estimate \(R\) according to Fig. 1. As we will show later in section 3, this calibration procedure and procedure to estimate \(R\) can be agnostic to the input distribution. We now apply the calibration procedure to three other types of functions of the type given by expression 8. The first is a margin-propagation (MP) function that can be constructed using a finite number of splines \(L\) and self-normalizes itself such that the maximum gradient \(|f^{\prime}|\leq 1\). The MP function is given by \[(x-z)_{+}+(-x-z)_{+} =\gamma, \tag{24}\] \[h(x) =z, \tag{25}\] where \((\cdot)_{+}\) is a rectifying linear unit (ReLU) function, and \(\gamma>0\) is a hyper-parameter. 
The second function is the Huber function [15] which requires an infinite number of splines and is given by \[h(x)=\left\{\begin{array}{rl}&0.5x^{2}/\sigma,\;\;x<\sigma\\ &|x|-\frac{1}{2}\sigma,\;\;x\geq\sigma,\end{array}\right. \tag{26}\] where \(\sigma>0\) is a threshold parameter. Note that the maximum gradient of the Huber function is 1. The third function is a log-sum-exp (LSE) function which also requires an infinite number of splines and is given by \[h(x)=\frac{1}{a}(\log(\exp{[ax]}+\exp{[-ax]}), \tag{27}\] where \(a>0\) is a scaling factor. Note that the maximum gradient of the log-sum-exp function is also 1. Fig. 4b),c), and d) shows the average output of cross-correlation function \(\hat{g}(R)\) corresponding to the MP, Huber and LSE functions, which is used as an approximation of the correlation function \(g(R)\) to get the calibration function \(g^{-1}\). It can be observed from Fig. 4c) and d) that the \(g(R)\) for the Huber and LSE functions are bounded above by \(g_{L1}(R)\) and the magnitude of gradient \(|\frac{\partial g}{\partial R}|\) monotonically increases as R increases. In fact, the normalized \(g(R)\) for the Huber function and LSE functions are bounded above and below by the normalized \(g_{L1}(R)\) and \(g_{L2}(R)\). For Huber functions, it's a combination of the quadratic function (\(L_{2}\)) and absolute value function (\(L_{1}\)). As the threshold value \(\sigma\) increases, its output is closer to the empirical correlator. Conversely, it becomes a linear rectified correlator when \(\sigma\) is sufficiently small. For LSE functions, as the scaling factor \(\alpha\) increases, the \(h(x)\) for LSE functions in expression 27 can be simplified to \(\frac{1}{a}log(exp\left[a|x|\right]=|x|\) as the negative part will go to zero exponentially fast. On the other hand, as \(a\) decreases, the function \(h(x)\) can be approximated by the Taylor series expanded at zero, which leads to the following \[h(x)=\frac{1}{a}\log(\sum_{n=0}^{\infty}\frac{(ax)^{n}}{n!}+\frac{(-ax)^{n}}{n!}). \tag{28}\] The odd-degree terms cancel each other out and higher-order terms decay fast, which leads to \[h(x)\approx\frac{1}{a}\log(2+(ax)^{2}). \tag{29}\] Figure 4: The normalized \(g_{L1}(R)\) and \(g_{L2}(R)\) and the estimated \(\hat{g}(R)\) for the MP, Huber, and LSE functions for standard bivariate normal distributions from Monte Carlo experiments. The expected output of the empirical and linear rectifier correlators are added for comparison. a) The normalized \(g_{L1}(R)\) and \(g_{L2}(R)\); b) The \(\hat{g}(R)\) of MP correlators with \(\gamma=\{0,1,1.45,2\}\); c) The \(\hat{g}(R)\) of Huber correlators with \(\sigma=\{10^{-7},0.5,1.4,2\}\); d) The \(\hat{g}(R)\) of LSE correlators with \(a=\{0.1,1,3\}\). Applying the same trick to the logarithm but expanding at 2, we have \[h(x)\approx\frac{1}{a}(\log(2)+\frac{1}{2}(ax)^{2}+...)\approx\frac{1}{a}+\frac{ 1}{2}ax^{2}. \tag{30}\] Therefore, the LSE correlator approaches the empirical correlator as \(a\) decreases.Since the normalized \(g(R)\) of Huber and log-sum-exp correlators fall between \(g_{L1}(R)\) and \(g_{L2}(R)\) and the magnitude of gradient \(|\frac{\partial g}{\partial R}|\) monotonically increases as R increases, the inverse cross-correlation function \(g^{-1}\) can be approximated by a polynomial of degree lower or equal to the degree needed for \(g_{L1}^{-1}\). In practice, we found that fourth order polynomial is sufficient for calibration of \(g_{L1}^{-1}\). 
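A minimal sketch of the calibration procedure described in this section (build \(f(x,y)=h(x+y)-h(x-y)\) from a chosen \(h\), estimate \(\hat{g}(R)\) by Monte Carlo at known correlations, and fit a low-degree polynomial for \(g^{-1}\)) is given below. The Huber function of equation 26 is used as \(h\) with an arbitrary \(\sigma\); the sample sizes and the degree-5 polynomial are illustrative choices rather than the settings used for Fig. 4.

```python
import numpy as np

def huber(u, sigma=1.4):
    """Huber function (equation 26): quadratic below sigma, linear above."""
    u = np.abs(u)
    return np.where(u < sigma, 0.5 * u**2 / sigma, u - 0.5 * sigma)

def f_corr(x, y, h=huber):
    """Correlator of the form f(x, y) = h(x + y) - h(x - y)."""
    return h(x + y) - h(x - y)

def calibrate_g_inverse(h=huber, N=200_000, degree=5, seed=0):
    """Fit a polynomial mapping mean f(x, y) -> R using jointly Gaussian samples."""
    rng = np.random.default_rng(seed)
    R_grid = np.linspace(-0.95, 0.95, 39)
    g_hat = []
    for R in R_grid:
        s1, s2 = rng.standard_normal((2, N))
        x, y = s1, R * s1 + np.sqrt(1 - R**2) * s2   # mix two independent normals to correlation R
        g_hat.append(np.mean(f_corr(x, y, h)))
    return np.polynomial.Polynomial.fit(g_hat, R_grid, degree)

g_inv = calibrate_g_inverse()

# Apply the calibrated inverse to fresh data with a known correlation.
rng = np.random.default_rng(1)
R_true, N = 0.7, 4096
s1, s2 = rng.standard_normal((2, N))
x, y = s1, R_true * s1 + np.sqrt(1 - R_true**2) * s2
R_est = g_inv(np.mean(f_corr(x, y)))
print(f"true R = {R_true:.2f}, estimated R = {R_est:.3f}")
```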
For calibrating MP correlators, higher degree polynomials were needed as the value of \(\gamma\) increases as its \(|\frac{\partial g}{\partial R}|\) is not monotonically increasing as correlation increases. As such, a fifth-order polynomial was used to learn the inverse cross-correlation function \(g^{-1}\). ## 3 Extension to non-Gaussian distributions The theoretical and experimental results presented in section 2 assumed random variables with joint Gaussian distributions. In this section, we extend the previous results for non-Gaussian distributions. To achieve this, we use results from the Hyperdimensional computing literature, which state that variances and cross-correlations are preserved when random variables are mapped into high-dimensional space using Unitary random matrices. **Lemma 3.1**.: _Let \(x\) and \(y\) be zero-mean random variables with unit variance and with a cross-correlation \(R\). Let \(\Phi:\mathbb{R}^{N}\rightarrow\mathbb{R}^{M}\) denote a high-dimensional embedding using a Unitary transform such that \(\mathcal{E}[\Phi(\mathbf{x})]=\mathbf{0}\). Then, as \(N\rightarrow\infty\), \(\frac{1}{N}\left\langle\Phi(\mathbf{x}),\Phi(\mathbf{y})\right\rangle\to R\), where \(\left\langle\cdot,\cdot\right\rangle\) is the inner product._ Proof.: Suppose \(\mathbf{x}\) and \(\mathbf{y}\) are \(\mathbb{R}^{N}\)-valued random vector, and each entry \(x_{n}\), \(y_{n}\) are independently identically distributed (i.i.d) variables with the joint probability density function \(p(x,y;R)\) \[\frac{1}{N}\left\langle\Phi(\mathbf{x}),\Phi(\mathbf{y})\right\rangle=\frac{1 }{N}\left\langle\mathbf{x},\mathbf{y}\right\rangle=\frac{1}{N}\sum_{n=1}^{N}x _{n}y_{n}\stackrel{{ N\rightarrow\infty}}{{\longrightarrow}}R \tag{31}\] The Walsh-Hadamard-Transform is one such unitary transform \(\mathcal{H}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) and it can be represented by a \(N\times N\) Hadamard matrix. An example \(4\times 4\) WHT matrix is shown below \[H_{4}=\frac{1}{2}\begin{pmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\end{pmatrix} \tag{32}\] The Hadamard matrices are orthogonal and symmetric matrices composed of +1 and -1 with a normalization factor \(1/\sqrt{N}\), which makes it easy for implementations and computations. Besides keeping the covariance between random variables, it can also be shown that the transformed zero-mean variables converge to the joint Gaussian distribution with the same variance and covariance. **Lemma 3.2**.: _Let \(X\) and \(Y\) be zero-mean random variables with unit variance and with a cross-correlation \(R\). Let \(\mathcal{H}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) denote the Walsh-Hadamard-Transform. Suppose the entries of the vector equation \(x^{\prime}=\mathcal{H}(x)\) are given by \(x^{\prime}_{n}=h_{n}(x_{1},...,x_{n})\), and \(h_{n}(\cdot)\) is therefore_ \[h_{n}(x_{1},...,x_{n})=\begin{cases}&\frac{1}{\sqrt{N}}\sum_{n=1}^{N}x_{n}, \ \ \ \ n=1,\\ &\frac{1}{\sqrt{N}}(\sum_{n=1}^{N/2}x_{n}-\sum_{n=1}^{N/2}x_{n}),\ \ n\neq 1,\end{cases} \tag{33}\] _where the \(x_{n}\) are independently identically distributed (i.i.d.) samples from \(X\). Then, as \(N\rightarrow\infty\), the joint probability distribution \(p(x^{\prime}_{n},y^{\prime}_{n})\) converges to a bivariate Gaussian distribution with zero-mean, unit variance and covariance \(R\)._ Proof.: Suppose \(X\) and \(Y\) are zero-mean random variables with finite variances and covariance R. 
According to the multivariate Central Limit Theorem (CLT) [16], as \(N\rightarrow\infty\) the joint distribution \(p(\sqrt{N}\bar{X}_{N},\sqrt{N}\bar{Y}_{N})\) converges to a bivariate Gaussian distribution with zero mean and the same variances and covariance \(R\), where \(\bar{X}_{N}=\frac{1}{N}\sum_{n=1}^{N}x_{n}\) is the average of \(N\) independently identically distributed samples of \(X\). Notice that the transformed entry after the WHT, \(x^{\prime}_{n}=h_{n}(x_{1},\ldots,x_{N})\), can be expressed as \[x^{\prime}_{n}=\begin{cases}&\sqrt{N}\bar{X}_{N},\ \ \ \ \ n=1,\\ &\frac{1}{\sqrt{2}}\sqrt{\frac{N}{2}}\bigl(\bar{X}^{(1)}_{N/2}-\bar{X}^{(2)}_{N/2}\bigr),\ \ n\neq 1,\end{cases} \tag{34}\] where \(\bar{X}^{(1)}_{N/2}\) and \(\bar{X}^{(2)}_{N/2}\) denote the averages over the two disjoint halves of the samples. For \(n=1\), the CLT obviously applies to the transformed inputs \(x^{\prime}_{1}\) and \(y^{\prime}_{1}\). For the case of \(n\neq 1\), note that the CLT also applies to \(\sqrt{\frac{N}{2}}\bar{X}^{(1)}_{N/2}\) and \(\sqrt{\frac{N}{2}}\bar{X}^{(2)}_{N/2}\) (and likewise for \(Y\)), so each becomes bivariate Gaussian with zero mean and the same variances and covariance, and so is their difference divided by \(\sqrt{2}\). As such, using the same argument as in section 2, it can be shown that for correlators that can be expressed by equations 8 and 9, the expected output \(\mathcal{E}[f(\mathcal{H}(x),\mathcal{H}(y))]\) is equal to \(g(R)\). In other words, for non-Gaussian distributed variables with zero mean and finite covariance \(R\), we can first transform the inputs to a jointly Gaussian distribution, which preserves the covariance \(R\), and then use the cross-correlator in section 2 to estimate the cross-correlation \(R\) using the transformed data and the same \(g^{-1}\). Figure 5: The standard deviation of cross-correlation estimation error of MP, Huber, and LSE correlators for different correlation levels at the dimension of 256, with the SNR of empirical and linear rectifier added for reference. a) The error plot of MP correlators with \(\gamma=\{1,1.45,2\}\); b) The error plot of Huber correlators with \(\sigma=\{10^{-7},0.5,1.4,2\}\); c) The error plot of LSE correlators with \(a=\{0.1,1,3\}\). ## 4 Experiments Results and Analysis ### Experiments and Results for Jointly Gaussian Inputs This section presents the results of using different correlators to estimate the covariance for the standard jointly normal distribution. To study and compare their performance, vectors of different lengths are randomly sampled from the zero-mean and unit-variance Gaussian distribution. Each vector pair \(S_{1},S_{2}\in\mathbb{R}^{D}\) is mixed in the following way to generate a bivariate Gaussian distribution \(X=(X_{1},X_{2})\) with different correlations \(R\), used to learn and test the inverse cross-correlation function \(g^{-1}\), \[X_{1}=S_{1}, \tag{35}\] \[X_{2}=RS_{1}+\sqrt{1-R^{2}}\,S_{2}, \tag{36}\] \[X=(X_{1},X_{2})\sim N(\vec{0},\Sigma),\quad\Sigma=\begin{pmatrix}1&R\\ R&1\end{pmatrix}. \tag{37}\] In Fig. 5 we display the standard deviation of the cross-correlation estimation errors made by the MP, Huber, and LSE correlators with different parameters at different levels of cross-correlation, using the learned inverse cross-correlation function \(g^{-1}\). It is observed that the linear rectifier correlator is more accurate when the signal of interest is highly correlated, while the empirical correlator makes smaller errors in the other case.
As discussed in section 2, the Huber and LSE correlators behave more like the empirical correlator when \(\sigma\) is high and \(a\) is small, and approach the linear rectifier correlator otherwise. The MP correlator using the function in (24) is equivalent to the linear rectifier correlator when \(\gamma=0\). As \(\gamma\) increases, the performance degrades and becomes even worse than that of the empirical correlator. In Fig. 6, we plot the signal-to-noise-ratio (SNR) for each of these five cross-correlators for different sample sizes or correlation-window sizes \(N\). The result shows that for a jointly Gaussian input distribution, the linear rectifier-based correlator shows the best SNR. Also, the SNR increases by 3dB when the correlation length \(N\) is doubled. This result can be attributed to the reduction in the estimation error due to simple averaging. Figure 6: The SNR plots of MP, Huber, and LSE correlators for correlator lengths from 16 to 65536, with the SNR of the empirical and linear rectifier correlators as a reference. a) The SNR plot of MP correlators with \(\gamma=\{1,1.45,2\}\); b) The SNR plot of Huber correlators with \(\sigma=\{10^{-7},0.5,1.4,2\}\); c) The SNR plot of LSE correlators with \(a=\{0.1,1,3\}\). ### Non-Gaussian Inputs and Walsh-Hadamard-Transformation In this section, the WHT method was tested on non-Gaussian distributions to verify that the function \(g(R)\) remains unchanged for non-Gaussian inputs after being transformed by the WHT. Fig. 7 presents the distribution of the random vectors used in the experiment. The first non-Gaussian distribution tested, shown in Fig. 7(a), has zero-mean, unit-variance uniform marginal distributions. In this case, the original joint distribution is symmetric. The second non-Gaussian distribution tested, shown in Fig. 7(b), has asymmetric zero-mean, unit-variance marginal input distributions. In particular, the marginal distribution of the Y vector is composed of samples drawn from two normal distributions with opposite means and unequal variances, with equal probabilities. From Fig. 7, it can be seen that the vectors are jointly Gaussian distributed after the transformation. Monte Carlo experiments show that the expected correlator output \(\mathcal{E}[f(\mathcal{H}(x),\mathcal{H}(y))]\) is the same as the \(g(R)\) for jointly Gaussian inputs, so no recalibration is needed to learn \(g^{-1}\) for different distributions. On the other hand, it should be noticed that the standard deviation of the cross-correlation estimation errors is not guaranteed to be the same for different distributions, even after the WHT transformation. To see this, notice that the WHT process will not change the output of the empirical correlator, because the WHT transformation is unitary. However, the variance of the estimation error made by the empirical correlator, which is given by \[Var(XY-\mathcal{E}[XY])=\mathcal{E}[(XY)^{2}]-g(R)^{2}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}x^{2}y^{2}\;p(x,y)\,dx\,dy-g(R)^{2}, \tag{38}\] will change as the joint probability density function \(p(x,y)\) changes for varying distributions. In particular, for the jointly Gaussian distribution, it can be verified that \(Var(XY-\mathcal{E}[XY])=1+R^{2}\), whereas it is \(1-\frac{1}{5}R^{2}\) in the case of the jointly uniform distribution used in the symmetric experiment. The standard deviation of estimation error plots for the symmetric non-Gaussian test and the asymmetric test are displayed in Figs. 8 and 9.
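Both variance expressions above, and the fact that the WHT leaves the normalized inner product -- and hence the estimated correlation -- unchanged, can be checked numerically. A minimal sketch is given below; the sample sizes, the seed, and the choice \(R=0.6\) are illustrative, and the sources are mixed as in Eqs. (35)-(37).

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
R = 0.6

def mix(s1, s2, R):
    # Mixing of Eqs. (35)-(37): unit-variance sources -> pair with correlation R
    return s1, R * s1 + np.sqrt(1.0 - R**2) * s2

# Per-sample product variance, Eq. (38), for two input laws
M = 500_000
xg, yg = mix(rng.standard_normal(M), rng.standard_normal(M), R)
xu, yu = mix(rng.uniform(-np.sqrt(3), np.sqrt(3), M),
             rng.uniform(-np.sqrt(3), np.sqrt(3), M), R)
print(np.var(xg * yg), 1 + R**2)        # Gaussian inputs: ~ 1 + R^2
print(np.var(xu * yu), 1 - R**2 / 5)    # symmetric uniform construction: ~ 1 - R^2/5

# The WHT is unitary, so the normalized inner product (the estimated correlation)
# is unchanged after the transform (Lemma 3.1).
N = 2048
H = hadamard(N) / np.sqrt(N)
x, y = mix(rng.uniform(-np.sqrt(3), np.sqrt(3), N),
           rng.uniform(-np.sqrt(3), np.sqrt(3), N), R)
print(x @ y / N, (H @ x) @ (H @ y) / N)  # identical up to floating-point error
```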
It is observed that the contour of the error plot changes for all correlators except the linear rectifier correlator, which is still the best-performing cross-correlation estimator for these two specific non-Gaussian input distributions. Figure 7: The distribution of the random vectors with zero correlation before and after the WHT transformation. a) The random vectors for the symmetric non-Gaussian distribution test. The original X and Y are uniformly distributed with zero mean and unit variance; b) The asymmetric non-Gaussian distribution, where the marginal distribution for X follows the standard normal distribution before the WHT, and Y is a mixture of two normal distributions with opposite means and unequal variances. Y is zero mean and unit variance as well. ## 5 Discussions and Conclusions In this paper, we presented a mathematical framework for analyzing different types of online cross-correlators. The analysis has been verified by Monte Carlo simulation for different input distributions. However, error analysis reveals that the shape of the error profile exhibits a trade-off. Online cross-correlation estimators that exhibit high errors near \(R\approx 0\) make fewer errors near \(|R|\approx 1\). The hyperparameters of the Huber estimators, MP estimators, and log-sum-exp estimators can be adapted to achieve different error profiles. However, the complexity of implementing these different online estimators on hardware could be significantly different. The Huber cross-correlator relies on the quadratic function and hence may be difficult to implement on hardware. On the other hand, the MP and LSE correlators can be easily implemented on analog hardware [17]. Even though MP and LSE cross-correlators require calibration to estimate the function \(g^{-1}\), we show that the calibration process can be agnostic to input distributions using the WHT transformation. Another potential advantage of the other correlators is reduced computational complexity and improved dynamic range when implemented on digital systems. It is obvious that the computation of the linear rectifier correlator and the MP correlator (without additional offsets) is much cheaper than that of the empirical correlator, as they use only additions [18]. The addition operation is immune to underflow on a fixed-point system. The LSE correlator also possesses the advantage of numerical stability at the cost of computational complexity; the log-sum-exp operation is a common trick used in machine learning to address numerical issues in gradient updates. Figure 8: The standard deviation error plot of MP, Huber, and LSE correlators for the symmetric input distribution test when the dimension is 256. a) The error plot of MP correlators with \(\gamma=\{1,1.45,2\}\); b) The error plot of Huber correlators with \(\sigma=\{10^{-7},0.5,1.4,2\}\); c) The error plot of LSE correlators with \(a=\{0.1,1,3\}\). In terms of accuracy for estimating the cross-correlation of jointly Gaussian distributed inputs, we can see that the empirical method may not be the best correlator. The linear rectifier correlator is the best at estimating the covariance for highly correlated signals and in terms of overall SNR. The performance of the Huber and LSE correlators is bounded by that of the linear rectifier and empirical correlators. The MP correlator using the function in (24) is equivalent to the linear rectifier correlator when \(\gamma=0\). As \(\gamma\) increases, its performance degrades, but it can potentially be improved by introducing offsets.
Of course, the above observations are not guaranteed to hold for other input distributions and are left for future research.
2310.02288
Vortex Effects in Merging Black Holes and Saturons
Vorticity has recently been suggested to be a property of highly-spinning black holes. The connection between vorticity and limiting spin represents a universal feature shared by objects of maximal microstate entropy, so-called saturons. Using $Q$-ball-like saturons as a laboratory for black holes, we study the collision of two such objects and find that vorticity can have a large impact on the emitted radiation as well as on the charge and angular momentum of the final configuration. As black holes belong to the class of saturons, we expect that the formation of vortices can cause similar effects in black hole mergers, leading to macroscopic deviations in gravitational radiation. This could leave unique signatures detectable with upcoming gravitational-wave searches, which can thereby serve as a portal to macroscopic quantum effects in black holes.
Gia Dvali, Oleg Kaikov, Florian Kuhnel, Juan Sebastián Valbuena-Bermúdez, Michael Zantedeschi
2023-10-02T18:00:03Z
http://arxiv.org/abs/2310.02288v2
# Vortex Effects in Merging Black Holes and Saturons ###### Abstract Vorticity has recently been suggested to be a property of highly-spinning black holes. The connection between vorticity and limiting spin represents a universal feature shared by objects of maximal microstate entropy, so-called saturons. Using \(Q\)-ball-like saturons as a laboratory for black holes, we study the collision of two such objects and find that vorticity can have a large impact on the emitted radiation as well as on the charge and angular momentum of the final configuration. As black holes belong to the class of saturons, we expect that the formation of vortices can cause similar effects in black hole mergers, leading to macroscopic deviations in gravitational radiation. This could leave unique signatures detectable with upcoming gravitational-wave searches, which can thereby serve as a portal to macroscopic quantum effects in black holes. _Introduction._ Recently, it has been proposed that black holes may admit vortex structures [1]. This proposal is supported by two separate lines of reasoning. First, such a possibility is rather natural within the description of a black hole as a condensate of soft gravitons at a quantum critical point [2; 3], as in general, the Bose-Einstein condensates are known to exhibit the vortex structure. Secondly, the possibility of vortices within black holes is supported by an alternative reasoning that is independent of a particular microscopic proposal. Instead, this argument relies on the universal properties of the phenomenon of saturation. It has recently been argued [4] that key black hole properties, such as the area-law of the entropy, a near-thermal evaporation, and a long time-scale of information retrieval, are not specific to black holes. Rather, they represent universal features of objects exhibiting the maximal microstate entropy permitted by a given quantum-field-theoretical (QFT) description. Such objects were referred to as _saturons_. Several explicit examples have been studied in a series of papers [4; 5; 6; 7; 8], which fully confirm the above universality. This remarkable correspondence between the black holes and other saturons gives us a double advantage. First, non-gravitational saturons emerge as interesting creatures in their own right, which can have spectacular consequences both for fundamental physics as well as for observations. In particular, they can have various cosmological implications [9]. At the same time, saturons can serve as a laboratory for understanding the existing features of black holes and predicting new ones, solely using the power of universality of saturation, without the need to enter into the technicalities of quantum gravity. The present paper is devoted to one of such features: vorticity. The key starting point of our study is the following. First, it has been noticed in Ref. [1] that the saturon spin is linked with its internal vorticity. Crucially, the vorticity sets the relation between the maximal spin \(J^{\rm max}\) of a saturon and its entropy \(S\), \[J^{\rm max}=S\,. \tag{1}\] Remarkably, this relation copies the analogous well-known relation for extremal black holes, \(J^{\rm max}_{\rm BH}=S_{\rm BH}\). In the case of a black hole, there exists no commonly accepted microscopic explanation of the above relation. However, in light of the black hole/saturon correspondence, it was suggested in Ref. [1] that the underlying mechanism limiting the black hole spin must be vorticity, as is the case for saturons. 
This implies that vorticity is expected to be exhibited by highly-spinning black holes. Beyond the explanation of the upper bound on the black holes' spin, this conjecture also sheds light on other seemingly mysterious features of black holes with large spins. For example, it provides a rationale for the absence of Hawking emission in an extremal black hole: on the analog side of an extremal saturon, the absence of emission is due to the topological stability of the extremal vortex [1]. Needless to say, the existence of vorticity in black holes is expected to have potentially observable consequences. In particular, vorticity could localize a magnetosphere around a black hole without the need for a specific accretion disk required in the standard scenario [10]. Correspondingly, powerful jets could be emitted [11], providing a smoking gun for the vorticity. In this letter, we study another important consequence of vorticity in black holes. Due to the macroscopic difference in the substructure of a black hole with or without a vortex, it is natural to expect a discontinuity in the gravitational radiation emitted by black hole mergers. In order to verify this idea, we study how the formation of vorticity in the final state affects the collision dynamics of saturons. As we will show, the presence of vorticity indeed leads to macroscopic deviations in the emitted radiation. In particular, this is due to a very high sensitivity of the energy output with respect to vortex formation. Motivated by the universality of saturated objects, we propose that similar dynamics could take place in certain black hole mergers, strongly affecting the emitted gravitational radiation. Specifically, we expect this phenomenon to manifest itself in the form of delayed or suppressed gravitational radiation in case of vortex formation. In our analysis, we use the model of saturons in the form of vacuum bubbles, originally introduced in Ref. [4]. These objects spontaneously break a global SU(\(N\)) symmetry and are stabilized by the corresponding Goldstone charge [8]. In this sense, they represent a form of \(Q\)-balls [12; 13]. Correspondingly, our results are independently motivated by understanding the cosmological and astrophysical consequences of \(Q\)-balls and other types of saturated solitons. We first proceed with the introduction of technical details of the model and present our findings regarding the collision process as well as vortex formation. Afterwards, we elaborate on the consequences of our results for black hole mergers. A summary of the present work, as well as videos of the simulated dynamics, can be found at this URL. Throughout this work we use units in which \(c=\hbar=1\). A Model for Saturons.In our analysis, we use a specific model of saturons within a renormalizable field theory, as introduced in Ref. [4]. Stable saturon constructions without and with spin are given in Refs. [8] and [1], respectively. The theory consists of a scalar field \(\Phi\) in the adjoint representation of SU(\(N\)) with the Lagrangian \[\mathcal{L}[\Phi]=\frac{1}{2}\operatorname{Tr}\left(\partial_{\mu}\Phi \right)(\partial^{\mu}\Phi)-V(\Phi)\,. \tag{2}\] The potential \(V(\Phi)\) is given by \[V(\Phi)=\frac{\alpha}{2}\operatorname{Tr}\left[f\Phi-\Phi^{2}+\frac{\mathds{1 }}{N}\operatorname{Tr}\Phi^{2}\right]^{2}\,, \tag{3}\] with \(\mathds{1}\) being the unit matrix, \(\alpha\) a dimensionless coupling constant and \(f\) the scale. 
Notice that the validity of QFT description in terms of \(\Phi\), imposes the constraint [4] \[\alpha\lesssim 1/N\,. \tag{4}\] Inspired by the analogy with black holes we shall work at large \(N\) keeping in mind the double-scaling limit \[N\to\infty\,,\quad\alpha\to 0\,,\quad\alpha\,N\to\mathcal{O}(1)\,, \tag{5}\] where the last expression determines the strength of the collective coupling. The model (2) has multiple degenerate vacua satisfying the condition \[f\Phi^{b}_{\,\,\,a}-\left(\Phi^{2}\right)^{b}_{\,\,\,a}+\,\delta^{b}_{\,\,\,a }\,\frac{1}{N}\operatorname{Tr}\Phi^{2}=0\,. \tag{6}\] The different constant solutions to Eq. (6) realize distinct patterns of symmetry breaking. We are interested in vacuum-bubble configurations that interpolate between the SU(\(N\)) symmetric vacuum in the exterior and SU(\(N-1\))\(\times\)U(1) vacua in the interior of the bubble. In the former vacuum, realized asymptotically, the theory is in the gapped phase, with mass-squared \(m^{2}=\alpha\,f^{2}\). In the latter vacuum, localized in the interior of the bubble, the symmetry is spontaneously broken. Therefore, \(2(N-1)\) Goldstone species exist in that region. A wall of tension \(m^{3}/6\alpha\) and of thickness \(1/m\) separates these two regions. We build such configurations via the ansatz [8] \[\Phi=U^{\dagger}\phi\,U\,, \tag{7}\] in which \[\phi=\frac{\rho(x)}{N}\operatorname{diag}\bigl{[}(N-1),-1,\ldots,-1\bigr{]}\,, \tag{8}\] and \[U=\exp\Bigl{[}i\,\theta(x)\,\hat{T}/\sqrt{2}\,\Bigr{]}\,. \tag{9}\] Above, \(\hat{T}\) corresponds to generators broken by the ansatz (8), and \(\theta(x)\) to the Goldstone modes. This leads to the effective leading-order (large-\(N\) and up to second order in \(\theta\)) Lagrangian \[\mathcal{L}=\frac{1}{2}\left[\partial_{\mu}\rho\,\partial^{\mu}\rho+\frac{1}{ 2}\rho^{2}\,\partial_{\mu}\theta\,\partial^{\mu}\theta-\alpha\,\rho^{2}\,( \rho-f)^{2}\right]. \tag{10}\] Here and below, the \(N\)-dependent factors are absorbed into the respective redefinition of the parameters. For simplicity, we consider the case in which a single broken generator is macroscopically occupied. In this effective theory, we are interested in two solutions. The first solution corresponds to the choice \[\rho(x)=\rho(r)\,,\quad\theta(x)=\omega\,t\,, \tag{11}\] describing a spherical bubble configuration (see Ref. [8]). The profile \(\rho(r)\) obeys the asymptotic conditions \(\rho(0)\simeq f\) and \(\rho(\infty)=0\). This configuration is stable thanks to the occupied Goldstone mode number -- or charge -- \(Q\). According to the Noether theorem, this charge can be estimated as \[\begin{split} Q&=i\;\text{Tr}\left[\int\text{d}^{3}x\; \left[\partial_{t}\Phi,\,\Phi\right]\hat{T}\right]\\ &=2\pi\,\omega\int\text{d}r\;r^{2}\,\rho^{2}(r)\simeq\frac{2\pi}{ 3\alpha}\,m^{2}\,\omega R^{3}\,,\end{split} \tag{12}\] where the second equality follows from ansatz (11). The mechanism of classical stability of the bubble shares similarities with \(Q\)-balls. The distinctive feature is that the bubble exhibits an exponentially large microstate degeneracy due to \(\sim N\) species of Goldstone modes localized within its interior (see Ref. [8] for more details). It was shown that in the region of parameter space \[\omega\sim m\sim 1/R\,,\quad N\sim Q\sim 1/\alpha\,, \tag{13}\] the bubble saturates the upper bound on entropy imposed by the validity of QFT [8], \[S\sim\frac{1}{\alpha}\sim(R\,f)^{2}\sim M^{2}/f^{2}\,, \tag{14}\] where \(M\) is the energy of the configuration. 
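As a consistency check, the scalings in (13)-(14) follow directly from the charge estimate (12). Using \(m^{2}=\alpha f^{2}\), \(\omega\sim m\sim 1/R\), and noting that both the Goldstone contribution \(\omega Q\) and the wall contribution (tension \(m^{3}/6\alpha\) times area \(\sim R^{2}\)) give an energy of order \(M\sim m/\alpha\) -- an order-of-magnitude estimate -- one finds \[Q\simeq\frac{2\pi}{3\alpha}\,m^{2}\omega R^{3}\sim\frac{(mR)^{3}}{\alpha}\sim\frac{1}{\alpha}\,,\qquad(Rf)^{2}=\frac{(mR)^{2}}{\alpha}\sim\frac{1}{\alpha}\,,\qquad\frac{M^{2}}{f^{2}}\sim\frac{(m/\alpha)^{2}}{m^{2}/\alpha}=\frac{1}{\alpha}\,,\] so that all three quantities in (14) indeed scale as \(S\sim 1/\alpha\).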
In other words, in this regime, the bubble represents a saturon with features analogous to a black hole. In particular, the microstate entropy of the saturon bubble mimics the Bekenstein-Hawking entropy [14; 15] of a black hole, with \(f\) replacing the role of the Planck mass. The second solution corresponds to the case in which the bubble is pierced by a global vortex line [16]. The construction is very similar to the one introduced for Abelian \(Q\)-balls considered in Ref. [17] in \(2+1\) dimensions and generalized to \(3+1\) in Ref. [18]. In our case, we consider the axially-symmetric ansatz \[\rho=\rho(r,\,\chi)\,,\quad\theta=\omega\,t+n\,\varphi\,, \tag{15}\] where \(\chi,\varphi\) denote the azimuthal and the polar angles, respectively. Asymptotically, we have: \(\rho(0,\,\chi)=0=\rho(\infty,\,\chi)=\rho(r,\,0)=\rho(r,\,\pi)=0\). As in the previous case of a non-spinning solution, the bubble is stabilized by the Goldstone modes of frequency \(\omega\). However, the same Goldstone modes now form a topological current: the Goldstone field exhibits an integer winding number \(n\). This gives the spin of the saturon, \[J=\int\text{d}^{3}x\;T_{0\varphi}=n\,Q=n\,S\,, \tag{16}\] where \(T_{\mu\nu}\) is the energy momentum tensor of the configuration. The second equality follows from the ansatz (15), and the third one from the saturation condition (13). It can be shown that in order to maintain the entropy saturation of the configuration, \(n\) cannot be much greater than one. From this, the maximal-spin condition for rotating \(Q\)-ball-type saturons (1) is recovered [1]. The explicit stationary solutions that follow from the Ansatze (11) and (15) can be found in Ref. [8], and, Ref. [19], respectively, for the three-dimensional case. Vorticity in Saturon Mergers.We now proceed with the study of the collision of spherical bubbles [c.f. Eq. (11)] around the saturation point (13). For practical numerical purposes, we set \(N=4\). We boost the bubbles towards each other with different velocities and impact parameters. We observe that a sufficient condition for vortex formation is related to the relative phase between the two bubbles. Namely, in order to create a vortex, the chosen \(\Delta\theta\) [see Eq. (11)] should be equal to \(\pi\). The reason is the following: The vortex ansatz for \(\theta(x)\) (15) at a fixed time gives opposite phase upon a rotation by \(\pi\) in \(\varphi\) for \(n=1\). Therefore, choosing the above offset for \(\Delta\theta\) ensures proximity to the vortex bound state. Moreover, exactly at \(\Delta\theta=\pi\) the merged bubble is subject to a "point" symmetry around its center, enforcing the presence of a vortex. As we shall see below, small deviations from the above condition only lead to temporal vortex formation. Once the relative phase is chosen, the initial conditions determine the kinematics. For a broad range of initial velocities, between \(0.2\) and \(0.8\), there exists a wide range of impact parameters comparable to the bubble radius, ensuring vortex formation. The impact parameter, however, needs significantly more tuning for higher velocity. Moreover, the relevant range of parameters is affected by the thickness of the bubble wall as it seems generally easier to attain vorticity in the thin-wall regime. We believe this is happening due to the fact that for larger cores it is easier to generate high angular momentum at milder velocities, effectively resulting in a larger dynamical window. 
If the velocities are too high and/or the impact parameter is too large, the bubbles simply scatter off from each other. For definiteness, we will focus on three cases that exemplify the relevance of our findings when extrapolated to black holes [20]. These cases correspond to different choices of \(\Delta\theta=0,\,0.95\,\pi,\,\pi\), while keeping all other parameters unchanged. We will refer to the first, second, and third cases as the _no-vortex_, _ejected-vortex_ and _vortex_ regimes, respectively. The reason behind these names will become apparent below. Snapshots of the dynamics for \(t=0,\,12\,m^{-1}\) and \(24\,m^{-1}\) can be observed in Fig. 1 for the vortex and no-vortex cases. A full video of the dynamics can be found at this URL. The resulting dynamics are summarised by the energy, charge, and spin of the configurations as a function of time in the three cases as shown in Fig. 2: * _no-vortex regime_ (green line): the bubbles merge, radiating a large fraction of their initial energy. * _ejected-vortex regime_ (red line): the initial conditions are close to the threshold of vortex formation. The merged soliton exhibits vorticity for a finite amount of time. However, eventually, it ejects the vortex and relaxes to a vortex-free configuration. * _vortex regime_ (blue line): a stable vortex, lasting on the timescales of the simulations, is formed within the resulting soliton. In the third case, in sharp contrast to the first one, we observe that little to no emission takes place throughout the merger. That is, the would-be emission energy is invested internally into vortex formation. This is also compatible with the previous reasoning that a saturated \(Q\)-ball with vortex has no available soft-quanta for relaxation [21]. The second case of vortex ejection is intermediate between the two other regimes. The initial conditions are in close vicinity to the threshold of vortex formation. Thereby a finite time interval exists in which a vortex persists in the final bubble. During this phase, no emission takes place, effectively mimicking the vortex case. Correspondingly, the energy evolution of this temporary scenario tracks that of the vortex case. Eventually, the instability of the configuration leads to a rapid emission of the vortex and to the relaxation of the \(Q\)-ball. A sizable additional energy is emitted in the ejected-vortex regime as compared to the no-vortex case due to the extra misalignment of modes subject to \(\Delta\theta\neq 0\). As can be seen from the upper panel of Fig. 2, the behavior of the energy is typical for the relaxation of unstable configurations due to the exponential growth of instability modes. The lost charge in the merger, displayed in the middle panel of Fig. 2, is correlated with the loss of energy. Finally, the lower panel of Fig. 2 shows the evolution of the configuration's angular momentum \(J\) as a function of time. In the case where the system is away from the vortex formation, a significant portion of the angular momentum is emitted at the merger, as can be seen from the green line. In contrast, in the vortex-formation case, the angular momentum of the configuration slightly increases by \(\mathcal{O}(1)\) per cent. This, combined with the energy and charge trajectory, is a clear indication of proximity to the classical stationary vortex bubble. In Figure 1: Snapshots of the charge density from the numerical three-dimensional evolution of the merger dynamics for the no-vortex (_left two columns_) and vortex case (_right two columns_). 
Odd columns show the full three-dimensional snapshot, while even columns display the two-dimensional plane clipped at the \(Q\)-ball centers. In these simulations \(\alpha=1\), \(\omega=0.4m\), \(v_{\rm initial}=0.25\) and the impact parameter \(b=12m^{-1}\). the scenario close to the threshold of vortex formation (red line), the merged saturon behaves analogously to the stable vortex case for a prolonged period of time. Eventually, due to instability, the vortex is ejected from the configuration resulting in an almost complete drop in the final angular momentum. This can be understood from the fact that the vortex was responsible for most of the spin. Consequences for Black Holes.So far we have investigated collisions of two saturon bubbles. Three important results emerge. The first one is the very fact of a merger. Secondly, it is possible to form a vortex, even if no vortices were initially present. Thirdly, the vortex formation significantly affects the emission of energy, leading to suppression by several hundred percent compared to the non-vortex case. In addition, we observe some transient regimes. When close to the vortex-formation threshold, the merged bubble sustains vorticity during an extended period of time. Eventually, the vortex is ejected and a significant fraction of its mass is emitted in radiation. Two notable features arise in this case: 1) The total emitted radiation is larger when compared to the case of mergers far away from the vortex-formation threshold and 2) the resulting bubble carries close-to-zero angular momentum. The reason for 2) is that the vortex is responsible for the spin of the configuration as can be understood from Eqs. (15) and (16). In other words, vorticity provides a jump in the angular momentum of the bubble, leading to a slowly-rotating final soliton. The relevant question is: what conclusions can be drawn for black hole mergers? We must take into account that the spectrum of classical black holes is different from their non-gravitational pendants considered here. Black holes possess a continuous spectrum of axially symmetric configurations of arbitrary angular momentum, while the saturon \(Q\)-balls in our model do not. Another question is the rigidity of the connection between the spin and vorticity in the two cases. For the studied \(Q\)-ball saturons, the spin implies vorticity. This need not necessarily be true for all black holes, especially the black holes of low spin. Rather, the correspondence works very well in the limit of maximal entropy (1). From this point of view, we expect that a high probability of vorticity exists for black holes of high spin. Of course, this by no means excludes the possibility of vortex formation even in black holes of a very low spin, as the total winding number can be zero. However, such configurations are likely to be above the ground state and thus unstable, albeit sufficiently long-lived for observational interest. From the above, we draw the following lesson. The universality of saturation suggests that it is important to look for the features of vorticity in black hole mergers, taking the features exhibited by saturons as qualitative guidelines. Treating the saturons in non-gravitational systems as theoretical laboratories for black holes, we are led to the expectation that vorticity could cause similar imprints in black hole mergers. 
Specifically, if the black hole binary system approaches the vortex-formation threshold at its merger point, we expect a potentially significant delay of the radiation signal. Moreover, the final burst can be expected to be highly energetic -- enhanced compared to the case with no vorticity at all -- as it carries away most of the angular momentum of the configuration i.e., \(J_{\rm radiation}\sim\mathcal{O}(S)\). Therefore, this would also result in a black hole with low spin. Figure 2: Energy (_upper panel_), charge (_middle panel_) and angular momentum (_lower panel_) as a function of time for the three cases outlined in the text. Black hole vortices can be expected for high spins [1]. These are not uncommon for stellar black holes but predominantly occur in the supermassive range [22] and not much in the observed black hole mergers of order \(\mathcal{O}(1\,\text{--}\,100)\,M_{\odot}\)[23]. However, a few black holes with a final spin around or less than 10% away from extremality have been observed (such as GW200308). Forthcoming observational runs of LIGO/Virgo/Karga as well as future observatories can be expected to find further mergers with high final spins. These will also probe part of the subsolar mass range in which only primordial black holes [24; 25] could have been formed (see Refs. [26; 27] for recent reviews). If formed during an epoch of matter domination (cf. Ref. [28]) or from quark confinement [29], their angular momenta can be near extremal, particularly in the subsolar mass range. If primordial black holes constitute a substantial fraction of the dark matter, the mentioned mechanisms could yield a large amount of black holes at or near the vortex-formation threshold. Either from stellar or primordial black holes, the observation of high spins can be expected in the near future. Some of these black holes might carry or generate vorticity, which, as we have argued in this work, would leave observable signatures in forthcoming gravitational-wave searches, unveiling the underlying quantum nature of black holes. _Acknowledgements._ This work was supported in part by the Humboldt Foundation under Humboldt Professorship Award, by the European Research Council Gravities Horizon Grant AO number: 850 173-6, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - 390814868, and Germany's Excellence Strategy under Excellence Cluster Origins. **Disclaimer:** Funded by the European Union. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
2308.11900
HashReID: Dynamic Network with Binary Codes for Efficient Person Re-identification
Biometric applications, such as person re-identification (ReID), are often deployed on energy-constrained devices. While recent ReID methods prioritize high retrieval performance, they often come with large computational costs and high search time, rendering them less practical in real-world settings. In this work, we propose an input-adaptive network with multiple exit blocks that can terminate computation early if the retrieval is straightforward or noisy, saving a lot of computation. To assess the complexity of the input, we introduce a temporal-based classifier driven by a new training strategy. Furthermore, we adopt a binary hash code generation approach instead of relying on continuous-valued features, which significantly improves the search process by a factor of 20. To ensure similarity preservation, we utilize a new ranking regularizer that bridges the gap between continuous and binary features. Extensive analysis of our proposed method is conducted on three datasets: Market1501, MSMT17 (Multi-Scene Multi-Time), and the BGC1 (BRIAR Government Collection). Using our approach, more than 70% of the samples with compact hash codes exit early on the Market1501 dataset, saving 80% of the network's computational cost and improving over other hash-based methods by 60%. These results demonstrate a significant improvement over dynamic networks and showcase comparable accuracy performance to conventional ReID methods. Code will be made available.
Kshitij Nikhal, Yujunrong Ma, Shuvra S. Bhattacharyya, Benjamin S. Riggan
2023-08-23T04:01:54Z
http://arxiv.org/abs/2308.11900v1
# HashReID: Dynamic Network with Binary Codes for Efficient Person Re-identification ###### Abstract Biometric applications, such as person re-identification (ReID), are often deployed on energy-constrained devices. While recent ReID methods prioritize high retrieval performance, they often come with large computational costs and high search time, rendering them less practical in real-world settings. In this work, we propose an input-adaptive network with multiple exit blocks that can terminate computation early if the retrieval is straightforward or noisy, saving a lot of computation. To assess the complexity of the input, we introduce a temporal-based classifier driven by a new training strategy. Furthermore, we adopt a binary hash code generation approach instead of relying on continuous-valued features, which significantly improves the search process by a factor of 20. To ensure similarity preservation, we utilize a new ranking regularizer that bridges the gap between continuous and binary features. Extensive analysis of our proposed method is conducted on three datasets: Market1501, MSMT17 (Multi-Scene Multi-Time), and the BGC1 (BRIAR Government Collection). Using our approach, more than 70% of the samples with compact hash codes exit early on the Market1501 dataset, saving 80% of the network's computational cost and improving over other hash-based methods by 60%. These results demonstrate a significant improvement over dynamic networks and showcase comparable accuracy performance to conventional ReID methods. Code will be made available. ## 1 Introduction Person re-identification (ReID), where probe (query) images are matched against gallery images, is an important application in real-world scenarios. For example, unmanned aerial vehicles (UAVs) such as drones are often equipped with identification capabilities for applications such as border control, intelligence, and security. This task poses a significant challenge due to the considerable variations in factors such as pose, clothing, resolutions, occlusions, camera-viewpoints and more. Additionally, the deployment of ReID models in realistic scenarios, such as real-time ReID, is hindered by significant latencies induced by model complexity and searching (sorting). To address this, researchers have designed efficient model architectures, such as incorporating depth-wise convolutions [12], compound scaling for balancing width and depth [24], point-wise group convolutions [33], and squeeze and expand layers [14]. However, the high-dimensional representations still render the matching process inefficient. Binary (hash) representations--the transformation of high-dimensional continuous-valued representations to discrete binary codes--have been recently used to accelerate the matching process [2, 17, 26, 35]. For instance, comparing two 2048-dimensional representations with the Hamming distance metric is 229x faster than comparing continuous-valued features [26]. However, the computational time and energy for network inference still remain a bottleneck. Another recent approach focuses on adaptive inference, where the network dynamically adjusts its architecture based on the complexity of the input [25, 28]. Figure 1: Overview of our contributions. Our method generates a hash/binary representation of the features to enable fast lookup. Moreover, the network adjusts its inference process according to the input, saving computational costs.
For example, ElasticNet [39, 9] generates intermediate outputs and jointly optimizes the loss of all layers, allowing termination of computation to meet changing computational demands. However, spatial information at the earlier layer is often not correctly utilized, leading to subpar initial performance. In this work, we explore the combination of both hash representations and dynamic inference (see Figure 1), achieving performance competitive with traditional neural networks. We argue that most of the discriminability is lost in the earlier layers due to the global pooling on the large spatial dimensions, and instead utilize part-based local pooling to boost performance. Due to the need for extensive fine-tuning of threshold-only methods used to determine when to stop computation, we opt for the utilization of a learnable exit policy to make predictions. Finally, we make the hash learning tractable and discriminative by employing a soft-sign operation driven by a ranking regularizer to preserve the similarity between the continuous-valued and binary discrete-valued features. Our work does not rely on the underlying network architecture and can be adapted to various encoders [12, 14, 24], providing an efficient way to dynamically terminate computation during inference. Our contributions can be summarized as follows: 1. We propose a novel hash-based network called HashReID, which leverages spatial information in earlier layers to generate robust representations at early exit points, while also generating a compact hash representation for efficient inference and lookup. 2. We introduce a new ranking regularizer that maintains the similarity between continuous and binary features. 3. We present a novel policy called Exit using Training Statistics (ETS) that uses a gated recurrent unit (GRU) to train and predict the difficulty of samples as easy, hard or impossible to recognize. We conduct extensive analysis on the Market1501 [36], MSMT17 [30], and the BGC1 [4] datasets, and demonstrate competitive performance in realistic situations, such as budgeted performance metrics. ## 2 Related Work **Supervised ReID:** Supervised ReID approaches have seen tremendous progress, especially with the exploration of the triplet loss for ReID [10]. To tackle misaligned person crops, HaCNN [15] proposes a local and global branches that jointly learns soft and hard attention to focus on discriminative regions. OSNet [38] fuses feature representations from multiple feature scales within and across channels, thereby generating a 'omni-scale' feature representation. BOT [18] summarizes training strategies most effective for person ReID. The challenge of deploying these methods lies in the requirement for efficient computation and inference, particularly in real-time applications. On the other hand, our approach enables early computation termination while simultaneously generating hash representations, thereby accelerating the matching process. **Adaptive Networks:** As input samples vary in difficulty for recognition or classification tasks, a recent paradigm has emerged to skip layers or intermediately exit the network prediction to save computation and energy costs. BranchyNet [25] adds two early-exit branches having a combination of \(3\times 3\) convolutional layers and fully connected layers at equidistant locations. ElasticNet [39, 9] inserts exit pathways following each residual block, resulting in a total of 17 exits for the ResNet50 [8] model. 
MSDNet [13] uses dense multi-scale features to learn intermediate classifiers and inputs are exited once a confidence threshold is reached. RANet [32] improves upon MSDNet by conditioning on the resolution of the input sample by utilizing sub-networks with different input resolutions. In DareNet [29], a multi-resolution approach is applied on the ReID task by inserting early exit blocks after every stage of the ResNet50 network. In contrast to existing methods, our approach takes advantage of spatial information in the early layers of the network to achieve competitive performance at an early stage. Additionally, we introduce a novel exit policy and generate a hash representation of the feature vector, resulting in a significant acceleration of query lookup. **Hashing Networks:** For retrieval problems, hash representations have been explored because of their capability of efficient lookup and storage. Deep learning methods suffer from the ill-posed gradient problem when learning a binary code due to the discontinuity of the signum (sign) function at zero. To address this, HashNet [2] begins learning using a hyperbolic tangent (tanh) activation, and gradually modifies to approximate the sign activation. Kernel-Based Supervised Hashing (KSH) [16] learns a kernel to map data to binary codes and optimizes using a code inner product for similarity-preserving learning. Adversarial Binary Coding (ABC) [17] uses adversarial learning to optimize between binary and real-valued features using a Wasserstein loss [7]. In [26], a 2048-dimensional hashing code is learned using self-distillation across at different stages of the network. DeepSSH [35] uses attribute- and identity-level hash codes using a sigmoid cross-entropy loss as a relaxation of the sign function. In contrast, our approach employs the soft-sign activation, facilitating better convergence due to a gradual gradient slope and tighter bounds enforced on the gradient values (similar to label smoothing). ## 3 Methodology ### Preliminaries The whole-body (person) images and corresponding identity labels from the training set are denoted as \(X_{train}=\{x_{1}^{t},x_{2}^{t},\dots,x_{n}^{t}\}\) and \(Y_{train}=\{y_{1}^{t},y_{2}^{t},\dots,y_{n}^{t}\}\), respectively, where \(n\) is the total number of images in the set. The aim is to learn a discriminative hash code function \(\phi(x;\theta)\) that can match images from disjoint sets \(X_{query}\) and \(X_{gallery}\), where \(\theta\) denotes trainable parameters of the network. To compare hash (or binary) representations, we use the Hamming distance metric. The representations are directly optimized (such as in the triplet loss) as well as fed to a classifier to optimize the logits of the classifier. Figure 2 captures the overall architecture of our proposed network. ### Hash-based Dynamic Network To generate a discriminative binary code representation of the person image, we utilize the ResNet50 [8] architecture as the encoder to fairly compare with other methods. However, the encoder can be easily swapped to save further computation. To be able to halt computation at different computational budgets, we place \(n_{h}\) hash exit blocks across the network, roughly equally spaced according to computational cost. In our current implementation, we have set \(n_{h}\) to 4, but it can be easily modified to meet the requirements of the backbone network or application. The network comprises four stages producing a 256-, 512-, 1024-, and 2048-dimensional feature, respectively. 
Given an image \(x_{i}^{t}\) from the training set, the network produces four representations at different stages, denoted as: \[\begin{split}s_{1}&=\phi^{stage1}(x_{i}^{t}),\hskip 28.452756pts_{2}=\phi^{stage2}(s_{1}),\\ s_{3}&=\phi^{stage3}(s_{2}),\hskip 28.452756pts_{4}=\phi^{stage4}(s_{3}).\end{split} \tag{1}\] The initial layer (\(s_{1}\) in Eq. 1) contains more local and fine-level details, and the spatial dimensions retain most of the discriminative information. For example, at \(s_{1}\), the input image with size \(256\times 128\times 3\) (H \(\times\) W \(\times\) C) is mapped to a \(64\times 32\times 256\) (H \(\times\) W \(\times\) C) feature representation. Global pooling on the spatial dimensions loses too much information, resulting in poor performance. Therefore, we utilize a part-based local pooling operation that retains spatial information. Specifically, we identify that person images can be categorized into four parts (head, upper torso, lower torso, and feet). We split the feature map into four spatial parts in the height dimension, producing four \(16\times 32\times 256\)-dimensional tensors. These are then average pooled together to generate a \(1\times 1\times 256\) feature. Mathematically, it is: \[\hat{s}_{1}=concat(Avg(s_{1}\{i:i+16,32,256\})), \tag{2}\] where \(i\in\{0,16,32,48\}\) and \(\hat{s}_{1}\) denotes the part-based pooling applied to \(s_{1}\). To generate the hash code, we attach a novel Hash-Exit (HE) block to each of the intermediate representations (\(s_{2}\), \(s_{3}\), \(s_{4}\) from Eq. 1, and \(\hat{s}_{1}\) from Eq. 2). The first two blocks, shown in Eq. 3, serve the dual purpose of bridging the gap between fine- and coarse-level features in the earlier layers and generating a concise hash code. Specifically, the block consists of a fully connected layer (\(FC\)) with a batch normalization layer (\(BN\)) and a hyperbolic tangent (\(tanh\)) function, followed by another \(FC\) layer to introduce non-linearity. The hash code is generated by first centering the features around 0 using a batch normalization (\(BN\)) layer and then finally using a \(sign\) function. Formally, it is: \[\begin{split} HE_{1,2}=& FC_{512}-BN_{512}-tanh- \\ &\underbrace{FC_{256}-BN_{256}-sign}_{hash},\end{split} \tag{3}\] where the subscript for \(FC\) and \(BN\) denotes the output dimension, and the \(sign\) function returns 1 if the output value is greater than 0 and returns -1 otherwise. The third and fourth (final) stage feature representations are generic enough to not need a fine-to-coarse transformation, and hence these representations are directly fine-tuned without using a non-linear transformation (shown in Eq. 4): \[\begin{split}HE_{3}=\underbrace{BN_{1024}-sign}_{hash},\qquad HE_{4}=\underbrace{BN_{2048}-sign}_{hash}.\end{split} \tag{4}\] We also attach a classifier at the end of each output from the HE blocks for cross-entropy minimization. However, the classifier is discarded after training. Figure 2: Our proposed method uses intermediate feature representations, driven by the triplet loss as specified in Eq. 5, as input to the hash exit blocks, which are optimized using the classifier loss. The stage-1 part-based pooled features (\(\hat{s}_{1}\)) are used by the ETS policy to determine the layer at which to exit. The easy/skip predictions are exited at the initial layer, while the hard prediction samples are exited using the query-gallery margin heuristic. Sign* denotes the soft-sign activation used during training, and the sign activation during the inference phase.
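A minimal PyTorch sketch of the stage-1 pathway described above (part-based pooling of Eq. 2 followed by the \(HE_{1}\) block of Eq. 3) is given below. The layer widths (512 and 256) and the use of a smooth sign surrogate during training follow the text; the class name, the exact soft-sign surrogate, and the choice to feed the concatenation of the four pooled part vectors into the first \(FC\) layer are illustrative assumptions, since the text also describes the pooled output as a single \(1\times 1\times 256\) feature.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartPooledHashExit(nn.Module):
    """Sketch of part-based pooling (Eq. 2) + hash-exit block HE_1 (Eq. 3)."""

    def __init__(self, in_channels=256, n_parts=4, code_dim=256):
        super().__init__()
        self.n_parts = n_parts
        # Assumption: the four pooled part vectors are concatenated before FC_512.
        self.fc1 = nn.Linear(in_channels * n_parts, 512)
        self.bn1 = nn.BatchNorm1d(512)
        self.fc2 = nn.Linear(512, code_dim)
        self.bn2 = nn.BatchNorm1d(code_dim)

    def forward(self, s1, binarize=False):
        # s1: (B, 256, 64, 32) stage-1 feature map (channels-first, as in PyTorch)
        parts = torch.chunk(s1, self.n_parts, dim=2)       # 4 x (B, 256, 16, 32)
        pooled = [p.mean(dim=(2, 3)) for p in parts]       # 4 x (B, 256) part features
        feat = torch.cat(pooled, dim=1)                    # (B, 1024)
        h = torch.tanh(self.bn1(self.fc1(feat)))           # FC_512 - BN_512 - tanh
        h = self.bn2(self.fc2(h))                          # FC_256 - BN_256
        if binarize:                                       # hard sign at inference
            return torch.where(h > 0, torch.ones_like(h), -torch.ones_like(h))
        return F.softsign(h)                               # smooth surrogate during training

# usage sketch
s1 = torch.randn(8, 256, 64, 32)                 # a batch of stage-1 feature maps
codes = PartPooledHashExit()(s1, binarize=True)  # (8, 256) codes in {-1, +1}
```

\(HE_{3}\) and \(HE_{4}\) (Eq. 4) drop the FC/tanh stack and keep only the BN-sign pair on the 1024- and 2048-dimensional stage outputs.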
### 3.3 Multi-Exit Optimization with Soft Sign Given a training set \(X_{train}\), we first employ the triplet loss with hard mining, which minimizes the distance between the most dissimilar representations of the same identity in the batch and maximizes the distance between the most similar representations of distinct identities. For each iteration, \(P\) distinct identities with \(K\) images per identity are sampled in a batch. Next, for each sample in the batch (anchor), the hardest (furthest) positive and hardest (closest) negative are selected to compute the loss. Mathematically, it can be denoted as: \[\begin{split} L_{T}&=\sum_{P}\sum_{K}\max(\overbrace{d( \phi(x^{a}),\phi(x^{p}))}^{\text{hardest positive}}\\ &-\underbrace{d(\phi(x^{a}),\phi(x^{n}))}_{\text{hardest negative}}+margin,0)\end{split} \tag{5}\] where \(x^{a}\), \(x^{p}\), and \(x^{n}\) denote the anchor, positive, and negative samples, respectively, and the margin is the minimum gap, set to 0.2 as in [10]. To bridge the gap between hash codes and the embedding representation, we employ the negative log likelihood loss on the classifier logits: \[L_{C}=-\sum_{i=0}^{N}y_{i}\log(\hat{y}_{i}), \tag{6}\] where \(y_{i}\) is the true label of the identity and \(\hat{y}_{i}\) is the softmax probability of the class. This ensures that the original representation is discriminative enough, whereas the classification probability of the hash codes aligns with the features. Lastly, to minimize the distance and keep it differentiable [16], we minimize the inner product: \[L_{R}=\mathbb{E}(feat_{ori}\cdot feat_{ori}^{T}-feat_{hash}\cdot feat_{hash}^{T})^{2}, \tag{7}\] between the distance matrix of the original continuous features (\(feat_{ori}\)) and the hash features (\(feat_{hash}\)) for the top-5 ranks in the batch. This ensures we are able to optimize the distance between continuous and hash features without considering the separability of all samples, which might hinder the learning. The final loss is denoted as: \[\begin{split} L_{final}=\lambda_{1}L_{T}(feat_{ori})+\lambda_{2}L_{C}(feat_{hash})\\ +\lambda_{3}L_{R}(feat_{ori},feat_{hash})\end{split} \tag{8}\] where \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are used to balance the losses. The empirically determined values are specified in Section 4.2. Note that the Hamming distance and the sign activation function are not differentiable, and hence a smooth sign function and the inner-product distance [16] are used to make the objective convex and tractable. ### 3.4 Exit using Training Statistics (ETS) To support adaptive inference, we need to determine when samples can exit the network--that is, the most effective HE block in terms of both efficiency and discriminability. Previous work uses classifier confidence, but in a retrieval problem we have an unknown number of classes, making this infeasible. Heuristic approaches, such as using a similarity metric or the margin between the top matches, have been explored [32], but these need hand-tuning of the distance thresholds and are not generalizable. In this work, we propose to predict which samples are easy by using a temporal network that predicts whether or not to exit early. The temporal network accepts the \(\hat{s}_{1}\) representation of the query and the top-4 matches as input, and consists of a gated recurrent unit (GRU) with two hidden stages, followed by a ReLU activation and a three-output classifier with outputs \(easy\), \(skip\), and \(hard\). However, the challenge is to train the network to classify such samples without fine-tuning on the test set.
To address this, we utilize our training phase to collect statistics of the number of flips in the top-1 retrieval for each sample. Specifically, assuming that we train our model for 100 epochs, we collect the top-1 decisions of the training samples every 10 epochs. An example for an easy/hard sample is denoted as \[T_{easy}=[\,\times,\;\checkmark,\;\checkmark,\;\ldots,\;\checkmark\,],\qquad T_{hard}=[\,\times,\;\checkmark,\;\times,\;\ldots,\;\checkmark\,],\] where \(\checkmark\) and \(\times\) denote a correct and an incorrect top-1 retrieval at a checkpoint; easy samples quickly settle to a consistently correct decision, whereas hard samples keep flipping between correct and incorrect decisions across epochs. ### Bit Length In Table 3, we present the performance trend for various hash-bit lengths with the _OURS-HE_ model. The final column indicates the time required in \(10^{-6}\) seconds for computing the Hamming distance between hash features compared to the Euclidean distance for continuous-valued features. For a more detailed comparison, we refer readers to [26]. Even with a relatively short length of 128 bits, the model achieves 84.26% rank-1 accuracy. However, the correct-match separability is low, as observed from the mAP score. As the length increases, there is a significant improvement in the mAP scores, while the rank-1 accuracy improves gradually. The binary comparison is significantly faster (\(>\)90x), demonstrating the benefit of hash features. As observed in [26], the query search time (in seconds) on the Market1501 dataset is \(2.2\) for the 2048-dimensional continuous-valued representation and \(2.8\times 10^{-1}\) for the 2048-dimensional hash-valued representation. In our approach, we employ a 256-dimensional feature for the first two exit blocks, and 1024- and 2048-dimensional hash features for the last two exit blocks, respectively. As most (\(>70\%\)) samples exit early at stage 1 with 256-dimensional features, 80% of the network's computational cost is saved, while the total query search time is reduced to \(1.1\times 10^{-1}\) seconds. This leads to an improvement of 60% over using only 2048-dimensional hash codes.
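The search-time gap in the last column of Table 3 comes from replacing floating-point Euclidean comparisons with bitwise operations on packed codes. The sketch below illustrates this kind of Hamming-distance lookup in NumPy; the gallery size, code length, and random codes are placeholders, and the actual timings in Table 3 are the paper's own measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
code_dim, n_gallery = 256, 20_000

# {-1, +1} codes, as produced by the hash-exit blocks, packed into bytes for storage
gallery = rng.choice([-1, 1], size=(n_gallery, code_dim)).astype(np.int8)
query = rng.choice([-1, 1], size=code_dim).astype(np.int8)

gallery_bits = np.packbits(gallery > 0, axis=1)   # (n_gallery, code_dim // 8) uint8
query_bits = np.packbits(query > 0)

def hamming_rank(query_bits, gallery_bits):
    """Rank gallery codes by Hamming distance using XOR + popcount."""
    xor = np.bitwise_xor(gallery_bits, query_bits)      # differing bits, byte by byte
    dist = np.unpackbits(xor, axis=1).sum(axis=1)       # popcount per gallery code
    return np.argsort(dist), dist

order, dist = hamming_rank(query_bits, gallery_bits)
print(order[:5], dist[order[:5]])   # indices and Hamming distances of the top-5 matches
```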
### Budgeted Inference In the budgeted inference setting shown in Figure 4, the model operates within a predefined computational budget, represented by the x-axis indicating the number of floating-point operations (FLOPS), to classify all query \begin{table} \begin{tabular}{l l l l l|l l l} \hline \hline \multicolumn{6}{c|}{**Hash-based Methods**} & \multicolumn{6}{c}{**SOTA ReID**} \\ \hline Method & Type/Length & R-1 & mAP & Method & Type/Length & R-1 (S4) & mAP \\ \hline HashNet [2] & B/512 & 29.20 & 19.10 & PNGAN [21] & C/2048 & 89.40 & 72.60 \\ DeepSSH [35] & B/512 & 46.50 & 24.10 & SVDNet [22] & C/2048 & 82.30 & 61.10 \\ ABC [17] & B/2048 & 81.40 & 64.70 & OSNet [38] & C/2048 & 94.20 & 82.60 \\ DCH [1] & B/512 & 40.70 & 20.20 & BOT [18] & C/2048 & 94.50 & 85.90 \\ CF [26] & B/2048 & 93.70 & 84.10 & MLFN [3] & C/2048 & 90.10 & 74.30 \\ PDH [40] & B/512 & 44.60 & 24.30 & HaCNN [15] & C/512 & 90.90 & 75.60 \\ DSRH [34] & B/512 & 27.10 & 17.70 & TriNet [10] & C/2048 & 84.90 & 69.10 \\ OURS-HE & B/2048 & **94.18** & **84.85** & OURS-HE & B/2048 & **94.18** & **84.85** \\ \hline \hline \multicolumn{6}{c}{**Dynamic Networks**} \\ \hline Method & Type & Length & R-1 (S1) & R-1 (S2) & R-1 (S3) & R-1 (S4) & mAP \\ \hline BranchyNet [25] & C & 14880/13440/2048 & 38.81 & 58.05 & - & 80.05 & 62.76 \\ MSDNet [13] & C & 384/384/352/204 & 58.79 & 61.67 & 64.01 & 63.51 & 38.07 \\ RANet [32] & C & 576/1088/641/897 & 58.55 & 59.47 & 65.26 & 65.05 & 40.19 \\ DafE(R) [29] & C & 128/128/128/128 & 62.86 & 74.20 & 82.30 & 83.91 & 65.40 \\ DafE+RE(R) [29] & C & 128/128/128/128 & 62.47 & 78.38 & 87.05 & 87.77 & 74.34 \\ OURS+HE & B & 256/256/1024/2048 & **71.29** & **79.93** & **92.10** & **92.10** & **82.08** \\ \hline \hline \end{tabular} \end{table} Table 1: Market1501 performance. **B** denotes binary-valued representation whereas C denotes continuous-valued representation. \begin{table} \begin{tabular}{l l l l l|l l l} \hline \hline \multicolumn{6}{c|}{**Hash-based Methods**} & \multicolumn{6}{c}{**SOTA ReID**} \\ \hline Method & Type/Length & R-1 & mAP & Method & Type/Length & R-1 (S4) & mAP \\ \hline HashNet [2] & B/512 & 23.55 & 10.65 & FCB [23] & C/2048 & 68.20 & 40.40 \\ DTSH [27] & B/512 & 47.37 & 25.61 & GLAD [31] & C/2048 & 61.40 & 34.00 \\ ABC [17] & B/2048 & - & - & OSNet [38] & C/2048 & 79.10 & 55.10 \\ QSMI [19] & B/512 & 16.21 & 9.88 & IANet [11] & C/2048 & 75.50 & 46.80 \\ CfF [26] & B/2048 & 75.95 & 51.36 & MLFN [3] & C/2048 & 66.40 & 37.20 \\ PDH [40] & B/512 & 37.13 & 16.90 & HaCNN [15] & C/512 & 64.70 & 37.20 \\ DSRH [34] & B/512 & 29.91 & 14.75 & DGNet [37] & C/2048 & 77.20 & 52.30 \\ OURS-HE & B/2048 & **76.81** & **51.41** & OURS-HE & B/2048 & **76.81** & **51.41** \\ \hline \hline \multicolumn{6}{c}{**Dynamic Networks**} \\ \hline Method & Type & Length & R-1 (S1) & R-1 (S2) & R-1 (S3) & R-1 (S4) & mAP \\ \hline BranchyNet [25] & C & 14880/13440/2048 & 9.17 & 21.27 & - & 49.93 & 27.51 \\ MSDNet [13] & C & 384/384/352/304 & 15.56 & 20.15 & 20.98 & 21.94 & 8.53 \\ RANet [32] & C & 576/1088/641/897 & 13.47 & 14.90 & 19.99 & 20.44 & 8.04 \\ DaRE(R) [29] & C & 128/128/128/128 & 16.81 & 19.56 & 51.76 & 52.11 & 30.01 \\ DafE+RE(R) [29] & C & 128/128/128/128 & 15.49 & 19.92 & 53.18 & 54.77 & 30.31 \\ OURS+HE & B & 256/256/1024/2048 & **28.91** & **47.74** & **71.31** & **71.52** & **46.66** \\ \hline \hline \end{tabular} \end{table} Table 2: MSMT17 Performance. **B** denotes binary-valued representation whereas C denotes continuous-valued representation. samples. 
In this, we employ early-exiting of _easy/skip_ samples while propagating _hard_ examples. We compare with four SOTA dynamic networks: MSDNet [13], RANet [32], BranchyNet [25], and DaRE [29]. Additionally, we compare with SOTA ReID methods: TriNet [10] and MLFN [3]. RANet and MSDNet exhibit low initial performance, with only a gradual increase in performance as the budget increases. BranchyNet demonstrates a steep increase, indicating that earlier stages have very low performance compared to later stages. Compared to DaRE, our method consistently performs better, with a substantial performance gap between the lowest and highest budget points. This signifies that throughout the network, the model is able to classify most samples accurately compared to other methods. To ensure fairness in comparison, all methods utilize the same exit policy. ### Earliest Exit Performance In Figure 5, we present the results of five distinct exit policy techniques for exiting at stage 1 (\(HE_{1}\)). _Random_ exiting involves randomly determining whether a sample exits, with a 50% probability. _Query Separability (QS)_ uses the Hamming distance between the query and the top-1 gallery sample to determine whether to exit. _Gallery Separability (GS)_ uses the distance between the top-2 matches in the gallery. If the separability exceeds a threshold value, the sample is exited. _Ours_ is the proposed GRU-based classifier discussed in Section 3.4, and _Ours+GS_ is the combination of the GRU and GS. The stacked bar represents the count of exited samples, where the green and red bars denote whether the sample had the correct or incorrect top-1 retrieval at \(HE_{1}\), respectively. Based on the figure, it is evident that the _Random_ approach is the least favorable choice, although it still demonstrates acceptable performance due to the model's strong performance at \(HE_{1}\). Both _QS_ and _GS_ appear to be conservative approaches, resulting in low numbers of correct exits. _Ours_ achieves a high number of accurate classifications for early-exiting, but also has the highest number of incorrect exits. The combination of the GRU classifier and the GS heuristic (_Ours+GS_) exhibits the best performance, achieving a high number of correct matches and a low number of incorrect matches. The maximum line refers to the highest number of correct exits achievable at \(HE_{1}\). All thresholds are determined using cross-validation. ### Qualitative Results Figure 6 illustrates the top match obtained at each stage, with different levels of difficulty of query samples. The first row is an easy sample, where the uniform with its distinct patterns facilitates clear separation from other identities, resulting in correct retrievals at every exit stage. The second row is a low-resolution query input, which benefits from additional computation to accurately retrieve the corresponding gallery match. The third and fourth rows represent hard query samples, where the similarity in clothing negatively impacts performance at earlier stages. However, with full computation and longer hash codes, the model is eventually able to correctly classify these challenging samples. Figure 7 shows results using the ETS policy. The _Easy Sample_ category stands out from other identities due to distinct patterns and colors, such as a checkered shirt. _Skip/Impossible samples_ typically consist of noisy or ambiguous images where multiple identities appear together. _Hard Samples_ refer to individuals wearing similar clothing and in similar views, requiring additional computation to generate reliable cross-view representations. Figure 4: Budgeted inference compared with various dynamic networks.
* denotes performance of non-dynamic networks. Figure 5: Number of correct (green bar) and incorrect exits (red bar) at stage 1. The maximum dotted line denotes the maximum number of samples that can be correctly exited at stage 1. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Length & Rank-1 & mAP & Time (B:C) in \(10^{-6}\) s. \\ \hline 128 & 84.26 & 67.22 & 2.8:260 (92x) \\ 256 & 89.99 & 75.54 & 3.3:500 (151x) \\ 512 & 91.12 & 79.06 & 4.4:1000 (227x) \\ 1024 & 91.80 & 80.63 & 7.1:2000 (281x) \\ 2048 & 94.18 & 84.85 & 17:3900 (229x) \\ \hline \end{tabular} \end{table} Table 3: Longer codes increase comparison time but improve separability, as seen in the mAP scores. Finally, incorrect classifications from the classifier are presented, where identities wearing similar clothing patterns and accessories (e.g., backpack) in similar poses result in erroneous matches. To reduce these misclassifications, a combination of a heuristic and our GRU classifier is employed, leading to reduced errors as demonstrated in Figure 5. ## 5 Conclusion In conclusion, this work introduces a novel hash-based dynamic network capable of adapting its computation based on the difficulty of input samples. We leverage a GRU-based ETS policy to assess the complexity, considering both the query and top-gallery samples to make informed decisions regarding early exits. To ensure discriminability between hash and continuous-valued features, we incorporate a ranking regularization technique that optimizes feature similarity. The adoption of hash representation results in a significant improvement in query-gallery matching compared to continuous-valued representations, while saving computational costs because of the dynamic capability of the network. Our work establishes a robust baseline for an input-adaptive hash network for biometric applications. ## 6 Acknowledgement This research is based upon work supported in part by DEVCOM Army Research Laboratory (ARL) under contract W911NF-21-2-0076, DEVCOM ARL and the National Strategic Research Institute (NSRI) under contract FA4600-20-D-0003, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via 2022-21102100002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DEVCOM ARL, NSRI, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Figure 6: Stage-wise retrieval performance on the Market1501 and BGC1 datasets. Green border denotes correct retrieval whereas red border denotes incorrect retrieval. All subjects consent to image publication. One image (row 4) is pixelated for privacy. Figure 7: Qualitative performance using the ETS policy. Green border denotes correct retrieval whereas red border denotes incorrect retrieval for a given query image (**black** border).
2306.00379
Large Scale Generative Multimodal Attribute Extraction for E-commerce Attributes
E-commerce websites (e.g. Amazon) have a plethora of structured and unstructured information (text and images) present on the product pages. Sellers often either don't label or mislabel values of the attributes (e.g. color, size etc.) for their products. Automatically identifying these attribute values from an eCommerce product page that contains both text and images is a challenging task, especially when the attribute value is not explicitly mentioned in the catalog. In this paper, we present a scalable solution for this problem where we pose attribute extraction problem as a question-answering task, which we solve using \textbf{MXT}, consisting of three key components: (i) \textbf{M}AG (Multimodal Adaptation Gate), (ii) \textbf{X}ception network, and (iii) \textbf{T}5 encoder-decoder. Our system consists of a generative model that \emph{generates} attribute-values for a given product by using both textual and visual characteristics (e.g. images) of the product. We show that our system is capable of handling zero-shot attribute prediction (when attribute value is not seen in training data) and value-absent prediction (when attribute value is not mentioned in the text) which are missing in traditional classification-based and NER-based models respectively. We have trained our models using distant supervision, removing dependency on human labeling, thus making them practical for real-world applications. With this framework, we are able to train a single model for 1000s of (product-type, attribute) pairs, thus reducing the overhead of training and maintaining separate models. Extensive experiments on two real world datasets show that our framework improves the absolute recall@90P by 10.16\% and 6.9\% from the existing state of the art models. In a popular e-commerce store, we have deployed our models for 1000s of (product-type, attribute) pairs.
Anant Khandelwal, Happy Mittal, Shreyas Sunil Kulkarni, Deepak Gupta
2023-06-01T06:21:45Z
http://arxiv.org/abs/2306.00379v1
# Large Scale Generative Multimodal Attribute Extraction for E-commerce Attributes ###### Abstract E-commerce websites (e.g. Amazon) have a plethora of structured and unstructured information (text and images) present on the product pages. Sellers often either don't label or mislabel values of the attributes (e.g. color, size etc.) for their products. Automatically identifying these attribute values from an eCommerce product page that contains both text and images is a challenging task, especially when the attribute value is not explicitly mentioned in the catalog. In this paper, we present a scalable solution for this problem where we pose attribute extraction problem as a question-answering task, which we solve using **MXT**, consisting of three key components: (i) **M**AG (Multimodal Adaptation Gate), (ii) **X**ception network, and (iii) **T**5 encoder-decoder. Our system consists of a generative model that _generates_ attribute-values for a given product by using both textual and visual characteristics (e.g. images) of the product. We show that our system is capable of handling zero-shot attribute prediction (when attribute value is not seen in training data) and value-absent prediction (when attribute value is not mentioned in the text) which are missing in traditional classification-based and NER-based models respectively. We have trained our models using distant supervision, removing dependency on human labeling, thus making them practical for real-world applications. With this framework, we are able to train a single model for 1000s of (product-type, attribute) pairs, thus reducing the overhead of training and maintaining separate models. Extensive experiments on two real world datasets show that our framework improves the absolute recall@90P by 10.16% and 6.9% from the existing state of the art models. In a popular e-commerce store, we have deployed our models for 1000s of (product-type, attribute) pairs. ## 1 Introduction E-commerce websites (e.g. Amazon, Alibaba) have a very wide catalog of products. The seller-provided catalog of these products contains both textual information and product images. Apart from this unstructured information, sellers also provide structured information about the products such as color, material, size, etc. This information can be represented in terms of attribute-value pairs (see figure 1). In this paper, we will use the terms attribute and attribute-name interchangeably. The value of an attribute will be referred to as _attribute-value_. However, while listing the products, sellers rarely specify all attribute values or mistakenly fill in incorrect values. These attribute values may or may not be present in the unstructured textual product information. Extracting/inferring the missing attribute values from the unstructured textual product information (and images) can improve the catalog quality, thereby improving the customer experience (again, refer to figure 1 for an example of attribute extraction). Figure 1: Illustration of attribute extraction problem. **PT-attribute:** A PT-attribute is defined as a pair of (product-type, attribute), where product-type (or PT) is a broad category of products (e.g. "shoes", "dress", "laptops" etc.) and attribute is an attribute-name (e.g. "color", "size" etc.). Typically, attribute-extraction is done at the granularity of PT-attribute (e.g. "extract the value of _color_ attribute of _shoe_").
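To make the setting concrete, a catalog record and the extraction target can be pictured as in the following sketch; the class and field names are illustrative assumptions and are not taken from our production system.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ProductRecord:
    product_type: str                 # the PT, e.g. "dress"
    text: str                         # unstructured seller-provided title/description
    image_path: Optional[str]         # product image, if available
    attributes: Dict[str, str] = field(default_factory=dict)  # structured, often incomplete

# Attribute extraction for the PT-attribute ("dress", "color"):
# fill in record.attributes["color"] from record.text and the image.
record = ProductRecord(
    product_type="dress",
    text="Lightweight sleeveless skater dress with floral print",
    image_path="images/example.jpg",
)
```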
A good attribute extraction system has following desirable properties: (1) **Scalability:** A single model should handle multiple PT-attributes so that there is no need to train a separate model for every PT-attribute combination, (2) **Multi-modality:** Model should be able to extract attributes from multiple modalities like text, image, video etc., (3) **Zero-shot inference:** Model should be able to extract attribute values that were not seen in the training data, and (4) **Value-absent inference:** Model should extract attribute values that are not explicitly mentioned in the text on the product page (but can be inferred from image or some other reasoning). **Related Work:** Extensive research has been done to build attribute extraction models, which can be categorized as _extractive_, _predictive_, or _generative_. Extractive models pose this problem as a Named Entity Recognition (NER) problem (Zheng et al., 2018). Some of the recent work in this space include LATEX-numeric (Mehta et al., 2021), and MQMRC (Shrimal et al., 2022). However, these models don't do value-absent inference. Moreover, these are text based models and do not use product images. Predictive models are the classifier models that take text (and image) as input and predict the attribute values. CMA-CLIP (Liu et al., 2021) is a recent multi-modal predictive framework for predicting attribute values. However, these models can't do zero-shot inference as the prediction comes from the predefined classes only. Generative models pose this problem as an answer generation task given a question and context. Here, the question is the attribute name, and context is the product data (text and image), and the answer is the attribute value. For example, Roy et. al. (Roy et al., 2021) presented a generative framework to generate attribute values using product's text data. PAM (Lin et al., 2021) introduced a multi-modal generative framework, however their model requires (i) Training encoder and decoder from scratch, (ii) Manually modifying the vocabulary of outputs (attribute-values) for different product-types. In this paper, we present **MXT**, a multimodal generative framework to solve the attribute extraction problem, that consists of three key components: (i) **M**AG (Multimodal Adaptation Gate) (Rahman et al., 2020): a fusion framework to combine textual and visual embeddings, that enables generating image-aware textual embeddings, (ii) **X**ception network (Chollet, 2017): an image encoder that generates attribute-aware visual embeddings, and (iii) **T**5 encoder-decoder (Raffel et al., 2020). The models trained by our generative framework are scalable as a single model is trained on multiple PT-attributes, thus reducing the overhead of training and maintaining separate models. We remove the disadvantages of PAM model by (i) finetuning a strong pre-trained language model (T5 (Raffel et al., 2020)) and thus leveraging its text generation ability, (ii) providing product-type in the input itself so that output distribution is automatically conditioned on the PT. Moreover, our trained model satisfies all of the 4 desirable properties that were mentioned previously. Our system formulates the attribute extraction problem as a question-answering problem, where (a) question is the attribute name (e.g. "color"), (b) textual context comprises of a concatenation of product-type (e.g. 
"shirt"), and textual description of the product, (c) visual context comprises product image, and (d) answer is the attribute value for the attribute specified in the question. Our model architecture consists of (i) a T5 encoder to encode the question and textual context, (ii) encoding visual context into product specific embeddings through a pre-trained ResNet-152 model (He et al., 2016) and fusing them with T5's textual embeddings using a multimodal adaptation gate (MAG) (Rahman et al., 2020), (iii) encoding visual context into attribute (e.g. "sleeves", "collar" etc.) specific embeddings through Xception model (Chollet, 2017) and fusing them with previously fused embeddings through a dot product attention layer (Yu et al., 2021), and finally (iv) generating the attribute values through T5 decoder. The detailed architecture of our system is shown in figure 2. In section 2, we explain our proposed model MXT. In section 3, we compare our model's performance with NER-Based MQMRC (Shrimal et al., 2022) along with a popular multi-modal model CMA-CLIP (Liu et al., 2021) and show that on same precision, we outperform them (on recall) for a majority of the attributes. We also show an ablation study justifying the proposal of different components in MXT. Finally, we also show that our model is able to perform zero-shot and value-absent inference. Our trained models using MXT framework are being used to extract attributes for over 12000 PT-attributes in a popular e-commerce store, and have extracted more than 150MM attribute values. ## 2 MXT Framework Given a set of product-types (PTs) \(\mathcal{P}=\{p_{1},p_{2},\ldots,p_{m}\}\) and attribute-names \(\mathcal{A}=\{a_{1},a_{2},\ldots,a_{n}\}\), we define \(\text{MXT}_{\mathcal{P},\mathcal{A}}\) as a multi-PT, multi-attribute, and multi-modal generative model that is trained on PT-attributes from \((\mathcal{P},\mathcal{A})\), and can be used to generate attribute value for any product in the trained PT-attribute set. The overall architecture of our model is described in figure 2. ### Problem Formulation We formulate the problem of attribute extraction as the problem of answer generation given a question and a context. Here question is the attribute-name \(a\in\mathcal{A}\), and context consists of textual description, product type \(p\in\mathcal{P}\) and image of the product. All of these are used to extract attribute values. The answer generated from the model is the attribute value for \(a\). As shown in figure 2, our model architecture mainly consists of 3 components: (a) Image-aware Text encoder, (b) Attribute-aware Text-Image Fusion, and (c) Text decoder. Below, we describe each component in detail. ### Image-aware Text encoder We use T5 (Raffel et al., 2020), which is a transformer (Vaswani et al., 2017) based text only Seq2Seq pretrained language model. It includes a bidirectional encoder and a unidirectional (forward only) decoder. In this section, we give an overview of T5's encoder and details of its usage for our task. Our text input consists of (i) attribute-name (e.g. "color"), (ii) product-type (e.g. "dress"), and (iii) textual description of product. In our QnA format, the question consists of attribute-name, and context consists of concatenation of product-type and textual description of the product. We tokenize both question and context and create a single input sequence of tokens. 
This input sequence \(x\) is then fed to an embedding and positional encoding layer to create input features \(T_{emb}\in\mathbb{R}^{N\times d}\), where \(N\) is the sequence length and \(d\) is the feature dimension. These input text embeddings are then fused with the Multimodal Adaptation Gate (MAG) as described in Rahman et al. (Rahman et al., 2020) to generate image-aware text embeddings. Due to MAG, the internal representation of words (at any transformer layer) is shifted conditioned on the visual modality. This attachment essentially puts words into a different semantic space, which is conditioned on the visual inputs. For example, the meaning of the word "ripple" changes according to whether the visual input is a soap image or a paper image. With soap, the meaning is "free and clear", while with paper, the meaning is "wavy pattern", as shown in figure 3. This module shifts the meaning of "ripple" according to the visual modality. Since T5 is a pretrained model that can understand only text embeddings, the visual embeddings (\(V_{R}\in\mathbb{R}^{d}\)) must be fused with the text before feeding it to the T5 encoder, rather than feeding the visual embeddings along with the text. Specifically, in MAG, for each input token \(i\) of the sequence, we first learn a gating vector \(g_{i}\) using concatenated embeddings of \(T^{i}_{emb}\) and \(V_{R}\): \(g_{i}=RELU(W_{g}[T^{i}_{emb};V_{R}]+b_{g})\). This gating vector highlights the relevant information in the visual modality conditioned on the input textual vector. We then create an image displacement vector \(H_{i}\) by multiplying \(V_{R}\) with each token's gating vector \(g_{i}\): \(H_{i}=g_{i}\cdot(W_{H}V_{R})+b_{H}\). Finally, we shift the embedding \(T^{i}_{emb}\) by the weighted displacement vector \(H_{i}\) to get the multimodal vector \(\hat{T}^{i}_{emb}=T^{i}_{emb}+\alpha*H_{i}\). In this equation, \(\alpha=\min(\frac{\|T^{i}_{emb}\|_{2}}{\|H_{i}\|_{2}}*\beta,1)\), where \(\beta\) is a hyper-parameter whose value is taken as-is from the paper (Rahman et al., 2020). This is then passed through a layer normalization followed by a dropout layer to get the final fused embedding \(F_{MAG}\) from the MAG module, where \(F^{i}_{MAG}=\text{dropout}(\text{LN}(\hat{T}^{i}_{emb}))\). This fused output is then fed to the T5 encoder. The encoder consists of \(L\) encoder-layers. It takes \(F_{MAG}\) as input and gives \(T_{enc}\) as output. Equation 1 shows the encoding done by the \(k^{th}\) layer. Here SA is the multi-head self attention layer, Res is the residual connection, LN is the layer normalization, and FC is a fully connected layer. \[T^{k}_{enc}=\text{LN}(\text{Res}(\text{FC}(\text{LN}(\text{Res}(\text{SA}(T^{k-1}_{enc})))))) \tag{1}\] ### Attribute-aware Text-Image Fusion The Xception (Chollet, 2017) model performs depth-wise (or channel-wise) separable convolutions, i.e., it applies separate filters for different color channels. We propose another fusion layer based on the Xception network. The advantage of using this is that it can readily learn the visual features conditioned on the attribute type. For example, for the attribute "sleeve type" of a dress, it can identify the channel/color difference between the sleeves of the dress and the skin of the person, thus identifying whether the sleeve is half or full. We then fuse the text and image embeddings using multi-head cross attention. As shown in figure 2(b), a product image has several regions of interest for different attributes like "neck style" and "sleeve type".
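Before describing this attribute-aware fusion, the MAG shift introduced above can be summarized in the following PyTorch-style sketch; dimensions, default hyper-parameters, and module names are illustrative assumptions rather than our exact implementation.

```python
import torch
import torch.nn as nn

class MAG(nn.Module):
    """Multimodal Adaptation Gate: shift each token embedding by a gated image displacement."""
    def __init__(self, d_text=768, d_img=2048, beta=0.5, p_drop=0.1):
        super().__init__()
        self.W_g = nn.Linear(d_text + d_img, d_text)        # gating vector g_i
        self.W_H = nn.Linear(d_img, d_text, bias=False)     # image displacement weight W_H
        self.b_H = nn.Parameter(torch.zeros(d_text))        # displacement bias b_H
        self.beta = beta
        self.norm = nn.LayerNorm(d_text)
        self.drop = nn.Dropout(p_drop)

    def forward(self, T_emb, V_R, eps=1e-6):
        # T_emb: (batch, N, d_text) token embeddings; V_R: (batch, d_img) ResNet image embedding
        V = V_R.unsqueeze(1).expand(-1, T_emb.size(1), -1)  # broadcast the image to every token
        g = torch.relu(self.W_g(torch.cat([T_emb, V], dim=-1)))
        H = g * self.W_H(V) + self.b_H                      # displacement vector H_i
        alpha = torch.clamp(self.beta * T_emb.norm(dim=-1, keepdim=True)
                            / (H.norm(dim=-1, keepdim=True) + eps), max=1.0)
        return self.drop(self.norm(T_emb + alpha * H))      # F_MAG, same shape as T_emb

# Toy usage: fuse 20 token embeddings with one 2048-d image embedding per product
fused = MAG()(torch.randn(2, 20, 768), torch.randn(2, 2048))
print(fused.shape)  # torch.Size([2, 20, 768])
```

In the full model, this fused output \(F_{MAG}\) replaces the plain token embeddings at the input of the T5 encoder.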
These region-specific embeddings are learnt by separable convolutions in Xception and are then attended with the text embeddings to arrive at attribute-aware text embeddings. Now, given the text embedding \(T_{enc}\in\mathbb{R}^{N\times d}\) and the image embedding \(V_{X}\in\mathbb{R}^{1\times x}\) (from Xception), we create an attribute-aware fused embedding \(F_{A}\in\mathbb{R}^{N\times d}\) (having the same dimension as the text embedding). This fused embedding is created through a multi-head cross attention module that applies cross attention between textual and visual embeddings, as shown in figure 2. This fusion has the advantage that, for an attribute, different attention scores can be learned for each object of an image, allowing the model to attend to specific portions of the product image conditioned on the attribute name in the question. For example, for the product type "shirt" and attribute "sleeve-type", we may want to concentrate only on the portions of the image where sleeves are visible. ### Text Decoder We use T5's unidirectional decoder to output the attribute values. The input to the decoder is the fused embedding vector \(F_{A}=<F_{A}^{1},F_{A}^{2},\dots,F_{A}^{N}>\). The decoder iteratively attends to previously generated tokens \(y_{<j}\) (via self-attention) and \(F_{A}\) (via cross-attention), then predicts the probability of future text tokens \(P_{\theta}(y_{j}|y_{<j},x,I)=\text{Dec}(y_{<j},F_{A})\). For attribute generation, we fine-tune our model parameters \(\theta\) by minimizing the negative log-likelihood of the label text \(y\) tokens given input text \(x\) and image \(I\): \(L_{\theta}^{GEN}=-\sum_{j=1}^{|y|}\log P_{\theta}(y_{j}|y_{<j},x,I)\). ## 3 Experimental Setup & Results **30PT Dataset:** We picked 30 product types (PTs) consisting of a total of 38 unique attributes from a popular e-commerce store. For each product in the dataset, we have textual information and an image. The dataset has 569k and 84k products in train and validation data across 30 PTs. Our test data consists of products from two product types with a total of 73k products. Figure 3: Shift in text embeddings (e.g. ”ripple”) after applying MAG with visual embeddings Figure 2: Architecture of MXT. (a) Generates image-aware text embeddings by fusing image embeddings (obtained from ResNet-152) and text embeddings of the input text (concatenation of _attribute name_, _product type_, and textual description of the product), (b) Image-aware text embeddings are then attended with region specific visual embeddings obtained from separable convolution of Xception Network, which in turn passes only the attribute specific embeddings to the decoder (c) Fused embeddings are passed through T5 decoder to generate attribute value. We evaluated MXT against two state of the art methods on attribute extraction: (1) **CMA-CLIP:** A multi-task classifier that uses CLIP (Radford et al., 2021) for learning multi-modal embeddings of products, followed by two types of cross-modality attentions: (a) sequence-wise attention to capture the relation between individual text tokens and image features, and (b) modality-wise attention to capture the weightage of text and image features relative to each downstream task, and (2) **NER-MQMRC:** This framework (Shrimal et al., 2022) poses the Named Entity Recognition (NER) problem as a Multi Question Machine Reading Comprehension (MQMRC) task. This is the state of the art model for the text-based attribute extraction task.
In this model, given the text description of a product (_context_), they give attribute names as _multiple questions_ to their BERT based MRC architecture, which finds span of each attribute value _answer_ from the context. Left table in the figure 1 compares the recall@90P% of the three models. We show the performance on top-5, top-10 and top-15 attributes (by number of products in which they are present. We can see that MXT outperforms MQMRC and CMA-CLIP on both product types. **E-commerce5PT:** This is a benchmark dataset from NER-MQMRC paper (Shrimal et al., 2022). We take a subset of this dataset (removing numerical attributes) consisting of 22 product-attributes across 5 product types. This is a benchmark dataset for NER based models since all attribute values are present in the text in this dataset. The dataset has 273,345 and 4,259 products in train and test data respectively. We compare average F1 scores (averaged across attributes for each product type) of MXT model with NER-MQMRC on this dataset where our model outperforms NER-MQMRC on 16/22 attributes. Right table in the figure 1 shows the average F1-scores (across attributes in each product type) of MXT and NER-MQMRC models. ### Ablation Study We show three ablation studies on 30PT dataset that justify our choices in the MXT architecture. Left table in the figure 1 shows the results of these studies. **(a) Scalability:** We show that our proposed framework is highly scalable. For that, we compute Recall@90P% of the MXT model trained on individual PTs. The results show that (i) our model leverages cross-PT information during training, (ii) we don't need to train separate model for each PT, which makes model monitoring and refreshing easier in the production, **(b) Xception network:** We show that Xception network helps concentrating on certain attribute features. For this, we removed the Xception network from our architecture and trained and evaluated the model, **(c) MAG:** We replaced MAG with simple concatenation of text and image embeddings in MXT. We can see in the table that each of our ablation model under-performs the MXT model trained on 30PTs, thus justifying our design choices. ### Zero-shot Inference and Value-absent Inference Most existing methods for attribute extraction face two challenges: **(i) Zero-shot inference:** All the predictive models (classification-based models) can predict attribute values only from a predefined set of values that are seen in the training data. They are unable to do zero-shot inference i.e. 
they can't predict an attribute value if it is not seen in the training data, **(ii) Value-absent inference:** All NER \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**PT**} & \#**top** & **CMA-CLIP** & \multicolumn{4}{c}{**MXT**} \\ \cline{3-8} & **attributes** & **CLIP** & **Multi-PT** & **Single PT** & \begin{tabular}{c} **Without-** \\ **Xception** \\ \end{tabular} & \begin{tabular}{c} **Without-** \\ **MAG** \\ \end{tabular} \\ \hline \multirow{3}{*}{A} & K=5 & +6.16\% & **+22.33\%** & +19.58\% & +21.82\% & +20.93\% \\ \cline{2-8} & K=10 & +6.70\% & **+16.89\%** & +15.50\% & +15.60\% & +15.19\% \\ \cline{2-8} & K=15 & +1.81\% & **+13.23\%** & +11.64\% & +10.67\% & +10.45\% \\ \hline \multirow{3}{*}{B} & K=5 & +8.34\% & **+16.63\%** & +12.86\% & +13.94\% & +13.58\% \\ \cline{2-8} & K=10 & +18.46\% & **+24.98\%** & +22.46\% & +22.81\% & +22.55\% \\ \cline{1-1} \cline{2-8} & K=15 & +11.72\% & **+18.51\%** & +15.50\% & +16.28\% & +15.68\% \\ \hline \hline \end{tabular} \end{table} Table 1: Left: Improvement in Recall@90P% of CMA-CLIP and MXT (with different ablation studies) over NER-MQMRC on 30PT datasetE-commerce5PT dataset. Right: Improvement in F1-score of MXT over NER-MQMRC on E-commerce5PT dataset based models can extract values only which are mentioned in the text data i.e. if an attribute value is absent in the input text, they can't extract that value. Our generative model solves both of these challenges. For example, in the E-commerce5PT dataset, there are a total of 8289 product-attribute pairs in the test data, out of which 970 product-attribute pairs were not seen in the training data, from which our model correctly generated 124 product-attribute pairs. For example, given a product of product-type _"dress"_ with title _"Tahari ASL Women's Sleeveless Ruched Neck Dress with Hi Lo Skirt"_, our model generated the value _"Ruched Neck"_ for the attribute _"neck style"_. Here the value _"Ruched Neck"_ was absent from the training data. Similarly, for the _"dress"_ product shown in figure 1, our model generated the value _"mini"_ for the attribute _"item length"_ (by inferring it from the image) even when this value is not mentioned in the product text(thus solving the second challenge). ### Training & Inference Details We conducted training for each model over a span of 20 epochs, employing a batch size of 4. The training process was performed using distributed multi-GPU training across 8 V-100 Nvidia GPUs, each equipped with 16GB of memory. For text encoder and decoder, we finetune the pretrained t5-base1 checkpoint. We obtained ResNet-based image embeddings using a pretrained ResNet-152, specifically with one embedding assigned to each image. 2. During training, we employed the Adam optimizer with learning rate of \(5e^{-5}\) and warmup ratio of 0.1. We chose the checkpoint having best validation loss. For inference, we used greedy search to generate attribute values. Footnote 1: The t5-base checkpoint is available at [https://huggingface.co/transformers/model_doc/t5.html](https://huggingface.co/transformers/model_doc/t5.html) Footnote 2: [https://download.pytorch.org/models/resnet152-b12ed2d.pth](https://download.pytorch.org/models/resnet152-b12ed2d.pth) ## 4 Deployment In a popular e-commerce store, we have deployed MXT for 6 English speaking markets covering >10K PT-attributes and have extracted >150MM attribute values. **Design Choices:** In popular e-commerce stores, usually there are more than 100K PT-attributes across various markets. 
Earlier models like NER-MQMRC or CMA-CLIP could be trained only for a few 100s of PT-attributes. The NER-MQMRC (Shrimal et al., 2022) architecture only allowed one product type in one model training, while CMA-CLIP couldn't scale beyond a few 100s of PT-attribute pairs due to network explosion (as they had to create an output layer for each of the different attribute values). This had serious issues of monitoring, refreshing and maintaining the quality of models. Our prompt-based approach in MXT allows us to train a single model checkpoint for any number of PT-attribute pairs. **Practical Challenges:** We faced several challenges during building and deploying the model. One of the biggest challenges was the lack of normalized attribute values. Since we were relying on the distantly supervised training data from the catalog, there were multiple junk values. Normalizing these values is challenging without the support of annotations. To overcome this problem, we used some heuristic matches to merge similar values. We also trimmed the tail attribute values to remove the junk values further. The second major challenge was to evaluate the model and find the threshold for every PT-attribute to achieve the desired precision. Since we had >10K PT-attributes, even if we annotate 300 samples per PT-attribute, it leads to 3MM annotations, which is not feasible. For that, we evaluated the model automatically using the catalog data. Since the catalog data can be noisy, we checked other things like whether the predicted value is present in text, whether the attribute should allow zero-shot prediction etc. Based on these checks, we decided the required precision accordingly. ## 5 Conclusion & Future Work In this paper, we presented MXT, a large scale multi-modal product attribute generation system to extract product attributes from the products listed in eCommerce stores. Our model infers the attribute values using both textual and visual information present on the product pages. We introduced a novel architecture comprising a T5 based encoder and decoder along with two fusion layers to fuse text and image embeddings. We showed our model can beat the existing state of the art extractive as well as predictive models on the benchmark datasets. Our model is scalable to multiple product types and countries by just specifying them in the input text prompt. We further showed that our model is able to perform zero-shot inference, and that it can generate attribute values not present in the text. There are several future directions to explore which can further improve the performance of our model. First, we would like to create an ensemble of NER-based and generative models so that we can leverage the power of extraction based models which work very well for numerical attributes (e.g. size, length etc.). Second, our current approach does not use relational information among the products. Since similar products can have common attribute values, we can use graph based approaches to capture that relational information. Specifically, we can approach the attribute extraction problem through either link prediction or node classification. In the former method, we aim to predict missing links between products and their attributes. Alternatively, the latter approach involves using similarity between product features, including text, images, and co-viewing information, to determine graph edges for classification of product nodes.
## 6 Limitations In this section, we discuss some of the limitations of our current model architecture: (1) **Non-English locales:** Currently in our experiments, we have trained and evaluated models only on English datasets. Building models on non-English locales is the direction for future work, (2) **Use of pre-trained tokenizer:** The T5's tokenizer in our models has been pre-trained on open-domain datasets, and its vocabulary misses out on e-commerce specific terms. For example, the current tokenizer of T5 tokenizes the phrase "skater mid dress" as ["sk", "a", "ter", "mid", "I", "dress"]. Here, the meaning of words "skater" and "mid" is not captured in the tokenized text. We believe that we can overcome this limitation by pre-training T5 on e-commerce data which would help tokenizer understanding and tokenizing the e-commerce specific terms more correctly. ## 7 Acknowledgements We thank all the anonymous reviewers for providing their valuable comments that helped us improve the quality of our paper. We also thank our colleagues in the science, product, and engineering teams at Amazon for their valuable inputs.
2305.18066
Nonreciprocal heat flux via synthetic fields in linear quantum systems
We study the heat transfer between N coupled quantum resonators with applied synthetic electric and magnetic fields realized by changing the resonators' parameters by external drivings. To this end we develop two general methods, based on the quantum optical master equation and on the Langevin equation for $N$ coupled oscillators where all quantum oscillators can have their own heat baths. The synthetic electric and magnetic fields are generated by a dynamical modulation of the oscillator resonance with a given phase. Using Floquet theory we solve the dynamical equations with both methods which allows us to determine the heat flux spectra and the transferred power. We apply these methods to study the specific case of a linear tight-binding chain of four quantum coupled resonators. We find that in that case, in addition to a non-reciprocal heat flux spectrum already predicted in previous investigations, the synthetic fields induce non-reciprocity in the total heat flux, hence realizing a net heat flux rectification.
S. -A. Biehs, P. Rodriguez-Lopez, M. Antezza, G. S. Agarwal
2023-05-29T13:11:26Z
http://arxiv.org/abs/2305.18066v2
# Nonreciprocal heat flux via synthetic fields in linear quantum systems ###### Abstract We study the heat transfer between N coupled quantum resonators with applied synthetic electric and magnetic fields realized by changing the resonators parameters by external drivings. To this end we develop two general methods, based on the quantum optical master equation and on the Langevin equation for \(N\) coupled oscillators where all quantum oscillators can have their own heat baths. The synthetic electric and magnetic fields are generated by a dynamical modulation of the oscillator resonance with a given phase. Using Floquet theory we solve the dynamical equations with both methods which allow us to determine the heat flux spectra and the transferred power. With apply these methods to study the specific case of a linear tight-binding chain of four quantum coupled resonators. We find that in that case, in addition to a non-reciprocal heat flux spectrum already predicted in previous investigations, the synthetic fields induce here non-reciprocity in the total heat flux hence realizing a net heat flux rectification. ## I Introduction In the last decade a great number of experiments have verified the near-field enhancement of thermal radiation between two macroscopic objects down to distances of a few nanometer [1; 2; 3; 4; 5; 6; 7; 8; 9]. In particular, the theoretically proposed effects of thermal rectification with a phase-change diode [10; 11], a phase-change material based memory [12] and active heat flux switching or modulations [13; 14; 15] have been realized, experimentally. Also several proposals for non-reciprocal heat transport have been made, but these effects have not been demonstrated experimentally, yet. Typically, these proposals rely on the application of magnetic fields to nanoscale setups involving magneto-optical materials or by using Weyl semi-metals with intrinsic nonreciprocal optical properties. It can be shown theoretically that by means of magnetic fields the magnitude of the heat flux and its direction can be manipulated [16; 17; 18; 19; 20; 21; 22; 23]. Due to the broken time-reversal symmetry also non-reciprocal heat fluxes can exist in such cases leading to persistent heat currents and fluxes [24; 25], persistent angular momenta and spins [26; 27; 25], normal and anomalous Hall effect for thermal radiation [28; 29], and diode effects by coupling to non-reciprocal surface modes [30; 31; 32], and spin-directional near- and far-field thermal emission [33; 34]. A trade-off of using magneto-optical materials is that for having observable non-reciprocal heat fluxes, experiments with large magnetic fields in a nanoscale setup are necessary. On the other hand, using Weyl semi-metals with intrinsic nonreciprocity does not allow for a dynamic tuning. Recently, the modulation of resonance frequencies of a system of resonators with a single modulation frequency but different phases has been interpreted as a mean to Figure 1: Sketch of \(N\) coupled quantum resonators each coupled to its own heat bath. create synthetic electric and magnetic fields [50]. For the energy transmission in a setup of two resonators with applied synthetic electric and magnetic fields, i.e. with a modulation of the resonance frequencies and a phase shift, it could be shown experimentally and theoretically that monochromatic waves are transmitted in a nonreciprocal manner [51] if there is a non-zero phase shift, i.e. a synthetic magnetic field. 
Now, if the two resonators with applied synthetic electric and magnetic fields are coupled to two thermal reservoirs within a master equation approach [53; 54; 55; 56] then the transmission coefficients for the heat current in both directions are not the same which is a manifestation of a broken detailed balance [52]. However, in this case the total transferred power between both resonances itself is reciprocal even in presence of synthetic electric and magnetic fields [52]. That the transferred power is reciprocal might not be surprising for two reasons. First of all, in the context of Rytov's fluctuational electrodynamics it can easily shown that the total radiative heat flux between two objects is always reciprocal [16]. Non-reciprocal effects necessitate at least a third object and non-reciprocal material properties of the objects or environment [57; 58]. Another argument is that within the quantum master equation approach for linearly coupled oscillators typically non-linear effects need to be included to have nonreciprocal heat flow [59] even though it seems that nonreciprocal heat flow can also be generated by specific choices of temperatures in a linear chain of oscillators [60; 61]. However, as we will show below the application of synthetic electric and magnetic fields can indeed generate nonreciprocal heat flow in a tight-binding configuration of four coupled resonators without the need of non-linearity due to the presence of the synthetic magnetic field. Now, we distinguish our work from previous studies. Several kinds of modulations have been proposed like the periodic modulation of the permittivity [35; 36; 37]. Such modulations have been shown to introduce synthetic magnetic fields for photons [38] and consequently related effects like the Aharonov-Bohm effect for photons [39]. In the context of thermal radiation it could be demonstrated that permittivity modulations can introduce nonreciprocity which manifests itself in a breakdown of the detailed balance in Kirchhoff's law [40] and can be employed for photonic refrigeration [41]. In similar approaches a combined dynamical modulation of the resonances of heat exchanging objects and their interaction strength were applied resulting in a heat pumping effect and non-reciprocal heat fluxes in a three resonator configurations [42; 43]. Heat pumping effects also exist when only the interaction strengths in three-body configurations are dynamically modulated [44]. It must be emphasized that these effects are different from the heat shuttling effect where temperatures or chemical potential of two reservoirs are periodically modulated around their equilibrium values in order to have a heat transport despite the fact that the system is in average in equilibrium [45; 46; 47]. Indeed, in that case the modulation affects the baths only and not resonator parameters. Finally, it could be demonstrated theoretically that adiabatic dynamical modulation of resonators with nonreciprocal conductances geometrical phases can increase or reduce the thermal relaxation [48] and rapid magnetic field modulations in magneto-optical systems can substantially increase the cooling [49]. In this work, we extend the quantum Langevin equation (qLE) and quantum master equation (qME) approach used in Ref. [52] to the case of \(N\) coupled arbitrary resonators with their own heat baths as sketched in Fig. 1 with applied syntetic electric and magnetic synthetic fields. 
Both methods can be used to calculate the heat flux between any two resonators which are coupled to their own reservoirs. We show numerically that both methods give the same values for the heat flux. The qLE approach naturally allows for calculating the heat flux spectra, whereas the master-equation method is a better choice for fast numerical calculations of the heat flux. We use both methods to show that the heat flux itself is nonreciprocal in the presence of synthetic fields in a linear tight-binding chain of four-resonators. The manuscript is organized as follows: First, in Sec. II we introduce the standard master-equation for \(N\) coupled resonators with \(N\) reservoirs. We derive the dynamical equations for the mean values of products of the resonator amplitudes and introduce the qLE for the coupled resonator system. In Sec. III we introduce the synthetic fields in the qLE approach and provide a formal solution in Fourier space. In Sec. IV we introduce the synthetic fields in the master-equation approach and give a formal solution by making a Fourier series ansatz. In Sec. V we show the occurrence of nonreciprocal heat flux in presence of synthetic electric and magnetic fields in a four-resonator chain. We conclude with a summary of the main results in Sec. VI. ## II Langevin and master equations We start with writing down the Hamiltonian of a coupled harmonic oscillator system (each oscillator coupled to its own heat bath of oscillators) which is given by [64] \[H=H_{S}+\sum_{i}H_{B,i}+\sum_{i}H_{SB,i} \tag{1}\] with the Hamiltonian of the system of coupled oscillators \[H_{S}=\sum_{i}\hbar\omega_{i}a_{i}^{\dagger}a_{i}+\sum_{i,j,i\neq j}\hbar g_{ ij}a_{i}^{\dagger}a_{j}, \tag{2}\] with resonance frequencies \(\omega_{i}\) and coupling constants \(g_{ij}=g_{ji}^{*}\) for hermitian system \(H_{S}^{\dagger}=H_{S}\) and the bosonic creation and annihilation operators \(a_{i}^{\dagger}\) and \(a_{i}\). The bath oscillator Hamiltonians are given by(\(i=1,\ldots,N\)) \[H_{B,i}=\sum_{j}\hbar\omega_{ij}b_{ij}^{\dagger}b_{ij} \tag{3}\] with bosonic creation and annihilation operators \(b_{ij}^{\dagger}\) and \(b_{ij}\) and the Hamiltonians describing the linear coupling between the system oscillators and their baths are given by \[H_{SBi}=\mathrm{i}\hbar\sum_{j}g_{B,ij}(a_{i}+a_{i}^{\dagger})(b_{ij}-b_{ij}^{ \dagger}) \tag{4}\] with corresponding coupling constants \(g_{B,ij}\). By assuming the validity of the Born-Markov approximation and tracing out the bath variables one can arrive at the qME [64] \[\frac{\partial\rho_{S}}{\partial t} =-\mathrm{i}\sum_{i}\omega_{i}[a_{i}^{\dagger}a_{i},\rho_{S}]\] \[\quad-\mathrm{i}\sum_{i,j;i\neq j}g_{ij}[a_{i}^{\dagger}a_{j}, \rho_{S}]\] \[\quad-\sum_{i}\kappa_{i}(n_{i}+1)\big{(}a_{i}^{\dagger}a_{i}\rho_ {S}-2a_{i}\rho_{S}a_{i}^{\dagger}+\rho_{S}a_{i}^{\dagger}a_{i}\big{)}\] \[\quad-\sum_{i}\kappa_{i}n_{i}\big{(}a_{i}a_{i}^{\dagger}\rho_{S}- 2a_{i}^{\dagger}\rho_{S}a_{i}+\rho_{S}a_{i}a_{i}^{\dagger}\big{)} \tag{5}\] where the coupling to the bath oscillators is formally given in terms of the coupling constants \(\kappa_{i}=\pi\sum_{j}g_{B,ij}^{2}\delta(\omega_{ij}-\omega_{i})\) and \(n_{i}=[\exp(\hbar\omega_{i}/k_{\mathrm{B}}T_{i})-1]^{-1}\) are the mean occupation numbers at the bath temperatures \(T_{i}\). For the sake of generality we assume that \(g_{ij}\neq g_{ji}\) which will allow to include systems without "inversion symmetry". From the qME we can derive the dynamical equation for the mean values of any observable. 
For example, for the mean values of products of raising and lowering operators we obtain the set of equations (\(k,l=1,\ldots,N;k\neq l\)) \[\frac{\mathrm{d}}{\mathrm{d}t}\langle a_{k}^{\dagger}a_{k}\rangle =-\mathrm{i}\sum_{j,j\neq k}\big{(}g_{kj}\langle a_{k}^{\dagger}a_ {j}\rangle-g_{jk}\langle a_{k}a_{j}^{\dagger}\rangle\big{)}\] \[\quad-2\kappa_{k}\langle a_{k}^{\dagger}a_{k}\rangle+2\kappa_{k}n _{k}, \tag{6}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\langle a_{k}^{\dagger}a_{l}\rangle =\Omega_{kl}\langle a_{k}^{\dagger}a_{l}\rangle-\mathrm{i}\sum_{ j\neq k;j\neq l}\big{(}g_{lj}\langle a_{k}^{\dagger}a_{j}\rangle-g_{jk} \langle a_{j}^{\dagger}a_{l}\rangle\big{)}\] \[\quad-\mathrm{i}g_{lk}\big{(}\langle a_{k}^{\dagger}a_{k}\rangle- \langle a_{l}^{\dagger}a_{l}\rangle\big{)} \tag{7}\] with \[\Omega_{kl}=\mathrm{i}(\omega_{k}-\omega_{l})-\kappa_{k}-\kappa_{l}. \tag{8}\] In the following we will refer to this set of equations for the mean values of operator products (6) and (7) as master-equation approach as they are derived from the qME in Eq. (5). Similarly, one obtains for the time evolution of the mean values of the raising/lowering operators of each oscillator \(a_{i}\) the set of equations (\(k=1,\ldots,N\)) \[\frac{\mathrm{d}}{\mathrm{d}t}\langle a_{k}\rangle=-\Omega_{k}\langle a_{k} \rangle-\mathrm{i}\sum_{i;i\neq k}g_{ki}\langle a_{i}\rangle \tag{9}\] with \(\Omega_{k}\equiv\mathrm{i}\omega_{k}+\kappa_{k}\). The set of equations for the mean values of the lowering operators of the two oscillators in Eq. (9) motivates the introduction of a set of qLE for the operators themselves instead for their expectation values \[\dot{a}_{k}=-\mathrm{i}\omega_{k}a_{k}-\kappa_{k}a_{k}-\mathrm{i}\sum_{i,i\neq k }g_{ki}a_{i}+F_{k}, \tag{10}\] where the coupling to baths is taken into account by the bath operators \(F_{k}\) which obviously must fulfill \(\langle F_{k}\rangle=0\) to retrieve Eqs. (9). Furthermore, to be consistent with the set of Eqs. (6)-(7) the correlation functions of the bath operators are given by \[\langle F_{k}^{\dagger}(t)F_{k}(t^{\prime})\rangle =2\kappa_{k}n_{k}\delta(t-t^{\prime}), \tag{11}\] \[\langle F_{k}(t)F_{k}^{\dagger}(t^{\prime})\rangle =2\kappa_{k}(n_{k}+1)\delta(t-t^{\prime}) \tag{12}\] and \(\langle F_{k}F_{k}\rangle=\langle F_{k}^{\dagger}F_{k}^{\dagger}\rangle=0\). Furthermore, the bath operators of different baths are uncorrelated. ## III Langevin equations with synthetic fields We now use the set of qLEs as introduced above and include a frequency modulation (\(k=1,\ldots,N\)) \[\omega_{k}\rightarrow\omega_{k}+m_{k}\beta\cos(\Omega t+\theta_{k}), \tag{13}\] with phase shifts \(\theta_{k}\) and \(m_{k}=\{0,1\}\) (\(m_{k}=0\) modulation of oscillator \(k\) turned of, \(m_{k}=1\) modulation turned on). The set of coupled qLE in frequency space is therefore (\(k=1,\ldots,N\)) \[X_{k}a_{k}(\omega)+\mathrm{i}\sum_{l\neq k}g_{ki}a_{l}(\omega)=F_{k}+\frac{ \beta}{2\mathrm{i}}\big{(}a_{k,-}\mathrm{e}^{-\mathrm{i}\theta_{k}}+a_{k,+} \mathrm{e}^{+\mathrm{i}\theta_{k}}\big{)} \tag{14}\] introducing \[X_{k}=\mathrm{i}(\omega_{k}-\omega)+\kappa_{k} \tag{15}\] and the short-hand notation \[a_{k,\pm}=a_{k}(\omega\pm\Omega). 
\tag{16}\] The coupled qLEs can now be brought in matrix form \[\mathbf{\psi}=\mathsf{M}\mathbf{F}+\frac{\beta}{2\mathrm{i}}\mathsf{M}\mathsf{Q}_{+} \mathbf{\psi}_{+}+\frac{\beta}{2\mathrm{i}}\mathsf{M}\mathsf{Q}_{-}\mathbf{\psi}_{-} \tag{17}\] by introducing the vectors \[\mathbf{\psi}=\begin{pmatrix}a_{1}(\omega)\\ \vdots\\ a_{N}(\omega)\end{pmatrix},\mathbf{\psi}_{\pm}=\begin{pmatrix}a_{1}(\omega\pm \Omega)\\ \vdots\\ a_{N}(\omega\pm\Omega)\end{pmatrix},\mathbf{F}=\begin{pmatrix}F_{1}(\omega)\\ \vdots\\ F_{N}(\omega)\end{pmatrix}, \tag{18}\] and the matrices \[\mathsf{I}\mathsf{M}=\mathsf{A}^{-1}\quad\text{with}\quad\mathsf{A}=\begin{pmatrix} X_{1}&\mathrm{i}g_{12}&\ldots&\mathrm{i}g_{1N}\\ \mathrm{i}g_{21}&X_{2}&\ldots&\mathrm{i}g_{2N}\\ \vdots&\ldots&\vdots&\vdots\\ \mathrm{i}g_{1N}&g_{2N}&\ldots&X_{N}\end{pmatrix} \tag{19}\] and \[\mathds{Q}_{\pm}=\mathrm{diag}(\mathrm{e}^{\pm\mathrm{i}\theta_{1}}m_{1},\dots, \mathrm{e}^{\pm\mathrm{i}\theta_{N}}m_{N}). \tag{20}\] In Eq. (17) it can be clearly seen that due to the modulation there are couplings to the next sidebands \(\omega\pm\Omega\) so that this set of equations is recursive and infinitely large. These side bands can be understood as being the consequent of a synthetic constant electric field. Furthermore, the phase shift adds a phase \(\pm\theta_{k}\) to this coupling which can be understood as a consequence of a synthetic magnetic field. The solution of the coupled qLEs (17) can formally be written down for all orders. By introducing the block vectors \[\underline{\mathbf{\psi}} =(\dots,\mathbf{\psi}_{++},\mathbf{\psi}_{+},\mathbf{\psi},\mathbf{\psi}_{-},\bm {\psi}_{--}\dots)^{t}, \tag{21}\] \[\underline{\mathbf{F}} =(\dots,\mathbf{F}_{++},\mathbf{F}_{+},\mathbf{F},\mathbf{F}_{-},\mathbf{F}_{--},\dots)^{t}, \tag{22}\] the diagonal block matrix \[\underline{\mathbf{M}}=\begin{pmatrix}\dots&\dots&\dots&\dots&\dots\\ \dots&\mathds{M}_{+}&\mathds{O}&\mathds{O}&\dots\\ \dots&\mathds{O}&\mathds{M}&\mathds{O}&\dots\\ \dots&\mathds{O}&\mathds{O}&\mathds{M}_{-}&\dots\\ \dots&\dots&\dots&\dots&\dots\end{pmatrix} \tag{23}\] and tridiagonal block matrix \[\underline{\mathds{L}}=\begin{pmatrix}\dots&\dots&\dots&\dots&\dots\\ \frac{\mathrm{i}\beta}{2}\mathds{M}_{+}\mathds{Q}_{+}&\mathds{I}&\frac{ \mathrm{i}\beta}{2}\mathds{M}_{+}\mathds{Q}_{-}&\mathds{O}&\dots\\ \dots&\frac{\mathrm{i}\beta}{2}\mathds{M}\mathds{Q}_{+}&\mathds{I}&\frac{ \mathrm{i}\beta}{2}\mathds{M}\mathds{Q}_{-}&\dots\\ \dots&\mathds{O}&\frac{\mathrm{i}\beta}{2}\mathds{M}_{-}\mathds{Q}_{+}& \mathds{I}&\frac{\mathrm{i}\beta}{2}\mathds{M}_{-}\mathds{Q}_{-}\\ \dots&\dots&\dots&\dots&\dots\end{pmatrix} \tag{24}\] we can rewrite the coupled qLE (17) as a matrix equation \[\underline{\mathds{L}}\underline{\mathbf{\psi}}=\underline{\mathds{M}}\underline {\mathds{F}}. \tag{25}\] Hence \[\underline{\mathbf{\psi}}=\underline{\mathds{L}}^{-1}\underline{\mathds{M}} \underline{\mathds{F}}. \tag{26}\] By considering only block vectors \(\underline{\mathbf{\psi}}\) of \(2n+1\) vectors \(\mathbf{\psi}\) with the corresponding block matrices of size \((2n+1)\times(2n+1)\) submatrices we obtain the perturbation results up to order \(n\). Note, that the full size of the block vectors and matrices is \(N(2n+1)\) and \(N^{2}(2n+1)^{2}\), resp. To evaluate these spectra in our general formalism, we start with Eq. (26) and introduce the block matrices \(\underline{\mathds{Y}}_{1}=\mathrm{diag}(1,0,0,0,1,0,0,0,\dots)\), \(\underline{\mathds{Y}}_{2}=\mathrm{diag}(0,1,0,0,0,1,0,0,\dots)\) etc. 
so that \(\sum_{k}\underline{\mathds{Y}}_{k}=\underline{\mathds{I}}\). These matrices allow us to split the contributions from all baths \(k\) so that \[\underline{\mathbf{\psi}}=\sum_{k=1}^{N}\underline{\mathds{L}}^{-1}\underline{ \mathds{M}}\mathds{Y}_{k}\underline{\mathds{F}}. \tag{27}\] To evaluate products we use the fluctuation-dissipation theorem in the form \[\langle F_{k}^{\dagger}(\omega+l\Omega)F_{k^{\prime}}(\omega^{\prime}+l^{ \prime}\Omega)\rangle=\delta_{k,k^{\prime}}\delta_{ll^{\prime}}2\pi\delta( \omega-\omega^{\prime})\langle F_{k}^{\dagger}F_{k}\rangle_{\omega}, \tag{28}\] where \(\langle F_{k}^{\dagger}F_{k}\rangle_{\omega}=2\kappa_{k}n_{k}\). Here, in agreement with the treatment in the qME approach, we are assuming that \(n_{k}\) is constant as demanded by the assumption of white noise. This assumption is justified for \(\beta\ll\omega_{k}\) and \(\Omega\ll k_{\mathrm{B}}T/\hbar\). Then we have \[\langle\underline{\mathbf{\psi}}_{\alpha}^{\dagger}\underline{\mathbf{\psi}}_{ \epsilon}\rangle_{\omega}=\sum_{k=1}^{N}2\kappa_{k}n_{k}\big{(}\underline{ \mathds{L}}^{-1}\underline{\mathds{M}}\underline{\mathds{Y}}_{k}\underline{ \mathds{M}}^{\dagger}\underline{\mathds{L}}^{-1}\big{)}_{\epsilon,\alpha} \tag{29}\] using the properties \(\underline{\mathds{Y}}_{k}^{\dagger}=\underline{\mathds{Y}}_{k}\) and \(\underline{\mathds{Y}}_{k}\underline{\mathds{Y}}_{k}=\underline{\mathds{Y}}_{k}\). From this expression we can numerically calculate all spectral correlation functions. The mean heat flux (transferred power over one oscillation period) from oscillator \(k\) at temperature \(T_{k}\neq 0\,\mathrm{K}\) to an oscillator \(l\) at temperature \(T_{l}=0\,\mathrm{K}\) is defined by [59] \[P_{k\to l}=\int\frac{\mathrm{d}\omega}{2\pi}\hbar\omega_{k}2\kappa_{l} \langle a_{l}^{\dagger}a_{l}\rangle_{\omega} \tag{30}\] where \(\langle a_{l}^{\dagger}a_{l}\rangle_{\omega}\) is given by \(\langle\underline{\mathbf{\psi}}_{\alpha}^{\dagger}\underline{\mathbf{\psi}}_{\epsilon} \rangle_{\omega}\) from Eq. (29) with \(\epsilon=\alpha=Nn+l\) coming from the term involving \(n_{k}\) due to bath \(k\). The total power emitted by the hot oscillator \(k\) is given by \[P_{k}^{\mathrm{em}}=\int\frac{\mathrm{d}\omega}{2\pi}\hbar\omega_{k}2\kappa_{k}(n _{k}-\langle a_{k}^{\dagger}a_{k}\rangle_{\omega}). \tag{31}\] Since energy conservation is fulfilled, the sum of the power received by all resonators \(l\) equals that emitted by resonator \(k\) so that we have \[P_{k}^{\mathrm{em}}=\sum_{l\neq k}P_{k\to l}. \tag{32}\] This relation is very useful for validating the numerical results. ## IV Master equations with synthetic fields Now, instead of the qLEs we use the qME (6)-(7) with a periodic driving as in Eq. (13).
## IV Master equations with synthetic fields
Now, instead of the qLEs we use the qME (6)-(7) with a periodic driving as in Eq. (13). This directly leads to the set of equations \[\frac{\mathrm{d}}{\mathrm{d}t}\langle a_{k}^{\dagger}a_{k}\rangle=-\mathrm{i}\sum_{j,j\neq k}\bigl{(}g_{kj}\langle a_{k}^{\dagger}a_{j}\rangle-g_{jk}\langle a_{k}a_{j}^{\dagger}\rangle\bigr{)}-2\kappa_{k}\langle a_{k}^{\dagger}a_{k}\rangle+2\kappa_{k}n_{k}, \tag{33}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\langle a_{k}^{\dagger}a_{l}\rangle=\tilde{\Omega}_{kl}\langle a_{k}^{\dagger}a_{l}\rangle-\mathrm{i}\sum_{j\neq k;j\neq l}\bigl{(}g_{lj}\langle a_{k}^{\dagger}a_{j}\rangle-g_{jk}\langle a_{j}^{\dagger}a_{l}\rangle\bigr{)}-\mathrm{i}g_{lk}\bigl{(}\langle a_{k}^{\dagger}a_{k}\rangle-\langle a_{l}^{\dagger}a_{l}\rangle\bigr{)} \tag{34}\] with \[\tilde{\Omega}_{kl}=+\mathrm{i}(\omega_{k}-\omega_{l})-\kappa_{k}-\kappa_{l}+\mathrm{i}\beta\bigl{[}m_{k}\cos(\Omega t+\theta_{k})-m_{l}\cos(\Omega t+\theta_{l})\bigr{]}. \tag{35}\] To solve the equations we make the Fourier series ansatz for the expectation values of each observable \(O\) such that \[\langle O\rangle=\sum_{n}\mathrm{e}^{-\mathrm{i}n\Omega t}\langle O\rangle_{n}. \tag{36}\] Then we note that \[\sum_{n}\mathrm{e}^{-\mathrm{i}n\Omega t}\langle O\rangle_{n}\big{(}m_{k}\cos(\Omega t+\theta_{k})-m_{l}\cos(\Omega t+\theta_{l})\big{)}=\sum_{n}\mathrm{e}^{-\mathrm{i}n\Omega t}\bigg{[}\frac{\eta_{kl}}{2}\langle O\rangle_{n+1}+\frac{\eta_{kl}^{*}}{2}\langle O\rangle_{n-1}\bigg{]} \tag{37}\] with \[\eta_{kl}=(m_{k}\mathrm{e}^{\mathrm{i}\theta_{k}}-m_{l}\mathrm{e}^{\mathrm{i}\theta_{l}}). \tag{38}\] Inserting this ansatz in the set of Eqs. (33)-(34) gives the following set of equations for the Fourier components \[\big{[}-\mathrm{i}n\Omega+2\kappa_{k}\big{]}\langle a_{k}^{\dagger}a_{k}\rangle_{n}=-\mathrm{i}\sum_{j,j\neq k}\bigl{(}g_{kj}\langle a_{k}^{\dagger}a_{j}\rangle_{n}-g_{jk}\langle a_{k}a_{j}^{\dagger}\rangle_{n}\bigr{)}+2\kappa_{k}n_{k}\delta_{n0}, \tag{39}\] \[\big{[}-\mathrm{i}n\Omega-\Omega_{kl}\big{]}\langle a_{k}^{\dagger}a_{l}\rangle_{n}=-\mathrm{i}\sum_{j\neq k;j\neq l}\bigl{(}g_{lj}\langle a_{k}^{\dagger}a_{j}\rangle_{n}-g_{jk}\langle a_{j}^{\dagger}a_{l}\rangle_{n}\bigr{)}-\mathrm{i}g_{lk}\bigl{(}\langle a_{k}^{\dagger}a_{k}\rangle_{n}-\langle a_{l}^{\dagger}a_{l}\rangle_{n}\bigr{)}-\frac{\mathrm{i}\beta\eta_{kl}}{2}\langle a_{k}^{\dagger}a_{l}\rangle_{n+1}-\frac{\mathrm{i}\beta\eta_{kl}^{*}}{2}\langle a_{k}^{\dagger}a_{l}\rangle_{n-1}. \tag{40}\] The set of equations for the Fourier components can again be written in matrix form \[\underline{\mathds{L}}\,\underline{\psi}=\underline{\kappa} \tag{41}\] when introducing the block vector \[\underline{\psi}=(\ldots,\boldsymbol{\psi}_{1},\boldsymbol{\psi}_{0},\boldsymbol{\psi}_{-1},\ldots)^{t} \tag{42}\] with \[\boldsymbol{\psi}_{n}=(\langle a_{1}^{\dagger}a_{1}\rangle_{n},\ldots,\langle a_{N}^{\dagger}a_{N}\rangle_{n},\langle a_{1}^{\dagger}a_{2}\rangle_{n},\langle a_{2}^{\dagger}a_{1}\rangle_{n},\ldots,\langle a_{N}^{\dagger}a_{N-1}\rangle_{n})^{t}, \tag{43}\] as well as the block vector \[\underline{\kappa}=(\ldots,0,0,+2\kappa_{1}n_{1},\ldots,+2\kappa_{N}n_{N},0,0,\ldots)^{t}.
\tag{44}\] The block matrix \(\underline{\mathds{L}}\) then takes the form of a tri-diagonal block matrix \[\underline{\mathds{L}}=\begin{pmatrix}\ldots&\ldots&\ldots&\ldots&\ldots\\ \ldots&\mathds{M}_{1}&\mathds{G}^{-}&\mathds{O}&\ldots\\ \ldots&\mathds{G}^{+}&\mathds{M}_{0}&\mathds{G}^{-}&\ldots\\ \ldots&\mathds{O}&\mathds{G}^{+}&\mathds{M}_{-1}&\ldots\\ \ldots&\ldots&\ldots&\ldots&\ldots\end{pmatrix} \tag{45}\] with \[\mathds{M}_{n}=\begin{pmatrix}-\mathrm{i}n\Omega+2\kappa_{1}&0&\ldots&0&+\mathrm{i}g_{12}&-\mathrm{i}g_{21}&\ldots&0&0\\ 0&-\mathrm{i}n\Omega+2\kappa_{2}&\ldots&0&-\mathrm{i}g_{12}&\mathrm{i}g_{21}&\ldots&\ldots&\vdots\\ \vdots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\vdots\\ 0&0&\ldots&-\mathrm{i}n\Omega+2\kappa_{N}&0&0&\ldots&-\mathrm{i}g_{N-1,N}&\mathrm{i}g_{N,N-1}\\ -\mathrm{i}g_{21}&\mathrm{i}g_{12}&\ldots&0&-\mathrm{i}n\Omega-\Omega_{12}&0&\ldots&0&0\\ \mathrm{i}g_{21}&-\mathrm{i}g_{12}&\ldots&0&0&-\mathrm{i}n\Omega-\Omega_{21}&\ldots&0&0\\ \vdots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\vdots\\ 0&0&-\mathrm{i}g_{3N}&\mathrm{i}g_{3N}&\cdots&\ldots&\ldots&0&-\mathrm{i}n\Omega-\Omega_{N,N-1}\end{pmatrix} \tag{46}\] and \[\mathds{G}^{+}=\frac{\mathrm{i}\beta}{2}\mathrm{diag}(0,\ldots,0,\eta_{12},-\eta_{12},\ldots,\eta_{N-1,N},-\eta_{N-1,N}) \tag{47}\] and \(\mathds{G}^{-}\) defined as the matrix obtained from \(\mathds{G}^{+}\) when complex conjugating \(\eta_{kl}\). The different "perturbation orders" \(n\) can be obtained by using \(2n+1\) subblocks in the matrix \(\underline{\mathds{L}}\). Note that even though we use the same notation as in the qLE approach, the vectors and matrices used here are different and also have a different dimension. Here the dimension of the block vectors and matrices is \(N^{2}(2n+1)\) and \(N^{4}(2n+1)^{2}\), respectively. The mean heat flux (transferred power over one oscillation period) from oscillator \(k\) at temperature \(T_{k}\) to an oscillator \(l\) at temperature \(T_{l}=0\,\mathrm{K}\) is defined by [52] \[P_{k\to l}=\hbar\omega_{k}2\kappa_{l}\langle a_{l}^{\dagger}a_{l}\rangle_{0} \tag{48}\] taking \(n_{i}=0\) for all other resonators. Again the total emitted mean power by particle \(k\) is given by \[P_{k}^{\mathrm{em}}=\hbar\omega_{k}2\kappa_{k}(n_{k}-\langle a_{k}^{\dagger}a_{k}\rangle_{0}) \tag{49}\] and we have energy conservation, i.e. \(P_{k}^{\mathrm{em}}=\sum_{l\neq k}P_{k\to l}\). The advantage of the qME approach is that, in contrast to the qLE approach of Eqs. (30)-(31), no frequency integration is necessary. On the other hand, the size of the matrices for a given perturbation order is much larger than for the qLE approach.
## V Four Resonators Case: Nonreciprocal Heat Flux with Synthetic Fields
We consider here the heat flux in a chain of four resonators as depicted in Fig. 2. We assume that all resonators are identical and we further assume reciprocal nearest-neighbour coupling with identical coupling strength \(g\) so that the non-zero coupling constants are \(g_{12}=g_{21}=g_{32}=g_{23}=g_{34}=g_{43}=g\). The resonance frequencies \(\omega_{1}\) and \(\omega_{4}\) of the resonators \(1\) and \(4\) are fixed to \(\omega_{0}\), whereas the resonance frequencies of the resonators in the middle are modulated as \[\omega_{2}=\omega_{0}+\beta\cos(\Omega t), \tag{50}\] \[\omega_{3}=\omega_{0}+\beta\cos(\Omega t+\theta). \tag{51}\] In this configuration, we first determine the power \(P_{14}\) transferred from resonator \(1\) to resonator \(4\) with \(T_{1}=300\,\mathrm{K}\) and \(T_{2}=T_{3}=T_{4}=0\,\mathrm{K}\).
Then we compare with the heat flow in the backward direction by calculating the power \(P_{41}\) transferred from resonator \(4\) to resonator \(1\) with \(T_{4}=300\,\mathrm{K}\) and \(T_{1}=T_{2}=T_{3}=0\,\mathrm{K}\). Hence, in our configuration only the first and the last resonator are coupled to a heat bath. Therefore the modulation frequency \(\Omega\) and the modulation strength \(\beta\) are here in principle not limited by the constraint due to the white-noise assumption, because the two resonators in the middle have zero temperature. Nonetheless, we will restrict ourselves to values which fulfill the above criteria for the white-noise approximation. For our numerical calculations we use \(\omega_{0}=1.69\times 10^{14}\,\mathrm{rad/s}\) and \(\kappa=0.013\omega_{0}\), which are the values taken from those for a graphene flake with \(E_{F}=0.4\,\mathrm{eV}\) from Ref. [62]. The coupling constant \(g\) is determined by the near-field heat flux value, which depends on the relative distance between the graphene flakes. For a distance \(d=100\,\mathrm{nm}\) between two graphene flakes, a fitting of the resonator model with the results from fluctuational electrodynamics [52] gives \(g=0.011\kappa\). Hence, we are in the weak coupling regime. In Fig. 3(a) we show the results for the transferred power as a function of the modulation strength \(\beta\) and for two different values of \(\theta\). We also show the numerical results obtained with the qME method with Eq. (48) and the qLE approach with Eq. (30). First of all, we can see that both methods provide the same values for the exchanged power. Furthermore, it can be seen that the heat flux is clearly nonreciprocal, in contrast to the case of two resonators or two graphene flakes where the heat flux is reciprocal despite the nonreciprocal spectra [52].
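These results can be cross-checked with a few lines of code. The sketch below does not assemble the Fourier block matrix of Eq. (45); instead it integrates the moment equations (33)-(35) directly to the periodic steady state and period-averages \(\langle a_{l}^{\dagger}a_{l}\rangle\), which equals the \(n=0\) Fourier component entering Eq. (48). The hot-bath occupation and the integration settings are illustrative choices.

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23
omega0, kappa = 1.69e14, 0.013 * 1.69e14
g, Omega, beta = 0.011 * kappa, 0.05 * omega0, 0.05 * omega0
theta = np.array([0.0, 0.0, 0.5 * np.pi, 0.0])      # phase lag of resonator 3
m     = np.array([0.0, 1.0, 1.0, 0.0])              # resonators 2 and 3 are modulated
kap   = np.full(4, kappa)
G = np.zeros((4, 4))
for j in range(3):
    G[j, j + 1] = G[j + 1, j] = g

def period_averaged_occupations(n_bath):
    """Period-averaged <a_k^dag a_k> in the periodic steady state of Eqs. (33)-(35)."""
    n_bath = np.asarray(n_bath, dtype=float)
    def deriv(t, C):                                 # C[k, l] = <a_k^dag a_l>
        w = omega0 + beta * m * np.cos(Omega * t + theta)
        dC = (1j * (w[:, None] - w[None, :]) - kap[:, None] - kap[None, :]) * C
        dC += 1j * (G @ C - C @ G)                   # coherent coupling terms
        dC += np.diag(2.0 * kap * n_bath)            # thermal injection from the baths
        return dC
    T = 2.0 * np.pi / Omega
    steps_per_period = 400
    dt = T / steps_per_period
    C = np.zeros((4, 4), dtype=complex)
    avg = np.zeros((4, 4), dtype=complex)
    for step in range(200 * steps_per_period):       # ~200 modulation periods, classical RK4
        t = step * dt
        k1 = deriv(t, C); k2 = deriv(t + dt / 2, C + dt / 2 * k1)
        k3 = deriv(t + dt / 2, C + dt / 2 * k2); k4 = deriv(t + dt, C + dt * k3)
        C = C + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if step >= 199 * steps_per_period:           # average over the final period
            avg += C / steps_per_period
    return np.real(np.diag(avg))

n_hot = 1.0 / np.expm1(hbar * omega0 / (kB * 300.0))     # Bose occupation at 300 K
P_14 = hbar * omega0 * 2 * kap[3] * period_averaged_occupations([n_hot, 0, 0, 0])[3]
P_41 = hbar * omega0 * 2 * kap[0] * period_averaged_occupations([0, 0, 0, n_hot])[0]
print(P_14, P_41, (P_14 - P_41) / (P_14 + P_41))
```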
As detailed in Ref. [51], for instance, the nonreciprocity in transmission as sketched in Fig. 2 can be understood in second-order perturbation theory as an interference of different transmission paths. The energy at \(\omega_{0}\) provided by resonator \(1\) can go through the chain in second order via the upper and lower sideband at \(\omega_{0}\pm\Omega\) by two "scattering events" \(\omega_{0}\rightarrow\omega_{0}+\Omega\) and \(\omega_{0}+\Omega\rightarrow\omega_{0}\), or \(\omega_{0}\rightarrow\omega_{0}-\Omega\) and \(\omega_{0}-\Omega\rightarrow\omega_{0}\), as sketched in Fig. 2. Due to the presence of the synthetic magnetic field, a phase is picked up in this process which is not the same in forward transmission from resonator \(1\to 4\) and backward transmission from resonator \(4\to 1\). This symmetry breaking by the synthetic magnetic field can be directly understood from Eq. (14), which shows that upward and downward transitions between the Floquet sidebands are connected with picking up a positive or negative phase. Hence the forward and backward transmission along the upper or lower sidebands results in different phase factors. For a plane wave with frequency \(\omega\) transmitted through the coupled resonators \(2\) and \(3\), the difference in the transmission is explicitly given by [51] \[\tau_{23}-\tau_{32}=-2\mathrm{i}\frac{\beta^{2}}{4}\big{[}\tau(\omega+\Omega)-\tau(\omega-\Omega)\big{]}\sin(\theta) \tag{52}\] where \(\tau(\omega)\) is the transmission coefficient without modulation. This shows that there is a nonreciprocal transmission for any phase difference \(\theta\neq m\pi\) with integer \(m\). From this expression it can be expected that, at least in second-order perturbation theory, i.e. when \(\beta\) is sufficiently small, the largest difference occurs for \(\theta=\pi/2\). For the four-resonator configuration depicted in Fig. 2 a similar expression can be derived using second-order perturbation theory for the qME approach, as detailed in appendix A.
Figure 2: Sketch of a chain of four resonators \(1\), \(2\), \(3\), \(4\) with equal nearest-neighbor couplings \(g\) and resonance frequencies \(\omega_{0}\). The oscillators in the middle are modulated with a modulation strength \(\beta\) and a relative phase shift \(\theta\), resulting in synthetic electric and magnetic fields.
In the weak coupling limit \(g\ll\kappa\) we find for the difference of the heat flux in forward and backward direction \[\frac{P_{14}-P_{41}}{\hbar\omega_{0}ng}=\beta^{2}\frac{g^{5}}{\kappa^{5}}\bigg{[}\frac{7}{8}\frac{\mathrm{Im}(A^{2})}{|A|^{4}}+\frac{\kappa\,\mathrm{Im}(A^{3})}{|A|^{6}}-\frac{\kappa^{3}\,\mathrm{Im}(A^{5})}{|A|^{10}}\bigg{]}\sin(\theta) \tag{53}\] where \(A=2\kappa-\mathrm{i}\Omega\) and \(n\equiv n_{1}=n_{4}\) is the mean occupation number of resonator 1 in the forward or resonator 4 in the backward direction. In Fig. 3(b) we compare its predictions with the exact numerical results from Fig. 3(a), clearly showing its validity in the small-\(\beta\) limit. This expression has a similar structure as Eq. (52), indicating the same dependence on \(\theta\) in the limit of small driving amplitudes \(\beta\). To see this effect, we show in Fig. 4 the relative power transmission \[E\equiv\frac{P_{14}-P_{41}}{P_{14}+P_{41}}. \tag{54}\] It can be seen that indeed for \(\beta<0.05\omega_{0}\) the maximum difference in forward and backward heat flow happens at \(\theta=\pm\pi/2\). For larger modulation strengths higher-order effects play a role, so that this maximum shifts to slightly larger or smaller values of the dephasing. Finally, in Fig. 5 the spectra of the power \(P_{14,\omega}\) and \(P_{41,\omega}\) obtained with the qLE approach in the forward and backward direction are shown using \(\Omega=0.05\omega_{0}\), \(\beta=0.05\omega_{0}\), and \(\theta=\pi/2\).
Figure 3: (a) \(P_{14}\) (solid line) and \(P_{41}\) (dashed line) from the qME approach (48) at perturbation order \(n=15\) normalized to the value \(P_{14}(\beta=0)=P_{41}(\beta=0)=5.88\times 10^{-22}\,\mathrm{W}\) for \(g=0.011\kappa\) and \(\Omega=0.05\omega_{0}\) for \(\theta=0.1\pi\) and \(\theta=0.5\pi\). The filled and open symbols are the results for \(P_{14}\) and \(P_{41}\) resulting from integration of spectra as in Fig. 5 from the qLE approach according to Eq. (30) at perturbation order \(n=10\). (b) Comparison of exact numerical results (solid lines) for the difference \(P_{14}-P_{41}\) normalized to \(P_{14}(\beta=0)=P_{41}(\beta=0)=5.88\times 10^{-22}\,\mathrm{W}\) with the corresponding power difference from the approximate expression (dashed lines) from Eq. (53).
Figure 4: We show the relative power transmission \(E\) defined in Eq. (54) as a function of the dephasing \(\theta\) for \(\Omega=0.05\omega_{0}\) and different values of modulation strength \(\beta\) using the qME approach in order \(n=15\).
It can be seen that the spectra for the heat flow in forward and backward direction are not the same, as also found for two graphene flakes only [52].
Furthermore, it can be seen that the side-band contribution is very small, so that the main nonreciprocity stems from frequencies around the resonance \(\omega_{0}\). Integrating these spectra according to Eq. (30) gives the full transferred power for the forward and backward direction shown in Fig. 3(a). Hence, in our four-resonator system we clearly find a nonreciprocal heat flow due to electric and magnetic synthetic fields. Even though our example might be difficult to realize in practice, it clearly shows that synthetic electric and magnetic fields can generate a nonreciprocal heat flux. We emphasize that this result is not limited to near-field heat transfer between graphene flakes but is generally valid for any configuration and any heat transfer channel which can be described by four coupled resonators with synthetic fields.
## VI Conclusion
To summarize, based on the local qME we have introduced a formalism for a qLE and qME approach for \(N\) coupled resonators with electric and magnetic synthetic fields. The qLE approach is the natural choice when heat flux spectra are studied, whereas for the heat flow the qME approach is a better choice, because it is faster. As a very important example, we have used both approaches to show for a system of four linearly coupled resonators that the heat flow is nonreciprocal when electric and magnetic synthetic fields are present. We have also verified numerically that both approaches give the same values for the heat flux. Even though for the numerical evaluation we have considered the near-field heat transfer in a system of four coupled graphene flakes, our findings are very general and applicable to any system and any heat flux channel which can be described by coupled resonators. Hence, our formalism provides the foundation for further studies on the heat flux and other physical effects in coupled many-resonator systems with synthetic fields.
###### Acknowledgements.
S.-A. B. acknowledges support from the Heisenberg Programme of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project No. 461632548. S.-A. B. and P. R.-L. thank the University of Montpellier and the group Theory of Light-Matter and Quantum Phenomena of the Laboratoire Charles Coulomb for hospitality during their stay in Montpellier, where parts of this work have been done. S.-A. B., P. R.-L., and M. A. acknowledge the QuantUM program of the University of Montpellier. G.S. A. thanks the kind support of The Air Force Office of Scientific Research [AFOSR award no. FA9550-20-1-0366] and The Robert A. Welch Foundation [grant no. A-1943]. P. R.-L. acknowledges support from AYUDA PUENTE 2022, URJC.
## Appendix A Perturbation theory for qME approach
In this section we derive the second-order expression in Eq. (53). To this end, we start with the Fourier equations for the qME in Eq. (41), taking terms with \(n=0,1,-1\). Then we have \[\mathds{M}_{0}\mathbf{\psi}_{0}=\mathbf{\kappa}-\mathds{G}^{+}\mathbf{\psi}_{+1}-\mathds{G}^{-}\mathbf{\psi}_{-1}, \tag{56}\] \[\mathds{M}_{+1}\mathbf{\psi}_{+1}=-\mathds{G}^{+}\mathbf{\psi}_{+2}-\mathds{G}^{-}\mathbf{\psi}_{0}, \tag{57}\] \[\mathds{M}_{-1}\mathbf{\psi}_{-1}=-\mathds{G}^{+}\mathbf{\psi}_{0}-\mathds{G}^{-}\mathbf{\psi}_{-2}.
\tag{58}\] By inserting the expressions for \(\mathbf{\psi}_{+1/-1}\) into the equation for \(\mathbf{\psi}_{0}\) and neglecting terms with \(|n|\geq 2\) we arrive at \[\mathds{N}\mathbf{\psi}_{0}=\mathbf{\kappa}\quad\Rightarrow\quad\mathbf{\psi}_{0}=\mathds{N}^{-1}\mathbf{\kappa} \tag{59}\] with \[\mathds{N}=\big{[}\mathds{M}_{0}-\mathds{G}^{+}\mathds{M}_{+1}^{-1}\mathds{G}^{-}-\mathds{G}^{-}\mathds{M}_{-1}^{-1}\mathds{G}^{+}\big{]}. \tag{60}\] By defining \[\mathds{G}^{+}=\frac{\mathrm{i}\beta}{2}\tilde{\mathds{G}}\quad\text{and}\quad\mathds{G}^{-}=\frac{\mathrm{i}\beta}{2}\tilde{\mathds{G}}^{*} \tag{61}\] with \(\tilde{\mathds{G}}=\mathrm{diag}(0,\dots,0,\eta_{12},-\eta_{12},\dots,\eta_{N-1,N},-\eta_{N-1,N})\) we have \[\mathds{N}=\bigg{[}\mathds{M}_{0}+\frac{\beta^{2}}{4}\big{(}\tilde{\mathds{G}}\mathds{M}_{+1}^{-1}\tilde{\mathds{G}}^{*}+\tilde{\mathds{G}}^{*}\mathds{M}_{-1}^{-1}\tilde{\mathds{G}}\big{)}\bigg{]}. \tag{62}\] From this expression it becomes obvious that the first non-vanishing contributions to the zeroth order stem from the second-order terms, i.e. there is no contribution linear in \(\beta\). For the tight-binding model of the four identical resonators the involved vectors have \(16\) components and the matrices have a size of \(16\times 16\). By definition of \(\mathbf{\psi}_{0}\) we are interested in the terms \(\mathds{N}_{14}^{-1}\) and \(\mathds{N}_{41}^{-1}\), which determine the transferred power \(P_{4\to 1}\) and \(P_{1\to 4}\). Obviously, there can only be a non-reciprocity if \(\mathds{N}^{-1}\neq\mathds{N}^{-1}{}^{t}\). From the equation for \(\mathds{N}\) it can be seen that due to the phase terms \(\tilde{\mathds{G}}\) and \(\tilde{\mathds{G}}^{*}\) in the second-order contribution, in general, we have \(\mathds{N}\neq\mathds{N}^{t}\), so that also \(P_{4\to 1}\neq P_{1\to 4}\) in general. Hence, the synthetic magnetic field results in an asymmetry for \(\mathds{N}\) and hence for \(\mathds{N}^{-1}\). For small \(\beta\) we can further simplify the inverse of \(\mathds{N}\) in Eq. (60): \[\mathds{N}^{-1}=\bigg{[}\mathds{M}_{0}+\frac{\beta^{2}}{4}\big{(}\tilde{\mathds{G}}\mathds{M}_{+1}^{-1}\tilde{\mathds{G}}^{*}+\tilde{\mathds{G}}^{*}\mathds{M}_{-1}^{-1}\tilde{\mathds{G}}\big{)}\bigg{]}^{-1}=\bigg{[}\mathds{1}+\frac{\beta^{2}}{4}\mathds{M}_{0}^{-1}\big{(}\tilde{\mathds{G}}\mathds{M}_{+1}^{-1}\tilde{\mathds{G}}^{*}+\tilde{\mathds{G}}^{*}\mathds{M}_{-1}^{-1}\tilde{\mathds{G}}\big{)}\bigg{]}^{-1}\mathds{M}_{0}^{-1}\approx\bigg{[}\mathds{1}-\frac{\beta^{2}}{4}\mathds{M}_{0}^{-1}\big{(}\tilde{\mathds{G}}\mathds{M}_{+1}^{-1}\tilde{\mathds{G}}^{*}+\tilde{\mathds{G}}^{*}\mathds{M}_{-1}^{-1}\tilde{\mathds{G}}\big{)}\bigg{]}\mathds{M}_{0}^{-1}. \tag{63}\] In Fig. 6 we show a comparison of the second-order results using Eq. (20) and Eq. (21) with the numerically exact results. As expected, the second-order expansion is only reliable for small enough values of \(\beta\), and the perturbation expression in Eq. (20) is valid for a larger range than the perturbative expression in Eq. (21). Now, we want to derive an analytical expression for the heat flux difference. Note that the heat flux difference for the forward and backward case is in our example given by \[P_{14}-P_{41}=4\hbar\omega_{0}n\kappa^{2}\Delta N_{14} \tag{21}\] where \(\Delta N_{14}=\mathds{N}_{14}^{-1}-\mathds{N}_{41}^{-1}\) and \(n\equiv n_{1}=n_{4}\). That means we can focus on \(\Delta N_{14}\) and add the prefactors later. Starting with the approximate expression in Eq.
(21) and making a Taylor expansion for \(g\ll\kappa\), we obtain with Mathematica for \(\Delta N_{14}\) the relatively long expression \[\Delta N_{14}\approx\frac{\beta^{2}g^{2}}{8|A_{1}|^{6}}\frac{g^{4}}{\kappa^{4}}\bigg{[}\frac{|A_{1}|^{2}\text{Im}(A_{1}^{2})}{A_{0}^{3}}\bigg{(}4\big{[}\text{Im}(\eta_{13}\eta_{12}^{*})+\text{Im}(\eta_{34}\eta_{24}^{*})\big{]}+3\big{[}\text{Im}(\eta_{23}\eta_{13}^{*})+\text{Im}(\eta_{24}\eta_{23}^{*})\big{]}+\text{Im}(\eta_{14}\eta_{13}^{*})+\text{Im}(\eta_{24}\eta_{14}^{*})\bigg{)}+\frac{\text{Im}(A_{1}^{3})}{A_{0}^{2}}\bigg{(}\text{Im}(\eta_{14}\eta_{12}^{*})+2\text{Im}(\eta_{24}\eta_{13}^{*})+\text{Im}(\eta_{34}\eta_{14}^{*})-3\text{Im}(\eta_{12}\eta_{23}^{*})-3\text{Im}(\eta_{23}\eta_{34}^{*})\bigg{)}+2\frac{\text{Im}(A_{1}^{4})}{|A_{1}|^{2}A_{0}}\bigg{(}\text{Im}(\eta_{24}\eta_{12}^{*})+\text{Im}(\eta_{34}\eta_{13}^{*})\bigg{)}-\frac{2\text{Im}(A_{1}^{5})}{|A_{1}|^{4}}\text{Im}(\eta_{12}\eta_{34}^{*})\bigg{]} \tag{22}\] where we have introduced \(A_{n}=2\kappa-\text{i}n\Omega\). From this expression it can be seen that there is a non-reciprocity only for complex \(\eta_{ij}\). It can further be observed that there seem to be plenty of combinations which give a non-reciprocal heat flux. In our four-oscillator example, resonator 3 is the only one with a nonzero phase \(\theta\equiv\theta_{3}\neq 0\) and resonators 1 and 4 are not modulated at all, so that \(\eta_{12}=-1\), \(\eta_{14}=0\), \(\eta_{24}=1\) and \(\eta_{34}=\text{e}^{\text{i}\theta}=-\eta_{13}\) and \(\eta_{23}=1-\text{e}^{\text{i}\theta}\). With these specific values we get \[\Delta N_{14}\approx\frac{\beta^{2}g^{6}}{4\kappa^{4}}\sin(\theta)\bigg{[}\frac{7\text{Im}(A_{1}^{2})}{|A_{1}|^{4}A_{0}^{3}}+\frac{4\text{Im}(A_{1}^{3})}{|A_{1}|^{6}A_{0}^{2}}-\frac{\text{Im}(A_{1}^{5})}{|A_{1}|^{10}}\bigg{]}. \tag{23}\] By adding the corresponding factors as defined in Eq. (21) and realizing that \(A_{0}=2\kappa\), we obtain the approximate analytical expression for the heat-flux difference in Eq. (53).
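For quick checks, the weak-coupling estimate of Eq. (53) can be evaluated directly; the following sketch tabulates the normalized flux difference for the parameters of the four-resonator example (the chosen values of \(\beta\) and \(\theta\) are illustrative).

```python
import numpy as np

omega0 = 1.69e14
kappa  = 0.013 * omega0
g      = 0.011 * kappa
Omega  = 0.05 * omega0
A      = 2.0 * kappa - 1j * Omega      # A = A_1 = 2*kappa - i*Omega

def dP_over_hw0ng(beta, theta):
    """(P_14 - P_41) / (hbar * omega_0 * n * g) according to Eq. (53)."""
    bracket = (7.0 / 8.0 * np.imag(A**2) / abs(A)**4
               + kappa * np.imag(A**3) / abs(A)**6
               - kappa**3 * np.imag(A**5) / abs(A)**10)
    return beta**2 * (g / kappa)**5 * bracket * np.sin(theta)

for beta in (0.01 * omega0, 0.05 * omega0, 0.10 * omega0):
    print(f"beta = {beta/omega0:.2f} w0: (P14-P41)/(hbar w0 n g) = {dP_over_hw0ng(beta, np.pi/2):.3e}")
```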
2302.13194
Observation of optical de Broglie-Mackinnon wave packets
de Broglie wave packets accompanying moving particles are dispersive and lack an intrinsic length scale dictated solely by the particle mass and velocity. Mackinnon proposed almost 45~years ago a localized non-dispersive wave packet constructed out of dispersive de Broglie phase waves via a Copernican inversion of the roles of particle and observer, whereupon an intrinsic length scale emerges by accounting for every possible observer -- rather than by introducing an \textit{ad hoc} uncertainty in the particle velocity. The de Broglie-Mackinnon (dBM) wave packet has nevertheless remained to date a theoretical entity. Here, we report the observation of optical dBM wave packets using paraxial space-time-coupled pulsed laser fields in presence of anomalous group-velocity dispersion. Crucially, the bandwidth of dBM wave packets has an upper limit that is compatible with the wave-packet group velocity and equivalent mass. In contrast to previously observed linear propagation-invariant wave packets whose spatio-temporal profiles at any axial plane are X-shaped, those for dBM wave packets are uniquely O-shaped (circularly symmetric with respect to space and time). By sculpting their spatio-temporal spectral structure, we produce dispersion-free dBM wave packets in the dispersive medium, observe their circularly symmetric spatio-temporal profiles, and tune the field parameters corresponding to particle mass and velocity that uniquely determine the wave-packet length scale.
Layton A. Hall, Ayman F. Abouraddy
2023-02-25T23:43:20Z
http://arxiv.org/abs/2302.13194v1
# Observation of optical de Broglie-Mackinnon wave packets ###### Abstract de Broglie wave packets accompanying moving particles are dispersive and lack an intrinsic length scale dictated solely by the particle mass and velocity. Mackinnon proposed almost 45 years ago a localized non-dispersive wave packet constructed out of dispersive de Broglie phase waves via a Copernican inversion of the roles of particle and observer, whereupon an intrinsic length scale emerges by accounting for every possible observer - rather than by introducing an _ad hoc_ uncertainty in the particle velocity. The de Broglie-Mackinnon (dBM) wave packet has nevertheless remained to date a theoretical entity. Here, we report the observation of optical dBM wave packets using paraxial space-time-coupled pulsed laser fields in presence of anomalous group-velocity dispersion. Crucially, the bandwidth of dBM wave packets has an upper limit that is compatible with the wave-packet group velocity and equivalent mass. In contrast to previously observed linear propagation-invariant wave packets whose spatio-temporal profiles at any axial plane are X-shaped, those for dBM wave packets are uniquely O-shaped (circularly symmetric with respect to space and time). By sculpting their spatio-temporal spectral structure, we produce dispersion-free dBM wave packets in the dispersive medium, observe their circularly symmetric spatio-temporal profiles, and tune the field parameters corresponding to particle mass and velocity that uniquely determine the wave-packet length scale. It is well-known that there are no dispersion-free wave-packet solutions to the \((1\!+\!1)\)D potential-free Schrodinger equation - with the sole exception of the Airy wave packet identified by Berry and Balasz in 1979 [1], which does _not_ travel at a fixed group velocity, but rather accelerates despite the absence of an external force [2]. The Airy wave packet has impacted all areas of wave physics (e.g., optics [3], acoustics [4], water waves [5], electron beams [6], and as a model for Dirac particles [7]). Less-known is that in the year preceding the discovery of the Airy wave packet, Mackinnon identified a non-dispersive \((1\!+\!1)\)D wave packet that travels at a _constant_ group velocity [8], but is constructed out of dispersive de Broglie 'phase waves' that accompany the motion of a massive particle and are solutions to the Klein-Gordon equation. de Broglie had originally demonstrated that the group velocity \(\widetilde{v}\) of a wave packet constructed of phase-waves is equal to the particle velocity \(v\)[9]. However, localized de Broglie wave packets are dispersive, as are Schrodinger wave packets [10]. Moreover, because de Broglie wave packets necessitate introducing an _ad hoc_ uncertainty in the particle velocity [9], and there is no upper limit on the exploitable bandwidth, such wave packets lack an _intrinsic_ length scale (i.e., a scale uniquely determined by the particle mass and velocity). Through a Copernican inversion of the roles of particle and observer, Mackinnon constructed out of dispersive de Broglie phase waves a _non-dispersive_ wave packet [8] - which we refer to henceforth as the de Broglie-Mackinnon (dBM) wave packet. 
Instead of introducing an _ad hoc_ uncertainty into the particle velocity from the perspective of a privileged reference frame, Mackinnon suggested accounting for all possible observers, who cooperatively report observations made in their reference frames to a single agreed-upon frame in which Lorentz contraction and time dilation are corrected for [8]. Besides retaining the salutary features of conventional de Broglie wave packets, Mackinnon's construction unveiled an intrinsic length scale for the dBM wave packet determined solely by the particle mass and velocity. However, despite the clear algorithmic process for constructing the dBM wave packet, it is _not_ a solution to the Klein-Gordon equation [8], and is instead constructed only epistemologically in the selected reference frame. As such, dBM wave packets have yet to be realized in any physical wave. Nevertheless, it has been recognized that the \((1\!+\!1)\)D dBM wave packet can be mapped to physical solutions of the optical wave equation by first enlarging the field dimensionality to \((2\!+\!1)\)D, which allows introducing angular dispersion [11; 12]. This procedure enables realizing the dBM dispersion relationship for propagation along the optical axis in the initial reduced-dimensionality \((1\!+\!1)\)D space [13]. However, observing optical dBM wave packets in free space faces insurmountable practical difficulties [14; 15]. Specifically, such wave packets are produced by relativistic optical dipoles and are observed by stationary, coherent field detectors that nevertheless fully encircle the moving dipole. We investigate here a different strategy that makes use of the unique characteristics of optical-wave propagation in the anomalous group-velocity dispersion (GVD) regime [16] to produce paraxial dBM wave packets. In this conception, an optical dBM wave packet is a particular realization of so-called'space-time' (ST) wave packets [17; 18; 19; 20; 15] in dispersive media [21; 22; 23; 24; 25; 26; 27]. In general, ST wave packets are pulsed optical beams whose unique characteristics (e.g., tunable group velocity [28] and anomalous refraction [29]) stem from their spatio-temporal spectral structure rather than their particular spatial or temporal profiles. Recent advances in the synthesis of ST wave packets make them a convenient platform for producing a wide variety of structured pulsed fields [15], including dBM wave packets. Here, we provide unambiguous observations of optical dBM wave packets in presence of anomalous GVD. Starting with generic femtosecond pulses, we make use of a universal angular-dispersion synthesizer [30] to construct spatio-temporally structured optical fields in which the spatial and temporal degrees-of-freedom are no longer separable. Critically, the association between the propagation angle and wavelength is two-to-one rather than one-to-one as in conventional tilted pulse fronts [11]. This feature allows folding the spatio-temporal spectrum back on itself, thereby guaranteeing the paraxiality of the synthesized dBM wave packets. Consequently, these wave packets retain in the medium all the characteristic features of their free-space counterparts while circumventing the above-mentioned difficulties. Such space-time-coupled wave packets are dispersive in free space, but become propagation-invariant once coupled to a medium in the anomalous-GVD regime, where they travel at a tunable group velocity \(\widetilde{v}\). 
Although all previously observed linear, propagation-invariant wave packets have at a fixed axial plane been either X-shaped [31; 32; 15; 33] or separable [34] with respect to the transverse coordinate and time, the spatio-temporal profiles of dBM wave packets are - in contrast - circularly symmetric (O-shaped). In addition to verifying this long-theorized O-shaped spatio-temporal structure [23; 24; 25], we confirm the impact of the two identifying parameters (equivalent to particle mass and velocity) on the bandwidth and length scale of the non-dispersive dBM wave packets. Propagation invariance in the dispersive medium constrains the maximum bandwidth (minimum wave-packet length) according to these selected parameters. Finally, in contrast to Airy wave packets that are the unique non-dispersive solution to the Schrodinger equation, the axial profile of dBM wave packets can be varied almost arbitrarily, which we confirm by modulating their spatio-temporal spectral phase distribution. These results may pave the way to optical tests of the solutions of the Klein-Gordon equation for massive particles. ## Results ### Theory of de Broglie wave packets de Broglie posited two distinct entities accompanying massive particles: an _internal_ 'clock' and an _external_ 'phase wave' [35; 36]. For a particle of rest mass \(m_{\text{o}}\) whose energy is expressed as \(E_{\text{o}}\)=\(m_{\text{o}}c^{2}\)=\(\hbar\omega_{\text{o}}\), the internal clock and the infinite-wavelength phase wave coincide at the same de Broglie frequency \(\omega_{\text{o}}\) in the particle's rest frame [Fig. 1(a)]; here \(c\) is the speed of light in vacuum, and \(\hbar\) is the modified Planck constant. When the particle moves at a velocity \(v\), the frequencies observed in the rest frame diverge: the internal frequency drops to \(\omega\)=\(\omega_{\text{o}}\sqrt{1-\beta_{v}^{2}}\) whereas the phase-wave frequency increases to \(\omega\)=\(\omega_{\text{o}}/\sqrt{1-\beta_{v}^{2}}\) and takes on a finite wavelength \(\lambda\), where \(\beta_{v}\)=\(\frac{v}{c}\) [Fig. 1(b)]. The wave number \(k\)=\(\frac{2\pi}{\lambda}\) for the phase wave is determined by the de Broglie dispersion relationship \(\omega^{2}\)=\(\omega_{\text{o}}^{2}\)+\(c^{2}k^{2}\) [Fig. 1(c)], so that it is a solution to the Klein-Gordon equation. Because de Broglie phase waves are extended, a particle with a well-defined velocity cannot be localized. Instead, spatially localizing the particle requires introducing an _ad hoc_ uncertainty in the particle velocity (a spread from \(v\) to \(v\)+\(\Delta v\)) to induce a bandwidth \(\Delta\omega\) (from \(\omega_{\text{c}}\) to \(\omega_{\text{c}}\)+\(\Delta\omega\)), or \(\Delta k\) (from \(k_{\text{c}}\) to \(k_{\text{c}}\)+\(\Delta k\)) [8; 9], thus resulting in a finite-width wave packet that is also a solution to the Klein-Gordon equation [Fig. 1(c)]. The wave-packet _group velocity_\(\widetilde{v}\)=\(1\big{/}\frac{dk}{d\omega}\big{|}_{\omega_{\text{c}}}\)=\(v\) is equal to the particle velocity, whereas its phase velocity is \(v_{\text{ph}}\)=\(\frac{\omega}{k}\)=\(\frac{c^{2}}{v}\) (\(v_{\text{ph}}\widetilde{v}\)=\(c^{2}\); see Methods). However, de Broglie wave packets are dispersive \(\frac{d\widetilde{v}}{d\omega}\) \(\neq\)0. Moreover, because there is no upper limit on the exploitable bandwidth [Fig. 
1(c)], de Broglie wave packets lack an intrinsic length scale; that is, there is no _minimum_ wave-packet length that is uniquely identified by the particle parameters (mass \(m_{\text{o}}\) and velocity \(v\)). ### Non-dispersive de Broglie-Mackinnon (dBM) wave packets Mackinnon proposed an altogether different conception for constructing localized _non-dispersive_ wave packets out of de Broglie phase waves that jettisons the need for introducing an _ad hoc_ uncertainty in particle velocity to localize it. Key to this proposal is a Copernican inversion of the roles of particle and observer. Rather than a single privileged observer associated with the rest frame in Fig. 1(c), Mackinnon considered a continuum of potential observers traveling at physically accessible velocities (from \(-c\) to \(c\)). The wave-packet bandwidth \(\Delta k\) that is established in a particular reference frame is a result of the spread in the particle velocity as observed in all these possible frames. Consequently, the particle can be localized, and a unique wave-packet length scale identified, even when its velocity is well-defined. The physical setting envisioned by Mackinnon is depicted in Fig. 1(d), where the particle moves at a velocity \(v\) and an observer moves at \(u\), both with respect to a common rest frame in which the dBM wave packet is constructed. Each potential observer records a different phase-wave frequency and wavelength. The crucial step is that _all_ potential observers travelling at velocities \(u\) ranging from \(-c\) to \(c\) report their observations to the selected rest frame. These phase waves are superposed in this frame - after accounting for Lorentz contraction and time dilation (Methods) - to yield a wave packet uniquely identified by the particle rest mass \(m_{\rm o}\) and velocity \(v\). Consider first the simple scenario where the particle is at rest with respect to the selected frame (\(v{=}0\)). Each observer reports to the common rest frame a frequency \(\omega^{\prime}{=}\omega_{\rm o}/\sqrt{1-\beta_{u}^{2}}\) and a wave number \(k^{\prime}{=}{-}k_{\rm o}\beta_{u}/\sqrt{1-\beta_{u}^{2}}\), where \(\beta_{u}{=}\frac{u}{c}\). Accounting for time dilation results in \(\omega^{\prime}{\rightarrow}\omega{=}\omega_{\rm o}\), and accounting for Lorentz contraction produces \(k^{\prime}{\rightarrow}k{=}{-}k_{\rm o}\beta_{u}\). Therefore, the frequency in the rest frame based on the recordings of _all_ the observers is \(\omega{=}\omega_{\rm o}\), just as in the case of a conventional de Broglie phase wave, but the wave number now extends over the range from \(-k_{\rm o}\) to \(k_{\rm o}\) as the observer velocity \(u\) ranges from \(c\) to \(-c\) [Fig. 1(e)]. In other words, the observer velocity \(u\) serves as an internal parameter that is swept to establish a new dispersion relationship whose slope is zero, thus indicating a particle at rest \(\widetilde{v}{=}v{=}0\)[8, 13]. The spectral representation of the support domain for this wave packet is a horizontal line \(\omega{=}\omega_{\rm o}\) in \((k,\frac{\omega}{c})\)-space delimited by the two light-lines \(k{=}\pm\frac{\omega}{c}\) [Fig. 1(e)]. In contradistinction to conventional de Broglie wave packets, a physically motivated length scale emerges for the dBM wave packet. The maximum spatial bandwidth is \(\Delta k{=}2k_{\rm o}\), which corresponds to a minimum wave-packet length scale of \(L_{\rm min}{\sim}\frac{\lambda_{\rm o}}{2}\), where \(\lambda_{\rm o}{=}\frac{2\pi}{k_{\rm o}}\). 
This can be viewed as an 'optical theorem', whereby the dBM wave packet for a stationary particle cannot be spatially localized below the associated de Broglie wavelength \(\lambda_{\rm o}\). Taking an equal-weight superposition across all the wave numbers, the dBM wave packet is \(\psi(z;t)\propto e^{-i\omega_{\rm o}t}{\rm sinc}(\frac{\Delta k}{\pi}z)\), where \({\rm sinc}(x){=}\frac{\sin\pi x}{\pi x}\)[8]. A similar procedure can be followed when \(v\neq 0\), whereupon the frequency and wave number in the selected reference frame are \(\omega{=}\omega_{\rm o}{(1-\beta_{v}\beta_{u})}/\sqrt{1-\beta_{v}^{2}}\) and \(k{=}k_{\rm o}{(\beta_{v}-\beta_{u})}/\sqrt{1-\beta_{v}^{2}}\), respectively (Methods). Because \(v\) is fixed whereas \(u\) extends from \(-c\) to \(c\), a linear dispersion relationship between \(\omega\) and \(k\) is established, \(k{=}\frac{1}{\beta_{v}}(\frac{\omega}{c}-\frac{k_{\rm o}^{2}}{k_{1}})\), where \(k_{1}{=}k_{\rm o}/\sqrt{1-\beta_{v}^{2}}\). The slope of the dBM dispersion relationship indicates that \(\widetilde{v}{=}v\) as in conventional de Broglie wave packets, but the dBM wave packet is now _non-dispersive_, \(\frac{d\widetilde{v}}{d\omega}{=}0\) [Fig. 1(f)]. The limits on the spatial and temporal bandwidths for the dBM wave packet are \(\Delta k{=}2k_{1}\) and \(\frac{\Delta\omega}{c}{=}\beta_{v}\Delta k\), respectively, leading to a reduced characteristic length scale \(L_{\rm min}{\sim}\frac{\lambda_{\rm o}}{2}\sqrt{1-\beta_{v}^{2}}\) as a manifestation of Lorentz contraction; a faster particle is more tightly localized. By assigning equal complex amplitudes to all the phase waves associated with this moving particle, the propagation-invariant dBM wave packet is \(\psi(z;t)\propto e^{i\beta_{v}\Delta k(z-\widetilde{v}t)}{\rm sinc}(\frac{ \Delta k}{\pi}(z-\widetilde{v}t))\). Crucially, unlike conventional de Broglie wave packets, the dBM wave packet is _not_ a solution to the Klein-Gordon equation, although a modified wave equation can perhaps be constructed for it [8]. ### Optical de Broglie-Mackinnon wave packets in free space Despite their intrinsic interest from a fundamental point of view, dBM wave packets have remained to date theoretical entities. It has nevertheless been recognized that _optical_ waves in free space may provide a platform for their construction [13; 14]. Because \((1+1)\)D optical waves in free space are dispersion-free (\(k{=}\frac{\omega}{c}\) and \(v_{\rm ph}{=}\widetilde{v}{=}c\)), producing optical dBM wave packets requires first adding a transverse coordinate \(x\) to enlarge the field dimensionality to \((2+1)\)D. The dispersion relationship thus becomes \(k_{x}^{2}+k_{z}^{2}{=}(\frac{\omega}{c})^{2}\), which represents the surface of a 'light-cone' [15; 37]; here \(k_{x}\) and \(k_{z}\) are the transverse and longitudinal components of the wave vector along \(x\) and \(z\), respectively. The spectral support of any optical field corresponds to some region on the light-cone surface [Fig. 2(a)]. For a fixed value of \(k_{x}{=}\pm\frac{\omega_{\rm o}}{c}\), we retrieve the axial dispersion relationship for de Broglie phase waves \(\omega^{2}{=}\omega_{\rm o}^{2}+c^{2}k_{z}^{2}\). A convenient parametrization of the field makes use of the propagation angle \(\varphi(\omega)\) with respect to the \(z\)-axis for the plane wave at a frequency \(\omega\), whereupon \(k_{x}(\omega){=}\frac{\omega}{c}\sin\varphi(\omega)\) and \(k_{z}(\omega){=}\frac{\omega}{c}\cos\varphi(\omega)\). 
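Mackinnon's construction is straightforward to verify numerically: superpose the phase waves reported by observers with velocities \(u\in(-c,c)\), here with equal weights per observer velocity (one simple, assumed choice), and check that the resulting envelope translates rigidly at \(\widetilde{v}=v\) without spreading. A minimal sketch:

```python
import numpy as np

c, omega_o = 1.0, 1.0                    # natural units: c = 1, de Broglie frequency omega_o = 1
k_o = omega_o / c
beta_v = 0.6                             # particle velocity v = 0.6 c

beta_u = np.linspace(-0.999, 0.999, 2001)                         # observer velocities u/c
k     = k_o * (beta_v - beta_u) / np.sqrt(1 - beta_v**2)          # reported wave numbers
omega = omega_o * (1 - beta_v * beta_u) / np.sqrt(1 - beta_v**2)  # reported frequencies

z = np.linspace(-40.0, 40.0, 801)
E = np.exp(1j * np.outer(z, k))                                   # phase factors exp(i k z)
w = np.full(beta_u.size, beta_u[1] - beta_u[0])                   # equal weight per observer

def packet(t):
    return E @ (w * np.exp(-1j * omega * t))                      # superposed phase waves at time t

psi0, psi1 = packet(0.0), packet(25.0)
peak_shift = z[np.argmax(np.abs(psi1))] - z[np.argmax(np.abs(psi0))]
print("peak moved at", peak_shift / 25.0, "c  (expected", beta_v, "c)")
print("peak amplitude preserved:", np.isclose(np.abs(psi1).max(), np.abs(psi0).max(), rtol=1e-3))
```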
_Angular dispersion_ is thus introduced into the \((2+1)\)D field [11; 12], and its spectral support on the light-cone surface is a one-dimensional (1D) trajectory. We take _optical_ dBM wave packets to be those whose axial dispersion relationship \(\omega(k_{z})\) conforms to that of a dBM wave packet. This requires that the projection of the spectral support onto the \((k_{z},\frac{\omega}{c})\)-plane be linear and delimited by the light-lines \(k_{z}{=}\pm\frac{\omega}{c}\). Indeed, the spectral projections onto the \((k_{z},\frac{\omega}{c})\)-plane in Fig. 2(a,b) coincide with those in Fig. 1(e,f). Consider first a monochromatic field \(\omega{=}\omega_{\rm o}\) whose spectral support is the circle at the intersection of the light-cone with a horizontal iso-frequency plane [Fig. 2(a)]. This monochromatic field comprises plane waves of the same frequency \(\omega_{\rm o}\) that travel at angles \(\varphi\) extending from 0 to \(2\pi\), whose axial wave numbers are \(k_{z}(\varphi){=}\pm\sqrt{k_{\rm o}^{2}{-}k_{x}^{2}}{=}k_{\rm o}\cos\varphi\) and extend from \(-k_{\rm o}\) to \(k_{\rm o}\). This optical wave packet [Fig. 2(a)] corresponds to the dBM wave packet for a particle in its rest frame [Fig. 1(e)], and \(\varphi\) serves as the new internal parameter to be swept in order to produce the targeted dBM dispersion relationship, corresponding to the observer velocity \(u\) in Fig. 1(e). By setting the spectral amplitudes equal for all the plane-wave components, we obtain \(\psi(x,z;t)\propto e^{-i\omega_{\rm o}t}{\rm sinc}(\frac{\Delta k_{z}}{\pi} \sqrt{x^{2}{+}z^{2}})\), where \(\Delta k_{z}{=}2k_{\rm o}\) [Fig. 2(a)]. Such a wave packet can be produced by a stationary, monochromatic planar dipole placed at the origin of the \((x,z)\)-plane. Observing this optical field requires coherent field detectors arranged around the \(2\pi\) angle subtended by the dipole, and then communicating the recorded measurements to a central station. This procedure is therefore _not_ dissimilar in principle from that envisioned by Mackinnon for the dBM wave packet associated with a stationary particle, in which the measurements recorded by observers traveling at different velocities are communicated to the common rest frame [Fig. 1(d)]. When the dipole moves at a velocity \(v\) along the \(z\)-axis with respect to stationary detectors encircling it, each constituent plane-wave undergoes a different Doppler shift in the rest frame of the detectors. The field still comprises plane waves travelling at angles \(\varphi\) extending from 0 to \(2\pi\), but each plane wave now has a _different_ frequency \(\omega\). Nevertheless, the new spectral support for the dBM wave packet on the light-cone is related to that for the stationary monochromatic dipole. Indeed, the Lorentz transformation associated with the relative motion between the source and detectors tilts the horizontal iso-frequency spectral plane in Fig. 2(a) by an angle \(\theta\) with respect to the \(k_{z}\)-axis as shown in Fig. 2(b), where tan\(\theta{=}\beta_{v}\)[13; 38; 39; 40], thus yielding a tilted ellipse whose projection onto the \((k_{x},\frac{\omega}{c})\) is: \[\frac{k_{x}^{2}}{k_{\rm o}^{2}}+\frac{(\omega-ck_{1})^{2}}{(\Delta\omega/2)^ {2}}{=}1. 
\tag{1}\] The spectral projection onto the \((k_{z},\frac{\omega}{c})\)-plane is now the line \(k_{z}{=}k_{+}+\frac{\omega-\omega_{+}}{\bar{v}}{=}\frac{1}{\beta_{v}}(\frac{ \omega}{c}-\frac{k_{\rm o}^{2}}{k_{1}})\), where \(\widetilde{v}{=}c\)tan\(\theta{=}v\) is the wave-packet group velocity along \(z\), \(k_{+}{=}\frac{\omega_{+}}{c}{=}k_{\rm o}\sqrt{\frac{1+\beta_{v}}{1-\beta_{v}}}\), and \(k_{1}{=}k_{\rm o}/\sqrt{1-\beta_{v}^{2}}\). The spatial and temporal bandwidths are related via \(\frac{\Delta\omega}{c}{=}\beta_{v}\Delta k_{z}\), where \(\Delta k_{z}{=}2k_{1}\). Each plane wave travels at a different direction in the \((x,z)\)-plane in such a way that their _axial_ wave numbers \(k_{z}\) reproduce the dBM dispersion relationship [compare Fig. 1(f) to Fig. 2(b)]. By setting the complex spectral amplitudes constant for all frequencies, we obtain the dBM wave packet (with \(\widetilde{v}\)\(<\)\(c\)): \[\psi(x,z;t)\)\(\propto\)\(e^{i\beta_{\text{p}}\Delta k_{z}(z-\widetilde{v}t)}\)sinc\(\left(\frac{\Delta k_{z}}{\pi}\sqrt{x^{2}+(z-\widetilde{v}t)^{2}} \right),\) (2) Two parameters uniquely identify the optical dBM wave packet: the group velocity \(\widetilde{v}\) (corresponding to the particle velocity) and the wave number \(k_{\text{o}}\) (corresponding to the particle mass). Furthermore, the signature of the dBM wave packet in Eq. 2 is its circularly symmetric spatio-temporal profile in \((x,t)\)-space in any axial plane \(z\). In contrast, all other propagation-invariant wave packets that have been observed in free space are X-shaped [31; 32; 33; 15; 22] and are _not_ circularly symmetric. Indeed, truncating the spectrum of the optical dBM wave packet obstructs the formation of the circularly symmetric profile and gives rise instead to the more familiar X-shaped counterpart [14; 15]. The O-shaped spatio-temporal profile as indicated by Eq. 2 can be observed only when the full bandwidth - delimited by the light-lines - is included. The field in the \((x,z)\)-plane recorded by stationary detectors encircling the dipole takes the form shown in Fig. 2(b), as pointed out recently in a thought experiment by Wilczek [41]. Despite the conceptual simplicity of this optical scheme for producing dBM wave packets, it nevertheless faces obvious experimental challenges. Encircling an optical dipole moving at a relativistic speed with stationary detectors is far from practical realizability. The more realistic configuration in which the detectors are restricted to a small angular range within the paraxial regime centered on the \(z\)-axis truncates the recorded field and precludes observing of the O-shaped spatio-temporal profile [14; 15]. For these reasons, it is not expected that the O-shaped dBM wave packet can be observed using spatio-temporally structured optical fields in free space. ### Optical de Broglie-Mackinnon wave packets in a dispersive medium The necessity of including the entire bandwidth delimited by the intersection of the dBM dispersion relationship with the free-space light-cone [Fig. 2(a-b)] presents insurmountable experimental obstacles. Producing paraxial dBM wave packets necessitates confining the spectrum to a narrow range of values of \(k_{z}\) centered at a value \(k_{z}\)\(\sim\)\(k_{\text{o}}\)\(>\)0. Crucially, the linear spatio-temporal spectrum projected onto the \((k_{z},\frac{\omega}{c})\)-plane must remain delimited at both ends by the light-line, to produce the circularly symmetric spatio-temporal wave-packet profile. 
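The free-space construction can likewise be checked by brute force: place plane waves on the tilted-ellipse spectral support of Fig. 2(b) and superpose them. In the sketch below the spectral amplitudes are taken uniform in the parametrization angle (an illustrative choice; the detailed envelope shape depends on the weighting). Because the projection onto the \((k_{z},\frac{\omega}{c})\)-plane is exactly linear, the envelope in the co-moving coordinates \((x,z-\widetilde{v}t)\) does not change upon propagation; plotting it shows a closed O-shaped spot rather than an X shape.

```python
import numpy as np

c, k_o = 1.0, 1.0
beta_v = 0.5
k_1 = k_o / np.sqrt(1.0 - beta_v**2)

phi   = np.linspace(0.0, 2.0 * np.pi, 1200, endpoint=False)   # parametrizes the ellipse
k_x   = k_o * np.sin(phi)
k_z   = k_1 * (beta_v + np.cos(phi))               # linear projection onto the (k_z, omega/c)-plane
omega = c * k_1 * (1.0 + beta_v * np.cos(phi))     # each wave lies on the light-cone

x    = np.linspace(-15.0, 15.0, 181)
zeta = np.linspace(-15.0, 15.0, 181)               # co-moving coordinate zeta = z - v t

def envelope(t):
    field = np.zeros((x.size, zeta.size), dtype=complex)
    z = zeta[None, :] + beta_v * c * t
    for kx, kz, w in zip(k_x, k_z, omega):         # equal weight per angle phi (illustrative)
        field += np.exp(1j * (kx * x[:, None] + kz * z - w * t))
    return np.abs(field)

I0, I1 = envelope(0.0), envelope(50.0)
print("light-cone satisfied:", np.allclose(k_x**2 + k_z**2, (omega / c)**2))
print("propagation-invariant:", np.max(np.abs(I1 - I0)) / I0.max() < 1e-9)
```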
Clearly these requirements cannot be met in free space. Nevertheless, this challenge can be tackled by exploiting the unique features of optical-wave propagation in the presence of anomalous GVD. Specifically, the light-cone structure is modified in presence of anomalous GVD so that the curvature of the light-line has the same sign as that of the de Broglie dispersion relationship [Fig. 2(c)]. In this case, imposing the characteristically linear dBM dispersion relationship produces a spectral support domain on the dispersive light-cone surface that satisfies all the above-listed requirements: (1) \(k_{z}>0\) is maintained throughout the entire span of propagation angles \(\varphi(\omega)\); (2) the field simultaneously remains within the paraxial regime; and (3) the spectrum is delimited at both ends by the light-line [Fig. 2(c)], thus yielding a wave packet having a circularly symmetric spatio-temporal profile. The spectral support is in the form of an ellipse at the intersection of the dispersive light-cone with a tilted spectral plane. The center of this ellipse is displaced to a large value of \(k_{z}\), and the spectral projection onto the \((k_{z},\frac{\omega}{c})\)-plane is a line making an angle \(\theta\) with the \(k_{z}\)-axis. The resulting wave packet is propagation-invariant in the dispersive medium and travels at a velocity \(\widetilde{v}=c\tan\theta\) independently of the physical parameters of the dispersive medium. In the anomalous-GVD regime, the wave number is given by \(k(\omega)=n(\omega)\omega/c=k(\omega_{\rm o}+\Omega)\approx n_{\rm m}k_{\rm o}+\frac{\Omega}{\widetilde{v}_{\rm m}}-\frac{1}{2}\left|k_{2\rm m}\right|\Omega^{2}+\cdots\), where \(n(\omega)\) is the refractive index, and the following quantities are all evaluated at \(\omega=\omega_{\rm o}\): \(n_{\rm m}=n(\omega_{\rm o})\) is the refractive index, \(\widetilde{v}_{\rm m}=1\big{/}\frac{dk}{d\omega}\big{|}_{\omega_{\rm o}}\) is the group velocity for a plane-wave pulse in the medium, and \(k_{2\rm m}=\frac{d^{2}k}{d\omega^{2}}\big{|}_{\omega_{\rm o}}=-\left|k_{2\rm m}\right|\) is the negative-valued anomalous GVD coefficient [16]. The dispersion relationship in the medium \(k_{x}^{2}+k_{z}^{2}=k^{2}\) corresponds geometrically to the surface of the modified dispersive light-cone in Fig. 2(c). Similarly to the free-space scenario, we impose a spectral constraint of the form \(k_{z}=n_{\rm m}k_{\rm o}+\frac{\Omega}{\widetilde{v}}=\frac{1}{\beta_{v}}\big{\{}\frac{\omega}{c}-k_{\rm o}(1-n_{\rm m}\beta_{v})\big{\}}\) in the medium, where \(\Omega=\omega-\omega_{\rm o}\) and \(\widetilde{v}=c\tan\theta\) is the group velocity of the wave packet [Fig. 2(c)]. The wave-packet spectrum as defined by this constraint is delimited by the light-line at its two ends, both located however in the range \(k_{z}>0\), in contrast to the previous scenarios depicted in Fig. 1(e,f) and Fig. 2(a,b); see Methods.
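A short calculation makes the geometry concrete: imposing the constraint \(k_{z}=n_{\rm m}k_{\rm o}+\Omega/\widetilde{v}\) on the quadratic dispersion of the medium yields the two light-line crossings that delimit the spectrum and the transverse wave numbers \(k_{x}(\omega)\) in between. In the sketch below, \(n_{\rm m}\) and \(\widetilde{v}_{\rm m}\) are rough placeholders (their values are not quoted here), while \(\lambda_{\rm o}\), \(k_{2\rm m}\), and \(\widetilde{v}\) follow the experimental section; the output only illustrates that \(k_{z}\) remains positive and paraxial, and that the bandwidth ceiling shrinks as \(\widetilde{v}\) approaches \(\widetilde{v}_{\rm m}\), on the order of the measured values reported below.

```python
import numpy as np

c       = 2.998e8
lam_o   = 1054e-9                       # wavelength at which the constraint meets the light-line
omega_o = 2 * np.pi * c / lam_o
k_o     = omega_o / c
n_m, v_m = 1.0, 2.998e8                 # placeholder medium index and plane-wave group velocity
k2m_abs = 500e-30 / 1e-3                # |k_2m| = 500 fs^2/mm, converted to s^2/m

def support(v_g):
    """Light-line-delimited spectral support for a target group velocity v_g."""
    Omega_end = -2.0 * (1.0 / v_g - 1.0 / v_m) / k2m_abs          # second light-line crossing
    Omega = np.linspace(Omega_end, 0.0, 2001)
    k_line = n_m * k_o + Omega / v_m - 0.5 * k2m_abs * Omega**2   # dispersive light-line (k_x = 0)
    k_z    = n_m * k_o + Omega / v_g                              # spectral-plane constraint
    k_x    = np.sqrt(np.clip(k_line**2 - k_z**2, 0.0, None))      # from k_x^2 + k_z^2 = k(omega)^2
    return abs(Omega_end), k_x, k_z

for v_g in (0.9975 * c, 0.9985 * c, 0.999 * c):
    d_omega, k_x, k_z = support(v_g)
    d_lambda = lam_o**2 * d_omega / (2 * np.pi * c)
    print(f"v = {v_g/c:.4f}c:  max bandwidth ~ {d_lambda*1e9:4.1f} nm, "
          f"k_x,max = {k_x.max()*1e-6:.3f} rad/um,  k_z > 0: {k_z.min() > 0}")
```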
The spectral projections onto the \((k_{x},\frac{\omega}{c})\) and \((k_{x}\),\(k_{z})\) planes of the spectral support on the dispersive light-cone are ellipses (Methods): \[\frac{k_{x}^{2}}{k_{x,\rm max}^{2}}+\frac{(\omega-\omega_{\rm c})^{2}}{(\Delta \omega/2)^{2}}\,{=}\,1,\quad\frac{k_{x}^{2}}{k_{x,\rm max}^{2}}+\frac{(k_{z} \!-\!k_{\rm c})^{2}}{(\Delta k_{z}/2)^{2}}\,{=}\,1, \tag{3}\] where the temporal bandwidth is \(\frac{\Delta\omega}{c}\)\(=\)\(2\frac{k_{\rm o}}{\sigma_{\rm m}}\)\(\frac{1-\beta_{v}^{\prime}}{\beta_{v}^{2}}\)\(=\)\(\beta_{v}\Delta k_{z}\), \(\sigma_{\rm m}\)\(=\)\(c\omega_{\rm o}\)\(\left|k_{2\rm m}\right|\) is a dimensionless dispersion coefficient, \(\beta_{v}^{\prime}\)\(=\)\(\frac{\widetilde{v}}{\widetilde{v}_{\rm m}}\), \(k_{x,\rm max}\)\(=\)\(\frac{1}{2}\)\(\frac{\Delta\omega}{c}\)\(\sqrt{n_{\rm m}\sigma_{\rm m}}\), \(\omega_{\rm c}\)\(=\)\(\omega_{\rm o}\)\(-\)\(\Delta\omega/2\), \(k_{\rm c}\)\(=\)\(n_{\rm m}k_{\rm o}\)\(-\)\(\frac{\Delta k_{z}}{2}\), \(k_{x,\rm max}\)\(\ll\)\(n_{\rm m}k_{\rm o}\), \(\Delta k_{z}\)\(\ll\)\(n_{\rm m}k_{\rm o}\), and \(\Delta\omega\)\(\ll\)\(\omega_{\rm o}\). It is crucial to recognize that the ellipse projected onto the \((k_{x}\),\(k_{z})\)-plane does _not_ enclose the origin \((k_{x}\),\(k_{z})\)\(=\)\((0,0)\), but is rather displaced to a central value \(k_{\rm c}\)\(\gg\)\(\Delta k_{z}\). There fore, the optical field comprises plane-wave components that propagate only in the forward direction within a small angular range centered on the \(z\)-axis, and the field thus remains within the paraxial domain. Nevertheless, because the spectrum is delineated at both ends by the curved dispersive light-line, the resulting spatio-temporal profile is circularly symmetric in any axial plane \(z\). This wave packet in the dispersive medium thus satisfies all the above-listed desiderata for an optical dBM wave packet, but can be readily synthesized and observed in contrast to its free-space counterparts. One difficulty, however, arises from the form of \(\varphi(\omega)\) in the dispersive medium, which differs fundamentally from that in free space. Each frequency \(\omega\) in a free-space optical dBM wave packet is associated with two propagation angles \(\pm\varphi(\omega)\). However, each propagation angle \(\varphi\) is associated with a single frequency, so that \(|\varphi(\omega)|\) is one-to-one. In the optical dBM wave packet in the dispersive medium, each \(\omega\) is still associated with two propagation angles \(\pm\varphi(\omega)\); but \(\varphi(\omega)\) is now two-to-one, so that \(\varphi(\omega)\) is folded back on itself [Fig. 2(c)]. To synthesize such a field configuration, a synthesizer capable of sculpting \(\varphi(\omega)\) almost arbitrarily is required. ### Experimental confirmation **Setup.** To construct the optical dBM wave packet in free space from a generic pulsed beam in which the spatial and temporal degrees-of-freedom are uncoupled, we introduce angular dispersion by assigning to each wavelength \(\lambda\) a particular pair of angles \(\pm\varphi(\lambda)\), thereby coupling the spatial and temporal degrees-of-freedom. We carry out this task using a universal angular-dispersion synthesizer [30], in which a spatial light modulator (SLM) deflects each wavelength from a spectrally resolved laser pulse at prescribed angles, as illustrated in Fig. 3 (Methods). 
Because each wavelength \(\lambda\) is deflected at \(\varphi(\lambda)\) independently of all other wavelengths, \(\varphi(\lambda)\) need _not_ be one-to-one. Indeed, it can readily be a two-to-one mapping as required for paraxial optical dBM wave packets. The dBM wave packet is formed once all the wavelengths are recombined by a grating to reconstitute the pulsed field. The spatio-temporal spectrum of the synthesized wave packet is acquired by operating on the spectrally resolved field with a spatial Fourier transform and recording the intensity with a CCD camera. This measurement yields the spatio-temporal spectrum projected onto the \((k_{x},\lambda)\)-plane, from which we can obtain the spectral projection onto the \((k_{z},\lambda)\)-plane. The spatio-temporal envelope \(I(x;\tau)\) of the intensity profile at a fixed axial plane \(z\) is reconstructed in the frame travelling at \(\widetilde{v}\) (\(\tau\!\!=\!\!t\!-\!z/\widetilde{v}\)) via linear interferometry exploiting the procedure developed in Refs. [28; 29; 42] (Methods). The dispersive medium exploited in our measurements is formed of a pair of chirped Bragg mirrors providing an anomalous GVD coefficient of \(k_{\mathrm{2m}}{\approx}{-}500\) fs\({}^{2}\)/mm and \(\widetilde{v}_{\mathrm{m}}{\approx}c\) (Methods). **Measurement results.** We first verify the unique signature of dBM wave packets in presence of anomalous GVD; namely, the O-shaped spatio-temporal intensity profile at any axial plane after inculcating into the field the dBM dispersion relationship. In Fig. 4 we verify three sought-after features: (1) The closed elliptical spatio-temporal spectrum projected onto the (\(k_{x}\),\(\lambda\))-plane; (2) the linear spectral projection onto the (\(k_{z}\),\(\lambda\))-plane, indicating non-dispersive propagation in the dispersive medium; and (3) the circularly symmetric spatio-temporal intensity profile \(I(x;\tau)\) reconstructed at a fixed axial plane (\(z{=}30\) mm). In Fig. 4(a) we plot the measurements for an optical dBM wave packet having a group velocity \(\widetilde{v}{=}0.9975c\). The temporal bandwidth is constrained to a maximum value of \(\Delta\lambda{\approx}16\) nm, and the associated spatial bandwidth \(\Delta k_{x}{\approx}\) 0.03 rad/\(\mu\)m, thus resulting in a pulsewidth \(\Delta T{\approx}200\) fs at \(x{=}0\), and a spatial profile width \(\Delta x{\approx}38\)\(\mu\)m at \(\tau{=}0\). The spectral projection onto the (\(k_{z}\),\(\lambda\))-plane is delimited at both ends by the curved light-line of the dispersive medium. In other words, a larger bandwidth is _in_compatible at this group velocity with propagation invariance in the dispersive medium. Further increase in the bandwidth extends the spectral projection _below_ the dispersive light-line, which contributes to only evanescent field components. The measured spatio-temporal profile \(I(x;\tau)\) therefore has the smallest dimensions in space and time for a circularly symmetric dBM wave packet compatible with the selected group velocity in the medium. To the best of our knowledge, this is the first observation of an O-shaped spatio-temporal intensity profile for a dispersion-free wave packet in a linear dispersive medium. Previous realizations of dispersion-free ST wave packets in dispersive media (whether in the normal- or anomalous-GVD regimes) revealed X-shaped spatio-temporal profiles [27] similar to those observed in free space [28; 31; 43] or in non-dispersive dielectrics [29]. 
In these experiments, however, the wave packets were _not_ delimited spectrally by the dispersive-medium light-line, which is the prerequisite for the realization of O-shaped optical dBM wave packets. As mentioned earlier, two parameters characterize a dBM wave packet: the velocity \(v\) and the rest mass \(m_{\mathrm{o}}\). The corresponding variables associated with the optical dBM wave packet are \(\widetilde{v}\) and \(\lambda_{\mathrm{o}}\), which can both be readily tuned in our experimental arrangement by changing the functional dependence of \(\varphi\) on \(\lambda\). In this way we can vary the first parameter; namely, the group velocity \(\widetilde{v}\). Increasing the group velocity from \(\widetilde{v}{=}0.9975c\) [Fig. 4(a)] to \(\widetilde{v}{=}0.9985c\) [Fig. 4(b)] and then to \(\widetilde{v}\)=0.999\(c\) [Fig. 4(c)] reduces the maximum exploitable temporal bandwidth from \(\Delta\lambda\)\(\approx\)16 nm to \(\Delta\lambda\)\(\approx\)8 nm and \(\Delta\lambda\)\(\approx\)6 nm, respectively, while retaining the closed elliptic spectral projection onto the \((k_{x}\),\(\lambda)\)-plane, the linear spectral projection onto the \((k_{z}\),\(\lambda)\)-plane, and the associated O-shaped spatio-temporal profile \(I(x;\tau)\). The corresponding spatial bandwidths drop to \(\Delta k_{x}\)\(\approx\)0.023 rad\(/\)\(\mu\)m and \(\Delta k_{x}\)\(\approx\)0.017 rad\(/\)\(\mu\)m, respectively. In all three dBM wave packets in Fig. 4, we retain a fixed intersection with the dispersive light-line at \(\lambda_{\rm o}\)\(\approx\)1054 nm (corresponding to a fixed particle mass), such that reducing \(\widetilde{v}\) decreases the wavelength of the second intersection point. The second parameter, the wavelength \(\lambda_{\rm o}\) corresponding to particle rest mass \(m_{\rm o}\) for de Broglie phase waves, can also be readily tuned [Fig. 5]. Here, the maximum exploitable bandwidth changes as a result of shifting the value of \(\lambda_{\rm o}\) from \(\lambda_{\rm o}\)=1054 nm [Fig. 5(a)] where \(\Delta\lambda\)=16 nm, to \(\lambda_{\rm o}\)=1055 nm [Fig. 5(b)] where \(\Delta\lambda\)=14 nm, and then to \(\lambda_{\rm o}\)=1056 nm [Fig. 5(c)] where \(\Delta\lambda\)=12 nm. Once again, both the spatial and temporal widths of the circularly symmetric O-shaped profile in the \((x,t)\)-domain change accordingly. The Airy wave packet, as mentioned earlier, is the _unique_ non-dispersive solution to Schrodinger's equation - no other waveform will do [44]. Although Mackinnon obtained a particular'sinc'-function-shaped wave packet [8], this waveform is _not_ unique. Indeed, the sinc-function results from combining all the de Broglie phase waves with equal weights. However, dBM wave packets can take on in principle arbitrary waveforms by associating different magnitudes or phases with the plane-wave components constituting it. We confirm in Fig. 6 that the spatio-temporal profile \(I(x;\tau)\) of optical dBM wave packets can be modified while remaining propagation invariant in the dispersive medium. First, setting the complex spectral amplitudes equal along the elliptical spectral support, we obtain propagation-invariant circularly symmetric wave packets in the dispersive medium [Fig. 6(a)]. Truncating the ellipse and eliminating the plane wave components in the vicinity of \(k_{x}\)=0 disrupts the formation of the full circular profile, but the wave packet nevertheless propagates invariantly [Fig. 6(b)]. 
By introducing a \(\pi\)-step in the spectral phase along \(k_{x}\), a spatial null is formed along \(x\)=0 in the profile of the dBM wave packet [Fig. 6(c)], whereas introducing the \(\pi\)-phase-step along \(\lambda\) produces a temporal null along \(\tau\)=0 [Fig. 6(d)]. Finally, alternating the phases between 0 and \(\pi\) in the four quadrants of the spatio-temporal spectral plane \((k_{x}\),\(\lambda)\) produces spatial and temporal nulls along \(x\)=0 and \(\tau\)=0, respectively [Fig. 6(e)]. Despite such variations in their spatio-temporal profiles, all these optical dBM wave packets propagate invariantly in the dispersive medium. ## Discussion The rapidly evolving versatile techniques for synthesizing optical fields [15; 45] played a critical role in the realization of dBM wave packets as demonstrated here. This has helped confirm the theoretical proposal made by Mackinnon almost 45 years ago for constructing a non-dispersive wave packet from dispersive de Broglie phase waves [8]. Furthermore, the experimental procedure implemented here points to a general synthesis strategy that extends beyond the particular scenario of dBM wave packets. The overarching theme is that novel dispersion relationships for the axial propagation of a wave packet can be imposed by first adding another dimension to the space, and then exploiting the new dimension to tailor the dispersion relationship before spectral projection back onto the original reduced-dimensionality space. In the scenario studied here, we start with a \((1\!+\!1)\)D physical wave in which an axial dispersion relationship \(\omega(k_{z})\) is enforced by the dynamics of the wave equation. Increasing the dimensionality of the space from \((1\!+\!1)\)D to \((2\!+\!1)\)D by including a transverse coordinate \(x\) yields a new dispersion relationship \(\omega(k_{x}\),\(k_{z})\). In free space, optical wave packets are subject to the constraint \(\omega\!\!=\!\!ck_{z}\) in \((1\!+\!1)\)D and \(\omega(k_{x}\),\(k_{z})\!\!=\!\!c\sqrt{k_{x}^{2}\!+k_{z}^{2}}\) in \((2\!+\!1)\)D. Now, by judiciously associating each transverse wave number \(k_{x}\) with a particular axial wave number \(k_{z}\), a reduced-dimensional axial dispersion relationship \(\omega_{\text{red.}}(k_{z})\) is obtained: \(\omega(k_{x}\),\(k_{z})\!\!=\!\!\omega(k_{x}(k_{z}),k_{z})\!\!\mapsto\!\!\omega_{\text{ red.}}(k_{z})\), which can be engineered almost arbitrarily. In the experiment reported here, we employed this strategy to produce a linear dispersion relationship \(\omega(k_{z})\!\!=\!\!(k_{z}\!-\!k_{\text{o}})\widetilde{\upsilon}\) projected onto the \((k_{z},\frac{\omega}{c})\)-plane that deviates away from the light-line \(\omega\!\!=\!\!ck_{z}\). In presence of anomalous GVD, such a spatio-temporal spectrum is delimited at both ends by the curved light-line in the dispersive medium - thereby yielding the circular symmetric spatio-temporal profile characteristic of dBM wave packets. Here, the transverse wave number \(k_{x}\) played the role of the observer velocity \(u\) in the physical configuration envisioned by Mackinnon [Fig. 1(d)]. However, one may envision a variety of other scenarios that can be facilitated by this general strategy. For example, besides tuning the group velocity in free space, linear dispersive media, or nonlinear optical materials and structures, one may produce accelerating wave packets [46; 47; 48] whose group velocity changes with propagation in such media. 
These features have been recently predicted to produce a host of new phenomena related to two photon-emission [49] and relativistic optics [50; 51]. Intriguingly, the strategy employed here is not constrained to optical waves. Indeed, our approach to spatio-temporal structuring of the field is agnostic with respect to the physical substrate, and can be implemented in principle with acoustic waves, microwaves, surface plasmon polaritons [52], electron beams, neutron beams, or other massive particles. In all cases, an added spatial dimension can be exploited to override the intrinsic dispersion relationship of the particular wave phenomenon, thus producing novel propagation dynamics. The dimensionality of the \((2+1)\)D dBM wave packets synthesized here can be further extended to the full-dimensional \((3+1)\)D space of \((x,y,z;t)\) by including the second transverse coordinate \(y\). This can now be achieved in light of very recent progress in producing so-called 3D ST wave packets that are localized in all dimensions of \((3+1)\)D space [53; 54; 55]. Combining this new synthesis methodology with the procedure outlined here for producing dBM wave packets in the anomalous-GVD regime will yield spherically symmetric propagation-invariant pulsed field structures. Such field configurations provide a platform for exploring proposed topological structures associated with polarization (spin texture) [53]_without_ resorting to stereo-projection onto a 2D plane. Moreover, such spherically symmetric optical dBM wave packets are compatible with coupling to optical fibers and waveguides, thus enabling new opportunities in optical communications, optical signal processing, and nonlinear and quantum optics. Finally, the ideal spectral constraint underlying optical dBM wave packets implies an exact association between the spatial and temporal frequencies. Such idealized wave packets consequently have infinite energy [56]. In any realistic system, however, a spectral uncertainty is inevitably introduced into this association, resulting in a finite-energy wave packet traveling for a finite distance over which it is approximately invariant [57]. In our experiments, this spectral uncertainty arises from the finite spectral resolution of the diffraction grating employed (Fig. 3), which is estimated to be \(\approx\)16 pm, corresponding to a propagation distance of \(\approx\)32 m at a spectral tilt angle \(\theta\)\(=\) 44.99\({}^{\circ}\)[42].
2306.01067
Monte Carlo matrix-product-state approach to the false vacuum decay in the monitored quantum Ising chain
In this work we characterize the false vacuum decay in the ferromagnetic quantum Ising chain with a weak longitudinal field subject to continuous monitoring of the local magnetization. Initializing the system in a metastable state, the false vacuum, we study the competition between coherent dynamics, which tends to create resonant bubbles of the true vacuum, and measurements which induce heating and reduce the amount of quantum correlations. To this end we exploit a numerical approach based on the combination of matrix product states with stochastic quantum trajectories which allows for the simulation of the trajectory-resolved non-equilibrium dynamics of interacting many-body systems in the presence of continuous measurements. We show how the presence of measurements affects the false vacuum decay: at short times the departure from the local minimum is accelerated while at long times the system thermalizes to an infinite-temperature incoherent mixture. For large measurement rates the system enters a quantum Zeno regime. The false vacuum decay and the thermalization physics are characterized in terms of the magnetization, connected correlation function, and the trajectory-resolved entanglement entropy.
Jeff Maki, Anna Berti, Iacopo Carusotto, Alberto Biella
2023-06-01T18:16:22Z
http://arxiv.org/abs/2306.01067v3
**Monte Carlo matrix-product-state approach to the false vacuum decay in the monitored quantum Ising chain** ## Abstract In this work we characterize the false vacuum decay in the ferromagnetic quantum Ising chain with a weak longitudinal field subject to continuous monitoring of the local magnetization. Initializing the system in a metastable state, the false vacuum, we study the competition between coherent dynamics, which tends to create resonant bubbles of the true vacuum, and measurements which induce heating and reduce the amount of quantum correlations. To this end we exploit a numerical approach based on the combination of matrix product states with stochastic quantum trajectories which allows for the simulation of the trajectory-resolved non-equilibrium dynamics of interacting many-body systems in the presence of continuous measurements. We show how the presence of measurements affects the false vacuum decay: at short times the departure from the local minimum is accelerated while at long times the system thermalizes to an infinite-temperature incoherent mixture. For large measurement rates the system enters a quantum Zeno regime. The false vacuum decay and the thermalization physics are characterized in terms of the magnetization, connected correlation function, and the trajectory-resolved entanglement entropy. ###### Contents * 1 Introduction * 2 The model and the measurement scheme * 2.1 The quantum Ising chain and its false vacuum decay: a short review * 2.2 Continuous monitoring of the quantum Ising chain: stochastic quantum dynamics * 3 Simulation protocol with Monte Carlo matrix product states * 4 Results * 4.1 Magnetization and metastability of the false vacuum in the presence of measurements * 4.2 Heating and the emergence of the quantum Zeno regime * 4.3 Correlation functions * 4.4 Entanglement Entropy * 5 Conclusions * A Details on determining the FVD rate B Finite size effects C The melting of order in the transverse Ising model D Schrieffer-Wolff transformation and its application to the quantum Ising model D.1 Schrieffer-Wolff transformation for open quantum systems D.2 Application to the quantum Ising model ## 1 Introduction Metastability is a ubiquitous problem in physics. This phenomenon takes place whenever a system resides at a local minimum of the (free) energy landscape (also called the _false_ vacuum), which is not the the true ground state of the model (dubbed the _true_ vacuum). Classically, and at zero temperature, the system will remain in the false vacuum indefinitely. Thermal fluctuations, however, could enable its decay towards the ground state configuration of the system. When quantum effects are taken into account, the system can undergo quantum tunnelling, and in an energy conserving scenario, can nucleate a resonant bubble of the true vacuum. In both cases the dynamical departure from the false vacuum is known as the false vacuum decay (FVD). Examples of metastable systems include supercooled liquids [1], suspersaturated gases [2] and ferromagnets misaligned with respect to the magnetic field [3]. In all these examples the system is in the proximity of a first-order phase transition, but is found on the _wrong_ side of the associated hysteresis loop. Such a situation can naturally be achieved by quenching a system initially in thermodynamic equilibrium across a first-order phase transition. 
In this non-equilibrium state, the system needs to overcome or tunnel through a potential barrier in the free-energy in order to reach a more stable state (frozen water, condensed gas, a ferromagnet correctly aligned with the magnetic field). This transition generally occurs on very long time-scales, since the two vacua are associated with two macroscopically different configurations of the system. The system is said to be in a metastable state up until the equilibrium state is reached. For classical systems, the theory of metastability is well understood via statistical physics, where the FVD is entirely driven by thermal fluctuations. In quantum systems, both thermal and quantum flucutations can drive the FVD. The discussions of metastability driven primarily by quantum fluctuations were pioneered in the context of high-energy physics [4, 5] and cosmological inflation theory [6]. Such theories describe a scenario where our universe cooled down into a metastable minimum, and could then nucleate _bubbles_ of the stable vacuum via quantum tunnelling. In this scheme nucleation of bubbles of the true vacuum occurs on an exponentially long time scale. This FVD mechanism is quite general and has appeared in numerous other areas of physics [7, 8, 9, 10, 11]. More recently, metastable dynamics have also been found in open quantum systems [12, 13] associated with the emergence of first-order dissipative phase transitions, and are connected to the critical slowing down [14] in bosonic [15, 16] and spin systems [17, 18, 19]. Remarkably it has recently been shown that the FVD can also be observed in one dimensional quantum spin chains [20, 21, 22]. The simplest example of such a system is the quantum Ising chain with transverse and longitudinal fields. In the ferromagnetic phase, the longitudi nal field lifts the degeneracy between the two ground states with opposite magnetization. By properly tuning the system parameters, one can achieve the needed separation of the different timescales of the problem to observe the FVD. In this class of systems, the magnetization can be used to quantify the departure from the false vacuum, and is expected to decay exponentially with time. The decay rate itself was predicted to be exponentially small in the inverse of the parameter lifting the degeneracy between the two minima [20, 22], i.e. the longitudinal field. This implies that one then has to calculate the system dynamics up to very long times in order to observe such a phenomenon in practice. Although there is agreement about the exponential behaviour of the FVD rate, there exists inconsistencies in the literature concerning the prefactor that call for further investigations [20, 22]. Furthermore, the introduction of a longitudinal field breaks the integrability of the model; no exact solution exists. For these reasons the observation of the FVD in this class of systems is extremely challenging, and its characterization remains largely unexplored, both numerically and experimentally. Only recently have works appeared in the literature characterizing the FVD [21] and non-integrable dynamics [23, 24] of quantum spin chains using tensor-network techniques. Given the intrinsic quantum nature of the false vacuum decay in quantum spin chains, an important question is how its features are affected by the presence of an external measurement apparatus monitoring, for example, the local magnetization of the system. 
Indeed, the presence of measurements will lead to a competition between the unitary dynamics, nucleating bubbles of the true vacuum and spreading coherence, and local measurements, destroying correlations and heating up the system. In this work we investigate the role of continuous monitoring on the physics of metastibility and FVD. We investigate this issue using a numerical approach based on the combination of a matrix product state (MPS) [25, 26, 27, 28] ansatz for the many-body wave function and stochastic quantum trajectories [29, 30, 31, 32, 33, 34, 35]. The combination of these two techniques [33, 36], which we call Monte Carlo Matrix product states (MCMPS) has recently gained an increasing amount of attention due to the possibility to study measurement-induced phase transitions in the presence of interactions [37] and the computational complexity of monitored systems [38]. Crucially, this method gives access to the dynamics of single quantum trajectories. This resolution allow us to go beyond the computation of standard quantum mechanical expectation values (that could be obtained directly working with the statistical mixture generated by the stochastic dynamics) and gives the possibility to compute nonlinear quantities (as the entanglement entropy) that depends on the nature of the trajectory dynamics (and thus of the measurement protocol). We quantify this physics from the point of view of the magnetization, two-point correlation function, and the bipartite entanglement entropy. We find that continuous monitoring of the local magnetization provides a new pathway for the system to escape the false vacuum. Our numerical results suggest that this rate is exponentially small in the inverse of the measurement rate. At the same time the monitoring also induces heating, driving the system towards infinite temperatures at long times. We analyse the typical thermalization timescale, and found signatures of the quantum Zeno effect for large measurement rates. The paper is organized as follows: In Sec. 2 we briefly review the FVD decay mechanism in the closed quantum Ising model and present the measurement scheme. In Sec. 3 we discuss the simulation protocol used to compute the quantum trajectory dynamics within the framework of matrix-product-states. The results are then presented in Sec. 4, followed by our conclusions in Sec. 5. The model and the measurement scheme ### The quantum Ising chain and its false vacuum decay: a short review The system of interest is the quantum Ising model with both transverse and longitudinal fields: \[\hat{H}=-\sum_{i=1}^{L}\left(J\sigma_{i}^{z}\sigma_{i+1}^{z}+h_{x}\sigma_{i}^{x} +h_{z}\sigma_{i}^{z}\right), \tag{1}\] where \(L\) is the length of the chain, \(\{\sigma_{i}^{\alpha}|\alpha=x,y,z\}\) are the Pauli matrices acting of the \(i\)-th site, \(J>0\) the is the nearest-neighbour ferromagnetic coupling, and \(h_{x,z}\) set the magnitude of the transverse and longitudinal fields, respectively. For \(h_{z}=0\), the ground state of the Hamiltonian (1) has a second-order quantum phase transition at \(J/|h_{x}|=1\)[39]. For \(J/|h_{x}|>1\), the system spontaneously breaks the inherent \(\mathbb{Z}_{2}\) symmetry in the model (\(\sigma_{i}^{z}\rightarrow-\sigma_{i}^{z},\forall i\)), resulting in a ferromagnetic phase. In this ferromagnetic phase there are two degenerate ground states with opposite local magnetization along the \(z\) direction: \(\langle\sigma_{i}^{z}\rangle=\pm M\) with \(M=\pm(1-h_{x}^{2})^{1/8}\). 
In the regime \(h_{z}=0\), this system can be solved exactly by exploiting the Jordan-Wigner transformation, and thus allows for an analytical understanding of both the ground state and dynamical properties. Physically, the excitations on top of the ferromagnetic ground states are topological defects, i.e. domain walls (or kinks) interpolating between the two vacua. Since these domain walls map onto free fermionic excitations when \(h_{z}=0\), the energy of the system depends only on the number of kinks and their kinetic energies, not on the size of the resulting domains. Furthermore, since the fermionic excitations are non-interacting the model is integrable, hence there is no possibility for thermalization. When \(h_{z}\neq 0\) the situation qualitatively changes. The degeneracy between the two ground states is lifted and the energy difference between the two vacua scales extensively with the system size, \(L\), as \(\Delta\sim|h_{z}|ML\), where \(M\) is the magnetization. The state where the spins are aligned with the longitudinal field (the _true_ vacuum) is energetically favoured, while the state with the opposing magnetization is metastable and plays the role of the _false_ vacuum. The metastability of this false vacuum depends crucially on the system's excitations. When \(h_{z}\neq 0\), the excitations above the true vacuum can no longer be described by non-interacting fermions [40, 41, 42]. In particular, the domain walls now feel a potential linear in their separation, which prevents them from proliferating and leads to the confinement of excitations. This can be clearly seen by looking at the energy cost of forming a true vacuum bubble of size \(\ell\) with respect to the false vacuum: \[E_{b}=2m-(\ell-1)2h_{z}M. \tag{2}\] where \(2m\) is the energy needed to create two domain walls, while \((\ell-1)2h_{z}M\) is the energy difference produced by the longitudinal field. Since energy is conserved in the FVD process, there exists a resonant bubble size for which the energy cost vanishes: \(\tilde{\ell}=1+m/h_{z}/M\). Such a bubble can be resonantly excited during the dynamics. However, this process is very _slow_ for large bubbles as the system can only virtually create bubbles of size \(O(1)\) until the resonant bubble of size \(\tilde{l}\gg 1\) is created. Thus creating a resonant bubble is a high-order process in \(h_{z}\), resulting in a matrix element connecting the two states that is exponentially small in \(\tilde{l}\propto 1/h_{z}\). In Ref. [20] the following expression for the decay rate per site has been proposed: \[\gamma_{\text{FVD}}=\frac{\pi}{9}h_{z}Me^{-q/h_{z}}, \tag{3}\] where \(q\) and \(M\) are a function of \(h_{x}\) only. The exponential part of the decay rate (3) has been recently confirmed in numerical simulations [21]1. Footnote 1: The prefactor in Eq. (3) is non-universal, and is currently debated in the literature. See e.g. Ref. [22]. Furthermore, by tuning the ratio between the longitudinal \(h_{z}\) and the transverse field \(h_{x}\), one could activate new more relevant decay paths giving rise to different decay behaviours [43]. In order to observe the FVD, it is crucial \(h_{z}/J\ll 1\) and that \(h_{x}/J<1\). However, \(h_{x}/J\) can not be too close to unity, as the mass gap decreases as \(h_{x}/J\to 1\). When this happens it is no longer justified to assume that the system wants to populate states with only two kinks (i.e. a single domain wall). 
When more kinks are generated there can be additional non-trivial dynamics due to the collisions of different kinks, obscuring the FVD. In Ref. [21] the authors proposed a parameter regime where the FVD could be unambiguously observed [24]. In particular for the quantum Ising chain this is found, for example, by setting \(h_{z}/J\approx 0.08\) and \(h_{x}/J\approx 0.4-0.8\). We conclude this subsection by remarking that the numerical simulation of the FVD in quantum spin chains is computationally a hard task. Indeed, in order to probe the metastability of the false vacuum, we need to simulate the long-time dynamics following a quantum quench of an interacting spin system. Since we are dealing with a one-dimensional system, the most promising approach makes use of an infinite matrix-product-state (iMPS) ansatz for the many-body wavefunction. This ansatz accounts for the translational invariance of the system and allows to efficiently compute the time-evolved state up to times \(Jt\sim 15\) for the parameters range mentioned above. This time window allows for a direct observation of the FVD, but not of the final thermalization of the system expected for \(h_{z}\neq 0\). Figure 1: A sketch of the system under consideration. The decay of the false vacuum of the quantum Ising chain takes place through the virtual occupation of \(\mathcal{O}(\tilde{l})\) off-resonant states (\(\tilde{l}\) being the size of the bubble). This metastable dynamics is continuously monitored by measuring the local magnetization. This process induces incoherent spin flips in random positions (denoted by red spins) and affects the closed-system dynamics. ### Continuous monitoring of the quantum Ising chain: stochastic quantum dynamics The physics discussed previously was for the case of an isolated 1D Ising spin chain and its unitary evolution. In this work we will add a further measurement apparatus which continuously monitors the local magnetization along the longitudinal, or \(z\), direction. When the spins of the quantum Ising model are measured continuously in time, the evolution of the many-body wavefunction is governed by quantum trajectories \(|\psi(\mathbf{N_{t}})\rangle\) which follow the following stochastic Schrodinger equation [44]: \[d|\psi(\mathbf{N_{t}})\rangle=d\,t\left[-iH-\frac{\gamma_{d}}{2}\sum_{i=1}^{L} \left(L_{i}^{\dagger}L_{i}-\langle L_{i}^{\dagger}L_{i}\rangle_{\mathbf{N_{t} }}\right)\right]|\psi(\mathbf{N_{t}})\rangle+\sum_{i=1}^{L}\left(\frac{L_{i}}{ \sqrt{\left(L_{i}^{\dagger}L_{i}\right)_{\mathbf{N_{t}}}}}-1\right)\delta N_{t }^{i}|\psi(\mathbf{N_{t}})\rangle \tag{4}\] where \(H\) is the system Hamiltonian (1) ruling the unitary evolution, \(\gamma_{d}\) is the measurement rate, and \(L_{i}\) are the jump operators that measure the \(+z\) longitudinal component of the spin: \[L_{i}=n_{i}\equiv\frac{\sigma_{z}+\mathbb{I}}{2}. \tag{5}\] The vector \(\mathbf{N_{t}}=[N_{t}^{1},N_{z}^{2},\ldots,N_{t}^{L}]\) is a collection of uncorrelated Poisson processes which satisfy \(\delta N_{t}^{i}=0,1\), \(\left(\delta N_{t}^{i}\right)^{2}=\delta N_{t}^{i}\) and have expectation values \(\mathbb{E}[\delta N_{t}^{i}]=\gamma_{d}\,d\,t\,\langle L_{i}^{\dagger}L_{i} \rangle_{\mathbf{N_{t}}}\) where \(\langle\bullet\rangle_{\mathbf{N_{t}}}=\langle\psi(\mathbf{N_{t}})|\bullet| \psi(\mathbf{N_{t}})\rangle\). 
The quantum jump trajectories, or simply quantum trajectories (QTs) of Equation (4) faithfully describe the dynamics when the monitoring apparatus acts occasionally but abruptly on the system causing a random local spin to be projected along the \(+z\) direction (second term in Eq. (4)) with probability \(p_{i}(t)=\mathbb{E}[\delta N_{t}^{i}]=\gamma_{d}\,d\,t\,\langle n_{i}\rangle_{ \mathbf{N_{t}}}\) proportional to the measurement rate and to the probability of the spin on the \(i\)-th site to be in the \(+z\) direction. When no jump occurs, the system evolves according to the non-Hermitian Hamiltonian (the first term in Eq. (4)): \[H_{\mathrm{eff}} = H-i\frac{\gamma_{d}}{2}\sum_{i=1}^{L}L_{i}^{\dagger}L_{i} \tag{6}\] \[= H-i\frac{\gamma_{d}}{2}\sum_{i=1}^{L}n_{i}.\] with probability \(1-\sum_{i=1}^{L}p_{i}(t)\). For simplicity, we label the wavefunction resulting from a single noise realization as \(|\psi_{a}(t)\rangle\) and the conditional density matrix \(\rho_{a}(t)=|\psi_{a}(t)\rangle\langle\psi_{a}(t)|\). From these quantities we can reconstruct the mean state of the system at a given time \(t\) as: \[\overline{\rho}(t)=\lim_{N_{\mathrm{traj}}\rightarrow\infty}\frac{1}{N_{ \mathrm{traj}}}\sum_{a=1}^{N_{\mathrm{traj}}}\rho_{a}(t), \tag{7}\] where \(N_{\mathrm{traj}}\) is the number of QTs. One can readily show that given the stochastic Schrodinger equation in Eq. (4), the equation of motion for the mean density matrix \(\overline{\rho}(t)\) is the Linblad master equation [45]: \[\frac{d}{d\,t}\overline{\rho}(t) \equiv \mathcal{L}[\overline{\rho}(t)] \tag{8}\] \[= -\frac{i}{\hbar}\left[H_{\mathrm{eff}},\overline{\rho}(t)\right] +\gamma_{d}\sum_{i=1}^{L}n_{i}\rho(t)n_{i},\] where we have defined the Liouvillian superoperator \(\mathcal{L}[\bullet]\). From Eq.(8) we can conclude that the average dynamics induced by the continuous monitoring of the local magnetization is equivalent to that of a system coupled to an infinite temperature thermal bath causing pure dephasing at a rate \(\gamma_{d}\). Equation (8) admits a unique stationary state that is the maximally mixed density matrix: \[\rho_{\text{ss}}\equiv\lim_{t\rightarrow\infty}\overline{\rho}(t)=\frac{ \mathbb{I}}{2^{N}}. \tag{9}\] In other words, the continuous measurement protocol heats up the system, asymptotically driving it toward an equally-probable incoherent mixture of all the many-body states, i.e. infinite temperature. At large times the mean state is expected to relax exponentially to \(\rho_{\text{ss}}\), and the typical relaxation rate \(\gamma_{\text{th}}\) would be given by the so-called Liouvillian gap (i.e. the spectral gap of \(\mathcal{L}\)), characterising the asymptotic decay rate of the system [14]. The quantum expectation values of generic quantities, \(O\), which are independent of the state of the system \(\rho_{a}\) are related to the average over QTs: \[\langle O\rangle(t)=\text{Tr}\left[O\overline{\rho}(t)\right]=\lim_{N_{\text{ traj}}\rightarrow\infty}\frac{1}{N_{\text{traj}}}\sum_{a=1}^{N_{\text{traj}}} \langle\psi_{a}(t)|O|\psi_{a}(t)\rangle, \tag{10}\] i.e. we can sample the expectation value of a given observable by averaging over many stochastic realization. However, if the quantity, \(O\), we want to compute depends on \(\rho_{a}(t)\), the second equality in (10) does not hold. 
A particular example relevant to our case is the bipartite entanglement entropy: \[S_{a}(t)=-\text{Tr}[\rho_{a}^{A}(t)\ln\rho_{a}^{A}(t)] \tag{11}\] where the reduced density matrix for region \(A\) is \(\rho_{a}^{A}(t)=\text{Tr}_{B}\big{[}\rho_{j}(t)\big{]}\), with \(\text{Tr}_{B}\left[\bullet\right]\) denoting the partial trace over the complimentary region \(B\). One can immediately see that the average of Eq. (11) over quantum trajectories: \[S(t)=\lim_{N_{\text{traj}}\rightarrow\infty}\frac{1}{N_{\text{traj}}}\sum_{a= 1}^{N_{\text{traj}}}S_{a}(t)\neq-\text{Tr}\big{[}\overline{\rho}^{A}(t)\ln \overline{\rho}^{A}(t)\big{]} \tag{12}\] is not the same as the entanglement entropy one would obtain from using the mean reduced density matrix over the subspace \(A\), \(\overline{\rho}^{A}(t)\). The entanglement entropy calculated from the reduced density matrix will contain classical contributions due to the fact that \(\overline{\rho}^{A}(t)\) is a mixed state, alongside the contributions from quantum entanglement. For this reason the entanglement entropy \(S\) is a quantity that depends on the specific trajectory protocol arising from a given measurement procedure. Our approach based on MCMPS allows us to simulate the dynamics of individual QTs, thus enabling the study of both trajectory-dependent nonlinear quantities (like the entanglement entropy in (12)) as well as the quantum expectation value of standard observables like the magnetization (as described in (10)). A sketch of the system under consideration is shown in Fig. (1). ## 3 Simulation protocol with Monte Carlo matrix product states In this work we numerically compute the system dynamics according to the stochastic Schrodinger equation, Eq. (4), after that the system is initially prepared in the false vacuum. To this end we adopt a MPS representation of the many-body state [25], and we evolve the wavefunction using the Time Evolving Block Decimation (TEBD) scheme [27] combined with stochastic QTs accounting for the measurement process [36]. This method goes under the name of Monte Carlo matrix product states (MCMPS). All numerical calculation were done using the ITensor library [46, 47] of the Julia Programming Language [48]. The main steps of the algorithm are summarized as follows: * **Ground state preparation**. We first prepare the system in the ground state of the Hamiltonian in Eq. (1) with fields \(h_{x}\) and \(h_{z}\), \(H(h_{x},h_{z})\). This is done using an imaginary time evolution starting from an initially random MPS of \(L=100\) sites. The imaginary time evolution is also done using the TEBD scheme with a cutoff of singular values set to \(10^{-8}\), which controls the truncation error for the state propagation. We evolved the system up until an imaginary time \(J\tau=10\) with an imaginary time step \(Jd\tau=10^{-2}\). This choice of parameters provided adequate convergence. * **Quench from the false vacuum**. From the initial state, we suddenly quench the longitudinal field globally: \(h_{z}\to-h_{z}\), and evolve the initial state according to the same stochastic Schrodinger equation, Eq. (4), but with the Hamiltonian to \(H(h_{x},-h_{z})\). If the magnitude of the longitudinal field \(h_{z}\) is small compared to the other energy scales this procedure can be seen as a quench to the the false vacuum of the Hamiltonian \(H(h_{x},-h_{z})\). However this procedure always produces some unwanted low-lying excitations on top of the false vacuum that will affect the short-time behavior of the system. 
* **Stochastic quantum dynamics**. The algorithm for implementing Eq. (4) was shown in Refs. [33, 36], In order to implement the stochastic dynamics in Eq. (4), we discretize the time evolution and after each time step, \(dt\), we stochastically choose whether to evolve the system with the non-Hermitian effective Hamiltonian (6) [with probability \(1-\sum_{i=1}^{L}p_{i}(t)\)]: \[|\psi_{a}(t+dt)\rangle=\frac{e^{-iH_{\text{eff}}dt}|\psi_{a}(t)\rangle}{\|e^{ -iH_{\text{eff}}dt}|\psi_{a}(t)\rangle\|},\] (13) Figure 2: A sketch of the MCMPS method. At each time step \(dt\) the evolution of the MPS \(|\psi_{a}(t)\rangle\) obeys the stochastic dynamics (4). With probability \(1-\sum_{i=1}^{L}p_{i}(t)\) and \(p_{i}(t)=\gamma_{d}dt\langle n_{i}\rangle_{a}\), the system evolves according to the effective non-Hermitian Hamiltonian \(H_{\text{eff}}\) in its Trotterized form (15) (left side) or, with probability \(\sum_{i=1}^{L}p_{i}(t)\), undergoes a quantum jump. In this a quantum jump occurs on the \(j\)-th site (right side). or, otherwise, to apply the \(i\)-th jump operator [with probability \(p_{i}(t)\)]: \[|\psi_{a}(t+dt)\rangle=\frac{n_{i}|\psi_{a}(t)\rangle}{\|n_{i}|\psi_{a}(t) \rangle\|}. \tag{14}\] The trajectory evolution scheme described above has to performed within the MPS representation of the many-body wavefunction. The non-Hermitian evolution ruled by the effective Hamiltonian in Eq. (6) can be easily cast into a MPS friendly form using the Trotter decomposition: \[e^{-idtt_{\rm eff}}\simeq\left(\prod_{i=1}^{L-1}e^{-h_{\rm eff}^{i,i+1}dt/2} \right)\left(\prod_{i=1}^{L-1}e^{-h_{\rm eff}^{i-i,L-i+1}dt/2}\right)+\mathcal{ O}\left(dt^{3}\right), \tag{15}\] In defining Eq. (15) we used the fact that the effective Hamiltonian contains only local and nearest neighbours terms and thus can be written as \[H_{\rm eff}=\sum_{i=1}^{L-1}h_{\rm eff}^{i,i+1} h_{\rm eff}^{i,i+1} =-(J\sigma_{i}^{z}\sigma_{i+1}^{z}+h_{x}\sigma_{i}^{x}+h_{z} \sigma_{i}^{z}) \tag{16}\] The action of a given quantum jump can be easily computed by applying the local operator \(L_{i}=n_{i}\) to the MPS structure. The whole MCMPS procedure is illustrated in Fig. (2). As one may expect, the weak continuous measurement protocol is very sensitive to the time step \(dt\). From our explorations2 we found the optimal time step to be \(Jdt=10^{-3}\). Unless otherwise specified, we consider systems of size \(L=100\), and work with an initial Hamiltonian with \(h_{x}/J=0.8\) and \(h_{z}/J=0.08\). This choice of parameters was used in Ref. [21] to observe the FVD in the absence of measurement, and provides a benchmark against the closed system. Footnote 2: For any larger time steps, we observed discrepancies in the quantum trajectories for \(Jt\gtrsim 10\). The quantities under consideration are always averaged over \(N_{\rm traj}\geq 600\). ## 4 Results We characterize the FVD and the thermalization dynamics using several physical observables. The first is the following figure of merit: \[F(t)=\frac{\sum_{i=1}^{L}\left(\langle\sigma_{i}^{z}(t)\rangle_{t}+\langle \sigma_{i}^{z}(0)\rangle_{t}\right)}{2\sum_{i=1}^{L}\langle\sigma_{i}^{z}(0) \rangle_{t}}. \tag{17}\] Its behaviour quantifies the departure from the false vacuum starting from \(F(0)=1\). Since we know that the density matrix will relax to the the infinite temperature state (9) which has vanishing magnetization along all directions, we have at infinite time: \(\lim_{t\to\infty}F(t)=\frac{1}{2}\). 
In what follows we will study how \(F(t)\) interpolates between these two values and we will quantify what is the typical relaxation rate \(\gamma_{\rm th}\) towards \(\rho_{\rm ss}\). The analysis of \(\gamma_{\rm th}\) will be also corroborated by studying the behaviour of the Liouvillian gap defined in Eq. (24) via an exact diagonalization of \(\mathcal{L}\) for a small system of size \(L=6\). To quantify the behaviour of correlations we also compute the connected part of the equal-time two-point correlation function: \[C(r,t)=\frac{1}{N_{r}}\sum_{i=1}^{L}\left(\langle\sigma_{i}^{z}\sigma_{i+r}^{ z}\rangle_{t}-\langle\sigma_{i}^{z}\rangle_{t}\langle\sigma_{i+r}^{z}\rangle_{t}\right) \tag{18}\] In Eq. (18) we also average over all positions \(i\), thus we have introduced a factor of \(N_{r}\) to count all possible pairs of sites separated by a distance \(r\). The quantities in Eq. (17) and Eq. (18) will be evaluated by averaging over QTs [as described in Eq. (10)], and as they are linear in the state of the system, are properties of the mean state \(\overline{\rho}(t)\). Finally, we compute the behaviour of the half-chain entanglement entropy \(S\) defined in Eq. (12), averaging the single-trajectory entanglement entropy \(S_{\alpha}\) defined in Eq. (11) where \(A\) is the connected region embedding the sites \(i=1,\ldots,L/2\). As discussed in Sec. 2.2 this quantity in nonlinear in the state of the system and thus depends on the specific measurement protocol performed on the quantum Ising chain. ### Magnetization and metastability of the false vacuum in the presence of measurements Fig. (3) reports the results for the figure of merit of the average magnetization, Eq. (17), for various values of \(\gamma_{d}\). For all \(\gamma_{d}\) the system exhibits an exponential decay away from the false vacuum, i.e. the FVD after an initial transient (lasting up to \(Jt\sim 1\)). Finally, at large times, the system approaches the infinite temperature limit \(F(t\to\infty)=1/2\). First, let's consider the FVD physics. Increasing \(\gamma_{d}\) has two main effects, the first is that the exponential decay rate appears to grow larger, while the second effect is to decrease the time window where the exponential decay is observable. To quantify this more precisely, we extract the FVD rate, \(\gamma\), as a function of the measurement rate, \(\gamma_{d}\). To do this we fit the dynamics of \(F(t)\) to an exponential decay within the appropriate time-window and extract the decay rate. The details of this procedure are shown in Appendix A, while the results are shown in Fig. (4). Inspired by the analytical formula for the FVD rate in a closed system, we fit the numerical Figure 3: \(F(t)\) defined in Eq. (17) for various values of the coupling to the environment, \(\gamma_{d}\). In these simulations we consider a system of size \(L=100\), and with parameters \(h_{x}=0.8\) and \(h_{z}=0.08\). Each solid line represents the average of \(N_{\rm traj}\geq 600\) trajectories. The dashed line corresponds to the infinite temperature steady state where \(F(t)=1/2\). results for the FVD rate, \(\gamma\), to a phenomenological Arrhenius law: \[\gamma\propto\exp\left[-\frac{AJ}{B\gamma_{d}+h_{z}}\right] \tag{19}\] The fit appears to describe the physics reasonably well 3 for the range of measurement rates considered. 
Equation (19) is appealing as it smoothly connects to the expression (3) for vanishing measurement rate \(\gamma\to 0\)4 and states that the departure from the false vacuum is exponentially small in \(1/\gamma_{d}\) up to \(\gamma_{d}\thicksim J\). Footnote 3: with \(A\approx 0.07\) and \(B\approx 0.3\). Footnote 4: For \(\gamma_{d}=0\) our results slightly differ quantitatively with respect to what reported in Ref. [21]. This is due to finite size effects which are discussed in Appendix B. The fact that the FVD decay rate is still exponentially suppressed for quite large values of \(\gamma_{d}\) is quite surprising. It suggests that the metastability of the false vacuum is not immediately spoiled by measurements: the coupling to the environment assists the tunneling process and renormalizes the decay rate, i.e. the general trend remains the same. This is even more striking since the mechanism for departing from the false vacuum is quite different in the monitored scenario; the measurements can make a a single site with virtual spin in the \(+z\) direction real at a rate \(\gamma_{d}\). This process then causes a cascade of further measurements as the probability for a measurement to occur is proportional to \(\langle n_{i}\rangle_{\alpha}\), i.e. the probability for a spin to be oriented along the \(+z\) direction. To further study this mechanism, we examined the local magnetization for a single QT for various \(\gamma_{d}\), see Fig. (5). When \(\gamma_{d}=0\), we see the magnetization evolves slowly in the bulk. There are also significant dynamics in the magnetization at the boundaries due to finite size effects. When \(\gamma_{d}\neq 0\), we see that first, the change in the magnetization in the bulk is slower than when \(\gamma_{d}=0\). This is due to the non-Hermitian evolution of the system which favors the spins to stay oriented in the \(-z\) direction and suppresses the states with spins in the \(+z\) direction. A nice consequence of the non-Hermitian evolution is that finite size effects do not penetrate into the bulk, and one can access the thermodynamic limit more quickly. This is discussed in more detail in Appendix B. The initial change in the magnetization primarily Figure 4: FVD rate, \(\gamma\), as a function of \(\gamma_{d}/J\). The solid dots correspond to the results of the QT simulation, and the red line is a fit to Eq. (19). Figure 5: Local magnetization for a single quantum trajectory for various \(\gamma_{d}\). For \(\gamma_{d}=0\), the change in magnetization is dominated by the spins at the boundary, and represent finite size effects. For finite and increasing \(\gamma_{d}\) one observes the appearance of a single spin projected along the \(+z\) direction due to a quantum jump. This single site domain wall appears to spread ballistically and causes further quantum jumps, nucleating more spins. The number of quantum jumps increases both as a function of time and of \(\gamma_{d}\). comes from the measurement process creating a local spin oriented along the \(+z\) direction. These excitations then expand ballistically, causing more measurements. We do not observe the confinement of excitations on this time scale for finite \(\gamma_{d}\) due the cascade of further measurements. For larger values of \(\gamma_{d}\) this process occurs at a larger rate thus driving faster the system away from the false vacuum. 
Since measurements are the leading mechanism driving the system away from its initial state, it is quite natural to expect the same physics to occur in the limit of zero longitudinal field \(h_{z}=0\), i.e. the transverse Ising model. In this case, we study the dynamics when the system is prepared in the ground state where the magnetization is in the \(-z\) direction. The measurement apparatus can still project local spins onto the \(+z\) direction, which starts a cascade of further measurements that melts the order in a manner similar to the case of finite \(h_{z}\). Thus we expect there is an exponential decay in \(F(t)\) with a decay rate, \(\gamma\), given by Eq. (19) but with \(h_{z}=0\). We have numerically confirmed that the melting of the order exhibits an exponential decay that is described by an Eq. (19), as discussed in Appendix C. ### Heating and the emergence of the quantum Zeno regime The second major feature of the dynamics of the magnetization contained in Fig. (3) is a decay towards the infinite temperature state at long times. The asymptotic decay (\(tJ\gg 1\)) towards \(\rho_{\rm ss}\) is exponential with a thermalization rate, \(\gamma_{\rm th}\): \[\|\overline{\rho}(t)-\rho_{\rm ss}\|\sim e^{-\gamma_{\rm th}t}, \tag{20}\] which implies \(|F(t)-1/2|\thicksim e^{-\gamma_{\rm th}t}\) for \(Jt\gg 1\). For the parameters under consideration, we witness thermalization for \(\gamma_{d}/J\thicksim 1\). For significantly smaller or larger values of \(\gamma_{d}/J\) the thermalization time scale is longer than the time scales accessible to our MPS calculation. To overcome these numerical limitations, we note that the thermalization rate must correspond to the spectral gap of the Liouvillian superoperator defined in Eq.(24); the thermalization rate is governed by the eigenvalue of \(\mathcal{L}\) with the smallest absolute value of the real part [14]: \[\gamma_{\rm th}=-{\rm Re}\left[\lambda_{1}\right]. \tag{21}\] Hence the thermalization rate can be accessed by diagonalizing the Liouvillian superoperator. In Fig. (6) we report the thermalization rate for a quantum Ising spin chain obtained via exact diagonalization for a system of size \(L=6\) for and various values of \(\gamma_{d}/J\). The values of the longitudinal and transverse fields, \(h_{z}\) and \(h_{x}\), are the same as those used in the simulations shown in Fig. (3). As one can see for small \(\gamma_{d}/J\), the value of \(\gamma_{th}\) increases with measurement rate \(\gamma_{d}\). This intuitive behaviour indicates that the faster the system is monitored, the faster the chain heats up toward \(\rho_{\rm ss}\). However, for \(\gamma_{d}/J\gtrsim 5\) we find that the thermalization rate decreases with increasing \(\gamma_{d}\). This signals the appearance of the quantum Zeno regime [49, 50, 51, 52] in our protocol. In this regime the system is governed by a reduced subspace of dark states which are insensitive to the monitoring. In our case such dark states correspond to density matrices with definite magnetization along \(z\), see Appendix D. In the limit \(\gamma_{d}\gg h_{x}\) (defining the quantum Zeno regime of the model and that in our case also implies \(\gamma_{d}\gg J\)) we can obtain an analytical expression for \(\gamma_{\rm th}\) by employing a dissipative Schrieffer-Wolff transformation [53] in order to construct an effective Liouvillian for these dark states. The details of this calculation are shown in Appendix D. 
The result is that the thermalization rate in the quantum Zeno regime is given by \[\gamma_{\rm th}\approx\frac{8h_{x}^{2}}{\gamma_{d}}. \tag{22}\] Equation (22) is independent of the system size, and applies equally to infinitely large systems as local processes dominates over the non-local coupling rate \(J\) in the quantum Zeno regime. One key feature to note is that Eq. (22) doesn't depend on either \(J\) or \(h_{z}\) to leading order, which is a consequence of the fact that we monitor the \(z\) component of the spin. In Fig. (6) we also present this analytical solution alongside the thermalization rate obtained from the exact diagonalization of the Liouvillian. We find excellent agreement for large \(\gamma_{d}/J\). For small values of \(\gamma_{d}/J\) we find that the thermalization rate is proportional to \(\gamma_{d}/J\). The transition between these two regimes occurs when \(\gamma_{d}\approx\delta h_{x}^{2}/J\). For the value of \(h_{x}/J=0.8\) used in our simulations the quantum Zeno regime is for: \(\gamma_{d}\gg SJ\). ### Correlation functions Next we consider the equal-time two-point connected correlation function, \(C(r,t)\), defined in Eq. (18). The results of the numerical simulation for the connected correlation function are shown in Fig. 7 for various values of \(\gamma_{d}/J\). When \(\gamma_{d}=0\), we observe that the correlations grow balistically, after an initial transient that last up to \(Jt\simeq 1\). At larger times \(Jt\approx 10\) the correlations reach a maximum range, and then begins to turn back. This is related to the confinement of excitations due to the longitudinal magnetic field. The presence of continuous measurements progressively kills such correlations. For small values of \(\gamma_{d}/J\), one can still see that the correlations expand ballistically, but then decay at large values of \(r\) and \(Jt\). This effect becomes more extreme as one increases the measurement rate, \(\gamma_{d}\), drastically restricting the range (both in space and time) of quantum correlations. To examine this more carefully, in Fig. (8) we plotted the connected correlation function as a function of \(Jt\) at fixed \(r=1\) and as a function of \(\gamma_{d}\). After some initial growth due to the unitary dynamics, there is an exponential decay in the correlations. This exponential decay is evident for all values of \(\gamma_{d}\). The same behaviour can also be shown if one examines Figure 6: The Liouvillian gap \(\gamma_{\rm th}\) as determined from exact diagonalization (E.D.) of the Liouvillian for a small system of \(L=6\) alongside the analytical prediction in the Quantum Zeno (Q.Z.) regime, Eq. (22). We consider \(h_{x}/J=0.8\) and \(h_{z}/J=0.08\). For these parameters, the transition to the Q.Z. regime is denoted by the red dashed line, and occurs for \(\gamma_{d}/J\approx 5\). the connected correlation function as a function of \(r\) for fixed \(Jt\), where one observes an exponential decay of the correlations in space, see Fig. (8) b). This decay of correlations is a precursor to the eventual thermalization of the system, and is markedly different from the case \(\gamma_{d}=0\). Indeed we know that, for any finite measurement rate, \(\gamma_{d}>0\), the system will asymptotically approach \(\rho_{\rm ss}\) which implies \[\lim_{t\to\infty}C(r,t)=0,\quad\forall r, \tag{23}\] since the steady-state is completely factorizable in space \(\rho_{\rm ss}=\bigotimes_{i=1}^{L}\mathbb{I}/2\). 
### Entanglement Entropy In order to further characterize the behavior of correlations we have also studied the dynamics of the entanglement entropy, Eq. (11). For simplicity we only consider the bipartite entanglement entropy where we trace over half the system. The entanglement entropy is presented in Fig. 9 for the same parameters as our QT simulations of the magnetization. In the absence of dissipation the entanglement strictly grows and we observe: \(S\propto t\) at long times. For small values of \(\gamma_{d}/J\), the entropy still grows linearly in time for \(Jt<15\), however the rate of entropy growth decreases as the measurements destroy the correlations generated by the unitary dynamics. In the time-window observed, this process seems to be non-monotonic with the strength of \(\gamma_{d}\). It appears that for small values of \(\gamma_{d}\), the time range probed in our simulation belongs to a transient regime. Its actual duration is hard to quantify as the unitary and measurement dynamics are competing on equal footing. When \(\gamma_{d}>0.5J\) we instead see that dynamics due to the measurements overcome the unitary Figure 7: Logarithm of the connected correlation function, Eq. (18), for various \(\gamma_{d}/J\). In the absence of measurements, \(\gamma_{d}=0\), there is a clear growth of correlations with time due to the unitary dynamics. For finite \(\gamma_{d}\), there is a competition between the fore mentioned unitary dynamics, and dissipation. The measurements decrease the correlations at both large distances and times, in comparison to the closed system. Figure 8: Connected correlation function, Eq. (18), as a) function of \(Jt\) at fixed \(r=1\) and b) a function of \(r\) at fixed \(Jt=5\) for various \(\gamma_{d}/J\). When \(\gamma_{d}\neq 0\), there is a clear exponential decay. The decay rate of the correlation function as a function of \(Jt\) and \(r\) depends on \(\gamma_{d}\) and increases with increasing \(\gamma_{d}\). Figure 9: Bipartite entanglement entropy as a function of time for various values of \(\gamma_{d}\). Again we simulate the dynamics using QT with \(L=100\), \(h_{x}/J=0.8\), and \(h_{z}/J=0.08\). dynamics. For such values of \(\gamma_{d}\) the entropy approaches a stationary state value that decreases monotonically with increasing \(\gamma_{d}\). We expect such a saturation of the entanglement entropy to occur when the system thermalizes. However, only for \(\gamma_{d}/J\approx 1\) can we observe such physics in the time frame which is accessible to the numerics. As discussed previously, this is because either a) the thermalization time is too long to be observed in our numerics for \(\gamma_{d}/J\ll 1\), or b) we enter the quantum Zeno regime where the approach to the thermal state again becomes too slow to be observed for \(\gamma_{d}/J\gg 1\). Finally, for \(\gamma_{d}/J=1\) we show how the entanglement-entropy in the final steady state satisfies an area law. This is evident in Fig. (10), where we find the entanglement entropy to be independent of \(L\), up to fluctuations in the trajectories. Such an area law is expected in the quantum Zeno regime. Although we are not strictly in the quantum Zeno regime, we still observe an area law. This is most likely due to the fact the relevant states of the system probed the QTs are those with area law behaviours. ## 5 Conclusions In this work we characterized the decay from the false vacuum of the quantum Ising model in the presence of a measurement apparatus monitoring the local magnetization. 
To simulate the system dynamics we employed a Monte Carlo matrix-product-state approach. The many-body wavefunction is thus encoded in a matrix-product-state ansatz which evolves in time accordingly to a stochastic Schrodinger equation describing quantum jump trajectories. This protocol allows for the simulation of the real-time dynamics of individual quantum trajectories. We find that the presence of the continuous monitoring affects the decay of the false vacuum, introducing novel decay paths. In particular, the measurements can locally nucleate spins aligned along the \(+z\) and accelerate the departure from the false vacuum. We quantify this process and show that the magnetization fidelity, Eq. (17), decays exponentially within a Figure 10: Entanglement entropy, \(S(t)\), for fixed \(\gamma_{d}=1\) and variable system size \(L\). When the system thermalizes, the entropy saturates at a value that only depends on \(\gamma_{d}\), not the system size. In this simulation we used \(N_{traj}=600\). time window that depends on the measurement rate, and at a rate that is itself exponentially small in the measurement rate. At long times the system eventually approaches a thermal regime where the mean state of the system is maximally mixed. The typical timescale characterizing the asymptotic approach to the steady state depends on the measurement rate and shows signatures of the quantum Zeno effect. We connect the emergence of this regime to the behaviour of the spectral gap of the Liouvillian and we develop an analytical approach (based on the dissipative Schrieffer-Wolff transformation) able to predict such the Zeno decay rate as well as the critical point. From the methodological point of view, this work highlights the high potentiality of Monte Carlo matrix product states for the simulation of metastable phenomena in monitored interacting spin systems. This aspect paves the way for more general future explorations concerning, for example, the study the dynamics of the entanglement under different measurement protocols (from quantum jumps to quantum state diffusion [44, 54]) in matrix-product simulations [38]. This work also proposes another avenue for observing the FVD and thermalization in interacting systems. The continuous monitoring can speed up both the FVD and thermalization in a controllable way, rendering them visible on computational and experimental time scales. Although the FVD decay in spin-chains have not been currently observed experimentally, trapped ion experiments can already study non-integrable dynamics of meson confinement [55]. Another potential platform for studying the FVD is the two-component Bose-Einstein condensates [56, 57, 58, 59]. There the spin-degrees of freedom act as a quantum Ising model, but with the added benefit of the long coherence time provided by condensates. Extending such studies to optical lattice systems could then lead to direct realizations of similar physics studied in this manuscript. Finally we note that there are many other intriguing research directions. First and foremost, it would be interesting to develop an analytic treatment of the measurement apparatus via perturbation theory in the regime of small measurement rates, \(\gamma_{d}\). In particular, this could be done for the the dynamics of the mean state by examining the Lindblad master equation. 
Beyond this, there are more general questions pertaining to the FVD in open quantum systems, such as how much the FVD physics depends on the different unravelings (corresponding to different measurement protocols) of the Lindblad master equation and on the symmetries of the Hamiltonian. ## Acknowledgements We acknowledge useful discussions with A. Bastianello, L. Mazza, F. Minganti, D. Rossini, L. Rosso, M. Schiro and S. Scopa. We also acknowledge continuous insightful discussions with the experimental team at the Pitaevskii BEC center (R. Cominotti, G. Ferrari, G. Lamporesi, C. Rogora and A. Zenesini) and theoreticians (A. Recati, G. Rastelli) working on related topics. Funding information We acknowledge financial support from the Provincia Autonoma di Trento. ## Appendix A Details on determining the FVD rate We determine the FVD rate by examining the dynamics of the magnetization, as shown in Fig. (3). As stated in the main text, the signature of the FVD is an exponential decay away from the initial state, with a decay rate \(\gamma\). To emphasize the fitting procedure we plot \(\ln(F(t))\) for \(\gamma_{d}/J=0,0.1,0.3\), where \(F(t)\) is the figure of merit defined in Eq. (17). As \(\gamma_{d}\) increases, the time-window where the FVD is observable becomes smaller. In Tab. (1) we show the relevant time-window for various values of \(\gamma_{d}\). These time-windows are only approximate, and we fit the dynamics of \(\ln(F(t))\) with a linear function within these time windows. This procedure appears to be accurate, as extrapolating said fit to the entire FVD regime provides excellent agreement. Examples of this procedure are shown in Fig. (11). The linear fits are shown by the dotted-dashed lines, while the solid lines are the results of the numerical simulation. Within the concerned time-domain, the linear fit, i.e. the exponential decay, is a good description of the dynamics. \begin{table} \begin{tabular}{|c|c|c|} \hline \(\gamma_{d}\) & \(Jt_{min}\) & \(Jt_{max}\) \\ \hline 0 & 8 & 13 \\ \hline 0.1 & 5 & 7 \\ \hline 0.3 & 4 & 5 \\ \hline \end{tabular} \end{table} Table 1: Window where the FVD is observed in the dynamics of \(F(t)\), Eq. (17). These bounds are only approximate, and the fitting to the FVD is done within these time-domains. Figure 11: Fitting Protocol for the FVD rate from \(F(t)\), Eq. (17). The time-window where the FVD rate is unambiguous is shown in Tab. (1). Within the given domain we fit \(\ln(F(t))\) with a linear function, the slope of which is the FVD rate, \(\gamma\). The linear fits are shown by the dotted-dashed line, while the solid lines are the results of our numerical simulation. ## Appendix B Finite size effects In this work we focus on systems with \(L=100\). It is then natural to ask whether the system is truly in the thermodynamic limit. We examined this issue by looking at the magnetization, or more precisely \(F(t)\) in Eq. (17), at a time \(Jt=15\), both in the presence and absence of dissipation. The results are shown in Fig. (12). From Fig. (12) one can conclude that finite size effects are more important in the absence of continuous monitoring. In our simulations we work at \(L=100\) sites, and one does not see a direct convergence to the thermodynamic limit for \(\gamma_{d}=0\). For \(\gamma_{d}=0.1\), we see that the system approaches the thermodynamic limit for \(L\approx 100\) sites. 
This observation is quite natural; in the absence of monitoring, the unitary dynamics can spread correlations throughout the whole system, while the presence of monitoring will kill correlations, especially at larger distances. This lack of finite size effects in the presence of continuous monitoring can also be demonstrated by considering the entanglement entropy when the system has thermalized. We demonstrated this fact by evaluating the entanglement entropy for various \(L\) when \(\gamma_{d}=1\), as shown in Fig. (10). ## Appendix C The melting of order in the transverse Ising model In Sec. (4) we considered the FVD dynamics of the quantum Ising model, Eq. (1), in the presence of a finite longitudinal field. As discussed in the main text, the presence of measurements can nucleate single-site bubbles of the true vacuum at a rate \(\gamma_{d}\). This mechanism is quite different from that of the closed quantum system, and suggests that one can observe metastability and the melting of order in the transverse Ising model, i.e. when the longitudinal field is zero. We confirmed this numerically by performing our stochastic matrix product state algorithm on a system of \(L=100\) sites for \(h_{x}=0.8\) and \(h_{z}=10^{-4}\). The finite value of \(h_{z}\) was chosen to guarantee convergence to the desired ground state, but is otherwise negligible. The results of our simulations for \(N_{traj}=200\) QTs are shown in Fig. (13). In the absence of measurements, the initial state is an exact eigenstate of the system, hence there is no evolution of the magnetization. In the presence of measurement, the magnetization decays in a manner qualitatively similar to the case of finite longitudinal field, see Fig. (3). Similar to the case of finite \(h_{z}\), we can identify a regime where there is an exponential decay away from the initial state with a rate which we also call \(\gamma\). Similar to the FVD, \(\gamma\) sets the rate at which the initial magnetic order is melted by measurements. We expect \(\gamma\) to still obey an Arrhenius law, i.e. Eq. (19) but with \(h_{z}=0\). To test this we fit the measured decay rates to an Arrhenius law. To simplify the fitting we consider \(\gamma_{d}\ln\left(\gamma\right)\). This transforms the Arrhenius law into a linear fit, which is shown in Fig. (14). The linear fit reproduces the data quite well. Figure 12: Value of figure of merit, \(F(Jt=15)\), at a given time \(Jt=15\) and for various system sizes \(L\) in the presence (blue) and absence (red) of monitoring. In the absence of monitoring, the system is more sensitive to finite size effects. For \(\gamma_{d}=0.1\) we already see convergence to the infinite size limit for \(L=100\) sites. ## Appendix D Schrieffer-Wolff transformation and its application to the quantum Ising model In this section we consider the Schrieffer-Wolff transformation for open quantum systems [53], and apply this approach to the open quantum Ising model with both longitudinal and transverse magnetic fields in order to understand the quantum Zeno effect and the thermalization time scale. ### Schrieffer-Wolff transformation for open quantum systems The Schrieffer-Wolff (SW) transformation is a perturbative approach to generate an effective Hamiltonian or equation of motion for a reduced subspace of relevance to the problem. This can be done to arbitrary order in the coupling of the reduced subspace to the remaining Hilbert space [60]. 
Here we apply a similar procedure but to the Lindblad master equation: \[\partial_{t}\rho(t)=\mathcal{L}\rho(t) \tag{24}\] where \(\rho(t)\) is the time-dependent density matrix and \(\mathcal{L}\) is the Liouvillian super-operator of the form: \[\mathcal{L}\rho(t)=-i\left[H,\rho(t)\right]+\sum_{i}\left(L_{i}\rho(t)L_{i}^{ \dagger}-\frac{1}{2}\left\{L_{i}^{\dagger}L_{i},\rho(t)\right\}\right) \tag{25}\] Figure 13: F(t) defined in Eq. (17) for various values of the coupling to the environment, \(\gamma_{d}\), in the absence of the longitudinal field, \(h_{z}=0\). In these simulations we set \(L=100\) and \(h_{x}=0.8\), and \(N_{traj}=200\). The dashed line represents the infinite temperature steady state with zero magnetization, i.e. \(F(t)=1/2\). Figure 14: Arrhenius law behaviour for the decay rate \(\gamma\), as a function of \(\gamma_{d}/J\). The data corresponds to the results presented in Fig. (13). The red line corresponds to an Arrhenius law, with \(h_{z}=0\), see Eq. (19). Eq. (25) depends on the many-body Hamiltonian, \(H\), and the jump operators \(L_{i}\) which induce dephasing in the system. For the moment we will consider general jump operators and a general Hamiltonian. As stated previously, Eq. (25) is a super-operator, i.e. it maps an operator onto another operator, similar to how an operator maps one state onto another. In this way we can introduce a Hilbert space of all density matrices \(\rho(t)\), which the super-operator acts on. A state in this Hilbert space can be represented as a column vector, while the super-operator can be represented as a matrix. This is known as the vectorized representation. Consider a Liouvillian, \(\mathcal{L}_{0}\). In general \(\mathcal{L}_{0}\) is non-Hermitian and can have complex eigenvalues. These eigenvalues, \(\lambda_{\alpha,j}\), can be organized into sectors, \(\alpha\), within which the eigenvalues, indexed by \(j\), are closely spaced. More plainly, the eigenvalue spacing between the sectors, \(\lambda_{\alpha+1,j}-\lambda_{\alpha,j}\), is much larger than the spacing within each sector, \(\lambda_{\alpha,j+1}-\lambda_{\alpha,j}\). Without loss of generality, we will consider \(\alpha=0\) as the lowest-lying eigenvalues of the Liouvillian, while the other sectors have larger eigenvalues. The left and right eigenstates (or rather eigenmatrices) corresponding to these eigenvalues are: \[\mathcal{L}_{0}|\alpha,v_{j}\rangle=\lambda_{\alpha,j}|\alpha,v_{j}\rangle,\qquad\langle\alpha,u_{j}|\mathcal{L}_{0}=\lambda_{\alpha,j}\langle\alpha,u_{j}| \tag{26}\] which satisfy the normalization condition: \[\langle\alpha,u_{j}|\beta,v_{k}\rangle=\delta_{\alpha,\beta}\delta_{j,k} \tag{27}\] Finally we note that the projector onto the \(\alpha\) subspace can be written as: \[P_{\alpha}=\sum_{j}|\alpha,v_{j}\rangle\langle\alpha,u_{j}| \tag{28}\] The goal of the SW transformation will be to perturbatively integrate out the couplings between the subspaces and to construct an effective Liouvillian that is "block diagonal", i.e. with no coupling between sectors of differing \(\alpha\): \[\mathcal{L}_{eff}=\sum_{\alpha}P_{\alpha}\mathcal{L}_{eff}P_{\alpha} \tag{29}\] In this way we can trivially trace out the degrees of freedom that are irrelevant to the problem. To this end consider the Liouvillian: \[\mathcal{L}=\mathcal{L}_{0}+\xi\mathcal{L}_{1} \tag{30}\] where \(\xi\) is a small dimensionless number. In order to implement the SW transformation we use the following transformation: \[\mathcal{L}^{\prime}=Q\mathcal{L}Q^{-1} \tag{31}\] where the operator \(Q\) is given by \[Q=e^{\eta},\qquad Q^{-1}=e^{-\eta} \tag{32}\] Eq. 
(31) is a similarity transformation which preserves the trace of the density matrix. Formally speaking, Eq. (31) can be written as a set of nested commutators: \[\mathcal{L}^{\prime}=\mathcal{L}+[\eta,\mathcal{L}]+\frac{1}{2!}[\eta,[\eta, \mathcal{L}]]+... \tag{33}\] Eq. (33) can be evaluated at each order in \(\xi\) by also expanding \(\mathcal{L}^{\prime}\) and \(\eta\) to the appropriate order: \[\mathcal{L}^{\prime}=\mathcal{L}^{(0)}+\xi\mathcal{L}^{(1)}+\xi^ {2}\mathcal{L}^{(2)}+...\] \[\eta=\xi\eta^{(1)}+\xi^{2}\eta^{(2)}+... \tag{34}\] At order \(O\left(\xi^{0}\right)\) the effective Liouvillian is just \(\mathcal{L}_{0}\). At \(O\left(\xi\right)\) one finds: \[\mathcal{L}^{(1)}=\mathcal{L}_{1}+[\eta^{(1)},\mathcal{L}_{0}] \tag{35}\] As the goal of the SW transformation is to integrate out, i.e. decouple, the subspace \(\alpha=0\) from the higher \(\alpha\neq 0\) subspaces, we require that the "off-diagonal" elements vanish. That is, we require: \[\langle\alpha u_{k}|\mathcal{L}^{(1)}|\beta v_{j}\rangle=0,\qquad\alpha\neq\beta \tag{36}\] From Eqs. (35-36) one can then obtain the matrix elements for \(\eta^{(1)}\): \[\langle\alpha,u_{k}|\eta^{(1)}|\beta,v_{j}\rangle=\frac{\langle\alpha,u_{k}| \mathcal{L}_{1}|\beta,v_{j}\rangle}{\lambda_{\alpha,k}-\lambda_{\beta,j}} \tag{37}\] for \(\alpha\neq\beta\). For \(\alpha=\beta\) we can choose \(\langle\alpha,u_{j}|\eta^{(1)}|\alpha,v_{k}\rangle=0\), without loss of generality. Given Eq. (37), one finds that the leading correction to the Liouvillian is: \[\mathcal{L}^{(1)}=\sum_{\alpha}P_{\alpha}\mathcal{L}_{1}P_{\alpha} \tag{38}\] where we have used the property that \(\eta^{(1)}\) only couples different sectors of eigenvalues together. At \(O(\xi^{2})\) one finds a similar expression for the effective Liouvillian: \[\mathcal{L}^{(2)}=\left[\eta^{(1)},\mathcal{L}_{1}\right]+\left[\eta^{(2)}, \mathcal{L}_{0}\right]+\frac{1}{2}\left[\eta^{(1)},\left[\eta^{(1)},\mathcal{L }_{0}\right]\right] \tag{39}\] Similar to the linear order case, we can again look at the matrix elements of Eq. (39). Just as in the first order case, we set the "off-diagonal" matrix elements of Eq. (39) to zero, and solve for \(\eta^{(2)}\). To simplify the calculation we note that from Eq. (35): \[\left[\eta^{(1)},\left[\eta^{(1)},\mathcal{L}_{0}\right]\right]=\left[\eta^{ (1)},\mathcal{L}^{(1)}-\mathcal{L}_{1}\right] \tag{40}\] Thus: \[\langle\alpha u_{k}|\eta^{(2)}|\beta v_{j}\rangle=\frac{1}{2}\frac{1}{(\lambda_{\alpha,k}-\lambda_{\beta,j})}\langle\alpha,u_{k}|\left(\left[\eta^{(1)},\mathcal{L}_{1}\right]+\left[\eta^{(1)},\mathcal{L}^{(1)}\right]\right)|\beta,v_{j}\rangle \tag{41}\] valid for \(\alpha\neq\beta\). A similar analysis to the linear case shows that again \(\eta^{(2)}\) only couples different sectors together, hence we only need to consider matrix elements with \(\alpha\neq\beta\). Eq. 
(41) allows one to evaluate the effective Liouvillian at quadratic order: \[\mathcal{L}^{(2)}=\frac{1}{2}\sum_{\alpha}P_{\alpha}\left[\eta^{(1)},\mathcal{ L}^{(1)}\right]P_{\alpha} \tag{42}\] In terms of the original Liouvillian, the matrix elements of the new effective Liouvillian are: \[\langle\alpha u_{k}|\mathcal{L}_{eff}|\alpha v_{j}\rangle=\langle \alpha u_{k}|\mathcal{L}_{0}|\alpha v_{j}\rangle+\xi\langle\alpha u_{k}| \mathcal{L}_{1}|\alpha v_{j}\rangle\] \[\quad\quad+\frac{\xi^{2}}{2}\sum_{\beta}\sum_{\ell}\langle \alpha u_{k}|\mathcal{L}_{1}|\beta v_{\ell}\rangle\langle\beta u_{\ell}| \mathcal{L}_{1}|\alpha v_{j}\rangle\] \[\quad\quad\quad\times\left(\frac{1}{\lambda_{\alpha,k}-\lambda_{\beta,\ell}}+\frac{1}{\lambda_{\alpha,j}-\lambda_{\beta,\ell}}\right) \tag{43}\] Eq. (43) is the final result which tells one how to construct an effective Liouvillian of the form Eq. (29). ### Application to the quantum Ising model Let us now consider the application of the SW transformation to the study of the quantum Zeno effect and the thermalization of a one dimensional quantum Ising model with transverse and longitudinal magnetic fields that is coupled to an infinite thermal bath. The dynamics of the density matrix are governed by Eq. (25). The unitary dynamics are governed by the following Hamiltonian: \[H=-\sum_{i}\left[J\hat{\sigma}_{i}^{z}\hat{\sigma}_{i+1}^{z}+h_{x}\hat{\sigma}_ {i}^{x}+h_{z}\hat{\sigma}_{i}^{z}\right] \tag{44}\] where \(\hat{\sigma}_{i}^{(x,y,z)}\) denotes the \(x,y,z\) Pauli matrix on site \(i=1,2,...,N\). The dephasing is governed by the set of jump operators for each site \(i\): \[L_{i}=\sqrt{\gamma_{d}}\frac{1}{2}\left(\hat{I}_{i}+\hat{\sigma}_{i}^{z}\right) \tag{45}\] where \(\gamma_{d}\) is the dephasing rate and \(\hat{I}_{i}\) is the identity operator for site \(i\). In the quantum Zeno limit, \(J\ll\gamma_{d}\), the unitary part of the Liouvillian acts as a small perturbation. Hence we define the zeroth-order Liouvillian as: \[\mathcal{L}_{0}\rho(t)=\sum_{i}\biggl{(}L_{i}\rho(t)L_{i}^{\dagger}-\frac{1}{2 }\{L_{i}^{\dagger}L_{i},\rho(t)\}\biggr{)} \tag{46}\] It is straightforward to show that Eq. (46) has a set of states with zero eigenvalue, the so-called dark states. These states do not exhibit dissipation and have \(\lambda_{0,j}=0\). These states are associated with the probability density matrices: \[|0,j\rangle=|\{\sigma_{i}\}\rangle\langle\{\sigma_{i}\}| \tag{47}\] where \(|\{\sigma_{i}\}\rangle\) is a many-body state with definite spin along the z-direction: \[\sum_{i}\hat{\sigma}_{i}^{z}|\{\sigma_{i}\}\rangle=\sum_{i} \sigma_{i}|\{\sigma_{i}\}\rangle \tag{48}\] with \(\sigma_{i}=\pm 1\). The next degenerate set of states have eigenvalues \(\lambda_{1,j}=-\gamma_{d}/2\) and correspond to density matrices of the form: \[|1,j\rangle=|\{\sigma_{i}\}\rangle\langle\{\sigma_{i}\}^{\prime}| \tag{49}\] where \(|\{\sigma_{i}\}^{\prime}\rangle\) denotes a many-body state that differs from \(|\{\sigma_{i}\}\rangle\) by a single flipped spin. The unitary evolution will naturally couple these sets of eigenstates together. Thus we treat: \[\mathcal{L}_{1}=-i[H_{x},\rho(t)] \tag{50}\] where \(H_{x}=-\sum_{i}h_{x}\hat{\sigma}_{i}^{x}\) is the contribution to the Hamiltonian from the transverse field. Then we can apply the derived SW transformation to obtain an effective theory for the dark states. Before proceeding further, we note that in writing Eq. (50) we have dropped the remaining terms of the Hamiltonian in Eq. (44), since they are diagonal in the \(\hat{\sigma}^{z}\) basis and therefore produce a vanishing contribution when acting on the dark states. 
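To make the spectral structure described above concrete, the following short NumPy sketch (our own illustration, not part of the original text) builds the vectorized form of the dephasing-only Liouvillian \(\mathcal{L}_{0}\) of Eq. (46) for a small chain and confirms that its spectrum contains the \(2^{L}\) dark states at eigenvalue 0, with the next sector at \(-\gamma_{d}/2\); the chain length and \(\gamma_{d}\) are arbitrary choices made only for this check.

```python
import numpy as np

# Check of the dark-state structure of the dephasing dissipator, Eq. (46).
# Column-stacking vectorization convention: vec(A X B) = (B^T kron A) vec(X).
L_sites, gamma_d = 3, 1.0
dim = 2 ** L_sites
sz = np.diag([1.0, -1.0])
id2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator `op` at site i of the chain."""
    mats = [id2] * L_sites
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

I = np.eye(dim)
D = np.zeros((dim * dim, dim * dim), dtype=complex)
for i in range(L_sites):
    Li = np.sqrt(gamma_d) * 0.5 * (I + site_op(sz, i))   # jump operator of Eq. (45)
    LdL = Li.conj().T @ Li
    D += np.kron(Li.conj(), Li)                           # L rho L^dagger
    D -= 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I))      # -(1/2){L^dagger L, rho}

evals = np.linalg.eigvals(D)
# Expect 2^L zero eigenvalues (dark states) and the next sector at -gamma_d/2.
print(np.round(sorted(evals.real), 6))
```

The eigenvalues come out as \(-(\gamma_{d}/2)\) times the number of sites at which the two classical configurations labelling \(|\{\sigma_{i}\}\rangle\langle\{\sigma_{i}\}^{\prime}|\) differ, consistent with the sector structure assumed in the SW expansion.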
Upon substituting Eq. (50) into Eq. (43), one can immediately show that the term linear in \(h_{x}\) is zero and one needs to go to quadratic order. A careful examination of the matrix elements shows that one can write the effective Liouvillian at second order in \(h_{x}\) as: \[\mathcal{L}_{eff}=\frac{4h_{x}^{2}}{\gamma_{d}}\sum_{i}\bigl{[}\hat{\sigma}_{ i}^{x}\rho_{0}(t)\hat{\sigma}_{i}^{x}-\rho_{0}(t)\bigr{]} \tag{51}\] where \(\rho_{0}(t)=P_{0}\rho(t)P_{0}\) is the density matrix projected onto the set of dark states. Eq. (51) has the form of a Liouvillian with no unitary time evolution, but with dissipation in the \(x\)-direction of strength \(4h_{x}^{2}/\gamma_{d}\). It is well known that the system will thermalize on a time scale set by the Liouvillian gap, which in our case is simply the negative of the smallest (in magnitude) nonzero eigenvalue of the effective Liouvillian super-operator. This is because the Liouvillian gap represents the longest time scale in the problem, while the larger eigenvalues of the Liouvillian represent motion that has already been damped out. Given Eq. (51) it is straightforward to show that the Liouvillian gap is \(8h_{x}^{2}/\gamma_{d}\), or equivalently the thermalization time scale, \(\tau_{th}\), is: \[\tau_{th}=\frac{\gamma_{d}}{8h_{x}^{2}} \tag{52}\] The linear dependence of \(\tau_{th}\) on \(\gamma_{d}\) in this regime is indicative of the quantum Zeno effect. Increasing the dissipation slows down the dynamics as the thermalization is ultimately controlled by states that are dark to the dissipation.
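As a numerical sanity check of Eq. (52) (our own addition, with arbitrarily chosen small-system parameters), one can diagonalize the full vectorized Liouvillian of Eq. (25) for a tiny chain deep in the Zeno regime and compare its gap with \(8h_{x}^{2}/\gamma_{d}\); agreement is expected only up to corrections that vanish as \(\gamma_{d}\) grows. Open boundary conditions are assumed in the sketch below.

```python
import numpy as np

# Compare the exact Liouvillian gap of a tiny open Ising chain, Eq. (25),
# with the Zeno-limit prediction 8 h_x^2 / gamma_d of Eq. (52).
L_sites, J, hx, hz, gamma_d = 3, 1.0, 0.8, 0.08, 50.0
dim = 2 ** L_sites
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
id2 = np.eye(2)

def site_op(op, i):
    mats = [id2] * L_sites
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Hamiltonian of Eq. (44); open boundary conditions are an assumption here.
H = np.zeros((dim, dim))
for i in range(L_sites):
    H -= hx * site_op(sx, i) + hz * site_op(sz, i)
for i in range(L_sites - 1):
    H -= J * site_op(sz, i) @ site_op(sz, i + 1)

I = np.eye(dim)
# Column-stacking vectorization: vec(A X B) = (B^T kron A) vec(X).
Lsup = -1j * (np.kron(I, H) - np.kron(H.T, I))
for i in range(L_sites):
    Li = np.sqrt(gamma_d) * 0.5 * (I + site_op(sz, i))
    LdL = Li.conj().T @ Li
    Lsup += np.kron(Li.conj(), Li) - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I))

re = np.sort(np.linalg.eigvals(Lsup).real)[::-1]   # re[0] ~ 0 is the steady state
gap = -re[1]
print("numerical gap:", gap, " Zeno prediction:", 8 * hx**2 / gamma_d)
```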
2310.13385
Tuna: Instruction Tuning using Feedback from Large Language Models
Instruction tuning of open-source large language models (LLMs) like LLaMA, using direct outputs from more powerful LLMs such as Instruct-GPT and GPT-4, has proven to be a cost-effective way to align model behaviors with human preferences. However, the instruction-tuned model has only seen one response per instruction, lacking the knowledge of potentially better responses. In this paper, we propose finetuning an instruction-tuned LLM using our novel \textit{probabilistic ranking} and \textit{contextual ranking} approaches to increase the likelihood of generating better responses. Probabilistic ranking enables the instruction-tuned model to inherit the relative rankings of high-quality and low-quality responses from the teacher LLM. On the other hand, learning with contextual ranking allows the model to refine its own response distribution using the contextual understanding ability of stronger LLMs. Furthermore, we apply probabilistic ranking and contextual ranking sequentially to the instruction-tuned LLM. The resulting model, which we call \textbf{Tuna}, consistently improves the performance on Super Natural Instructions (119 test tasks), LMentry (25 test tasks), Vicuna QA, and can even obtain better results than several strong reinforcement learning baselines. Our code and data are available at \url{ https://github.com/microsoft/LMOps}.
Haoran Li, Yiran Liu, Xingxing Zhang, Wei Lu, Furu Wei
2023-10-20T09:55:06Z
http://arxiv.org/abs/2310.13385v1
# Tuna: Instruction Tuning using Feedback from Large Language Models ###### Abstract Instruction tuning of open-source large language models (LLMs) like LLaMA, using direct outputs from more powerful LLMs such as Instruct-GPT and GPT-4, has proven to be a cost-effective way to align model behaviors with human preferences. However, the instruction-tuned model has only seen one response per instruction, lacking the knowledge of potentially better responses. In this paper, we propose finetuning an instruction-tuned LLM using our novel _probabilistic ranking_ and _contextual ranking_ approaches to increase the likelihood of generating better responses. Probabilistic ranking enables the instruction-tuned model to inherit the relative rankings of high-quality and low-quality responses from the teacher LLM. On the other hand, learning with contextual ranking allows the model to refine its own response distribution using the contextual understanding ability of stronger LLMs. Furthermore, we apply probabilistic ranking and contextual ranking sequentially to the instruction-tuned LLM. The resulting model, which we call **Tuna**, consistently improves the performance on Super Natural Instructions (119 test tasks), LMentry (25 test tasks), Vicuna QA, and can even obtain better results than several strong reinforcement learning baselines. Our code and data are available at [https://github.com/microsoft/LMOps](https://github.com/microsoft/LMOps). ## 1 Introduction Large language models (LLMs) have made significant progress by scaling up model size and data size Peters et al. (2018); Devlin et al. (2019); Radford et al. (2019); Brown et al. (2020); OpenAI (2023) for unsupervised pre-training and subsequently applying reinforcement learning from human feedback (RLHF) to align model responses with human preferences Christiano et al. (2017); Ouyang et al. (2022). More recently, instruction tuning Wei et al. (2022) with the Self-Instruct algorithm Wang et al. (2022) has emerged as a cost-effective method for aligning with human preferences. In this approach, open LLMs like LLaMA Touvron et al. (2023) can be finetuned on instruction-following data generated by OpenAI GPT using the Self-Instruct algorithm. The Alpaca model Taori et al. (2023) exemplifies this technique, which enables close alignment with human preferences while reducing dependence on human-labeled data. However, instruction tuning offers only a broad guideline for the base LLMs to transition from "next token prediction" to a more interactive, instruction-following style. As a result, the model may learn some superficial features or styles from the instruction data but still lacks a deeper understanding of what constitutes a preferred response. For instance, when given a question like "Give three tips for staying healthy", a base LLM may generate fluent yet undesirable continuations, while an instruction-tuned LLM could offer three general tips. Humans might prefer more detailed tips over general tips, but such tips are less likely to be sampled since they have lower likelihood within the current model distribution. This can be attributed to the fact that they are either unseen during instruction tuning or hard to sample due to the exposure bias (Ranzato et al., 2015). Figure 1: The finetuning process using probabilistic ranking (top), contextual ranking (middle), and a combination of both (bottom). 
To address this, we propose further finetuning of an instruction-tuned LLM to discern the quality of multiple responses more precisely, using our novel probabilistic ranking (Sec. 2.2; Fig. 1 top) and contextual ranking (Sec. 2.3; Fig. 1 middle) approaches. Probabilistic ranking enables the instruction-tuned LLM to inherit the high-quality and low-quality responses as well as their relative rankings from the teacher LLM (e.g., text-davinci-003). In contrast, contextual ranking aims to re-balance the instruction-tuned model's own response distribution with the help of stronger LLMs (e.g., GPT-4), mitigating the exposure bias issue. We apply probabilistic ranking and contextual ranking sequentially to an instruction-tuned model, i.e., Alpaca (Taori et al., 2023), resulting in a model called **Tuna** (Sec. 2.4; Fig. 1 bottom). We evaluate Tuna on various benchmarks, including Super Natural Instructions (Wang et al., 2022), which contains 119 diverse test tasks; LMentry (Efrat et al., 2022), comprising 25 tasks to assess the basic capabilities and robustness of LLMs; and Vicuna QA (Chiang et al., 2023) which evaluates the model's ability to answer a diverse set of questions with the assistance of GPT-4. Experimental results demonstrate that the Tuna model not only consistently outperforms the standard instruction-tuned models on all benchmarks, but also surpasses several strong RLHF baselines (Ouyang et al., 2022). To summarize, our contributions are as follows: * We propose _probabilistic ranking_ and _contextual ranking_, which enable the instruction-tuned model to distinguish high-quality and low-quality responses and assign higher probability to the former accordingly. * The **Tuna** model, obtained by sequentially applying probabilistic ranking and contextual ranking on an instruction-tuned LLM, achieves better results than several strong benchmarks, including RLHF models; * Our model, data and code will be released to facilitate future research. ## 2 Methodology In this section, we describe how to obtain our **Tuna** model using the feedback from LLMs. We first describe the vanilla instruction tuning. We then introduce our probabilistic ranking and contextual ranking approaches. Lastly, we describe how to integrate both ranking approaches. ### Instruction Tuning LLMs like GPT-3 (Brown et al., 2020) have been trained on a massive text corpus using maximum likelihood estimation (MLE): \[L_{\text{MLE}}(y)=-\frac{1}{|y|}\sum_{t}\log p(y_{t}|y_{<t};\theta), \tag{1}\] where \(\theta\) represents the parameters of the base model. The pre-training objective function compels the model to predict the next token \(y_{t}\) given its prefix \(y_{<t}=[y_{0},y_{1},...,y_{t-1}]\). A sufficiently-trained LLM can generate fluent continuations given almost any prefix. However, the generated continuations may not align well with human preferences. As the primary goal of an LLM is to assist humans, it becomes essential to encourage the generation of content that follows human instructions and aligns with human preferences. The current dominant approach to enhance LLMs' instruction-following ability is called _instruction tuning_(Mishra et al., 2021; Wei et al., 2022; Taori et al., 2023), which finetunes the base LLMs in a supervised manner on instruction-response pairs \(\{i,r\}\) (where \(i\) is an instruction and \(r\) is its response) using MLE: \[L_{\text{MLE}}(i,r)=-\frac{1}{|r|}\log p(r|i;\theta^{\prime}), \tag{2}\] where \(\theta^{\prime}\) represents the parameters of the instruction-tuned model. 
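As a concrete illustration of Eq. (2), a minimal PyTorch sketch is given below (our own illustration, not code from the paper; a HuggingFace-style causal language model exposing `.logits` is assumed, and all names are hypothetical). The instruction tokens are masked out so that the cross-entropy is averaged only over the response tokens.

```python
import torch
import torch.nn.functional as F

def instruction_tuning_loss(model, instruction_ids, response_ids):
    """L_MLE(i, r) = -(1/|r|) * log p(r | i; theta'), cf. Eq. (2)."""
    input_ids = torch.cat([instruction_ids, response_ids], dim=-1)
    labels = input_ids.clone()
    labels[:, : instruction_ids.size(-1)] = -100          # ignore instruction tokens
    logits = model(input_ids).logits                       # (batch, seq, vocab)
    # Shift so that token t is predicted from tokens < t.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
    )
```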
After instruction tuning, we expect the model distribution \(p(\cdot|i;\theta^{\prime})\) to allocate higher probabilities to proper responses like \(r\) rather than undesirable continuations. Note that the responses in instruction-response pairs can either be annotated by humans1 or generated by strong LLMs, such as Instruct-GPT or GPT-4 (Wang et al., 2022). A prevalent and cost-effective approach for generating instruction tuning data is the Self-Instruct algorithm (Wang et al., 2022a). Specifically, it uses a strong LLM, e.g., text-davinci-003, to create instructions based on a few seed instructions, and then generates a single response for each instruction using the same LLM. ### Probabilistic Ranking Instruction tuning with the data generated by the Self-Instruct algorithm is essentially a form of sequence-level distillation (Kim and Rush, 2016). The rationale behind this class of distillation method is that the current commercial LLMs have significantly better capabilities than their open-source counterparts. Instead of learning from the single-response data, our _probabilistic ranking_ approach leverages the relative rankings of multiple responses based on the teacher model's probabilities for better pseudo label distillation (see Fig. 1 top). Let \(r\) denote the original response for instruction \(i\) in the instruction tuning dataset. We query strong (teacher) LLMs, such as text-davinci-003, to generate \(N\) new responses for \(i\). Let \(r^{(0)},r^{(1)},\ldots,r^{(N-1)}\) denote these new responses, and \(p(r^{(0)}|i),p(r^{(1)}|i),\ldots,p(r^{(N-1)}|i)\) denote their probabilities. While the teacher LLMs are expected to produce responses of comparable quality on average, there will inevitably be some variation in the quality of the generated responses. This inherent variability manifests itself in various aspects, such as differences in accuracy (Wang et al., 2023), response length, and level of details provided (Wang et al., 2023). Intuitively, if a model is perfectly distilled, the relative probabilities assigned to two samples should be the same as those of the teacher model. Specifically, let \(p(r^{(j)}|i;\theta^{\prime})\) and \(p(r^{(k)}|i;\theta^{\prime})\) denote the probabilities of \(r^{(j)}\) and \(r^{(k)}\) w.r.t. the student model. If \(p(r^{(j)}|i)>p(r^{(k)}|i)\), then \(p(r^{(j)}|i;\theta^{\prime})>p(r^{(k)}|i;\theta^{\prime})\). We use the following normalized log-likelihood as the teacher model quality score to account for differences in response lengths: \[s(i,r^{(k)})=\frac{\log p(r^{(k)}|i)}{|r^{(k)}|^{\beta}},\quad k=\{0,...,N-1\} \tag{3}\] where \(|r^{(k)}|\) is the length of \(r^{(k)}\) and \(\beta\) represents the length penalty. We then rank those responses in decreasing order based on \(s(i,r^{(k)})\). The resulting instruction-response pairs become \(\{i,r,(r^{[0]},...r^{[N-1]})\}\), where \(i,r\) are from the original instruction tuning data, and \(r^{[j]}\) is considered to have better quality than \(r^{[k]}\), if \(j<k\). Once we obtain the ranked responses, we can encourage our model to learn from these rankings using a pairwise ranking objective, which has been successfully employed in previous work (Zhong et al., 2020; Liu et al., 2022; Zhang et al., 2022; Zhao et al., 2023). 
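The scoring and ranking step of Eq. (3) is simple to implement; the following sketch is our own illustration of it, assuming the teacher's per-token log-probabilities for each sampled response are already available.

```python
def quality_score(token_logprobs, beta=1.3):
    """s(i, r) = log p(r | i) / |r|^beta, the length-penalized score of Eq. (3)."""
    return sum(token_logprobs) / (len(token_logprobs) ** beta)

def rank_responses(responses, token_logprobs_per_response, beta=1.3):
    """Return responses sorted from highest to lowest teacher score."""
    scored = [
        (quality_score(lps, beta), resp)
        for resp, lps in zip(responses, token_logprobs_per_response)
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [resp for _, resp in scored]

# Example: four candidate responses with their per-token log-probabilities.
responses = ["r0", "r1", "r2", "r3"]
logprobs = [[-0.2, -0.5], [-0.1], [-0.9, -0.4, -0.3], [-0.6, -0.6]]
print(rank_responses(responses, logprobs))
```

The exponent \(\beta\) trades off informativeness against verbosity: \(\beta=1\) reduces to the plain per-token log-likelihood, while larger values penalize long responses more strongly.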
The ranking objective function is as follows: \[L_{\text{rank}}=\sum_{0\leq j<k\leq N-1}L_{\text{rank}}^{j,k} \tag{4}\] \[L_{\text{rank}}^{j,k}=\max\Big{(}0,v_{\theta^{\prime}}^{k}-v_{\theta^{\prime} }^{j}+m\times(k-j)\Big{)},\quad j<k \tag{5}\] where \(v_{\theta^{\prime}}^{k}=\frac{1}{|r^{[k]}|}\log p\big{(}r^{[k]}|i;\theta^{ \prime}\big{)}\) and \(m>0\) is the margin hyper-parameter. The ranking loss, \(L_{\text{rank}}\), aims to teach the model to distinguish good responses from bad ones based on the teacher LLM's perspective. In addition to \(L_{\text{rank}}\), we also apply a cross-entropy loss on the original response as regularization: \[L=L_{\text{rank}}+\lambda L_{\text{MLE}},\quad L_{\text{MLE}}=-\frac{1}{|r|} \log p(r|i;\theta^{\prime}) \tag{6}\] where \(r\) is the original response, and \(\lambda>0\) controls the importance of \(L_{\text{MLE}}\), which helps prevent over-optimization of the ranking loss. After learning with probabilistic ranking, the model can better assign probabilities to superior and inferior responses. ### Contextual Ranking During the instruction tuning or the probabilistic ranking stage, the model is finetuned to generate a good \(r\) given an instruction \(i\). However, given the same \(i\) during inference, the model may still generate a relatively low-quality response \(r^{\prime}\). This is related to the exposure bias problem (Ranzato et al., 2015), where the model fails to generate \(r\) due to accumulated errors during the auto-regressive generation process. To address this issue, we use our _contextual ranking_ approach to refine the distribution of responses generated by the model itself, assigning higher probabilities to better responses with the help of strong LLMs (Fig. 1 middle), thus alleviating exposure bias (Ranzato et al., 2015). For each instruction, we first sample \(N\) responses from the instruction-tuned model itself, i.e., \(r^{(0)},r^{(1)},...,r^{(N-1)}\sim p(\cdot|i;\theta^{\prime})\). We want the samples to be diverse enough so that better responses are more likely to appear in the sampled results. To ensure diversity, we impose a constraint on the ROUGE-L Lin (2004) score between each pair of responses, requiring it to be less than a threshold \(\tau\). If the ROUGE-L score exceeds \(\tau\), we increase the sampling temperature and resample another response. If multiple trials still result in a ROUGE-L score above \(\tau\), we retain the least similar response from the trials. After obtaining \(N\) responses, we leverage the contextual understanding ability of commercial LLMs, such as GPT-4 OpenAI (2023), to rank them based on various aspects. The ranking process consists of multiple steps. First, we ask GPT-4 to assess whether the instruction requires an open-ended answer (e.g., story generation) or a close-ended answer (e.g., solving a math problem). We then request GPT-4 to generate its own response as a reference. Next, GPT-4 compares the reference response with the \(N\) responses from different aspects and assigns scores to each response. For open-ended instructions, GPT-4 evaluates relevance (score 0-5), level of details/justification (score 0-5), and accuracy (score 0-5) of the model responses compared to its reference response. For close-ended instructions, the evaluation criteria are accuracy (score 0-5), level of details/justification (score 0-5), and clarity (score 0-5). Finally, GPT-4 ranks responses in decreasing order based on the sum of their scores (see Appendix E for our complete prompt). 
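For reference, a minimal PyTorch sketch of the combined objective in Eqs. (4)-(6) is given below (our own illustration with hypothetical names); it assumes the student's length-normalized log-likelihoods of the ranked responses and the MLE loss on the original response have been computed elsewhere.

```python
import torch

def tuna_objective(v, mle_loss, margin=0.1, lam=1.0):
    """
    v        : tensor of shape (N,), length-normalized log-likelihoods of the
               ranked responses r^[0] ... r^[N-1] under the student model,
               ordered from best (index 0) to worst (index N-1).
    mle_loss : negative log-likelihood of the original response, cf. Eq. (2).
    Returns L = L_rank + lam * L_MLE, cf. Eqs. (4)-(6).
    """
    n = v.size(0)
    l_rank = v.new_zeros(())
    for j in range(n - 1):
        for k in range(j + 1, n):
            # Hinge term: the better response r^[j] should score higher than
            # r^[k] by at least margin * (k - j).
            l_rank = l_rank + torch.clamp(v[k] - v[j] + margin * (k - j), min=0.0)
    return l_rank + lam * mle_loss

# Example with dummy values.
v = torch.tensor([-1.0, -1.2, -1.1, -1.9], requires_grad=True)
loss = tuna_objective(v, mle_loss=torch.tensor(2.3))
loss.backward()
print(loss.item(), v.grad)
```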
We also manually evaluated GPT-4 rankings, which have achieved a strong correlation with human judgements (see Appendix G, H). As in Sec. 2.2, the resulting instruction tuning dataset becomes \(\{i,r,(r^{[0]},...r^{[N-1]})\}\). Note that the \(r^{[k]},0\leq k\leq N-1\), is derived from the instruction-tuned model itself. Lastly, we use the same objective function as in Eq. 6 to encourage the model to assign higher probabilities to better responses. ### Integrating Probabilistic and Contextual Ranking Given an instruction-tuned model, there are several options for further finetuning: 1) learning with probabilistic ranking alone; 2) learning with contextual ranking alone; 3) learning with probabilistic ranking followed by contextual ranking (see Fig. 1 bottom). We refer to the models finetuned with these three methods as \(\textbf{Tuna}_{\textbf{p}}\), \(\textbf{Tuna}_{\textbf{c}}\), and \(\textbf{Tuna}\), respectively. To optimally integrate both probabilistic ranking and contextual ranking techniques, it is recommended to first obtain a \(\textbf{Tuna}_{p}\) model, followed by applying contextual ranking to \(\textbf{Tuna}_{p}\)'s response distribution, resulting in the Tuna model. There are two reasons for this choice. First, although it is beneficial to learn the ranking of different responses from the teacher LLM's perspective (probabilistic ranking), the model might not fully capture the teacher's ranking knowledge due to its limited capacity. Second, contextual ranking enables the model to better adapt to its own capacity by working with the model's own generations. By generating its own responses, the model can finetune its understanding with the help of stronger LLMs and more effectively produce responses that are both closer to human preferences and compatible with its capacity constraints, alleviating the exposure bias issue Ranzato et al. (2015). ## 3 Experiments ### Model and Data In our experiments, we use a 7B LLaMA model Touvron et al. (2023) as the base model. The instruction tuning data is sourced from Alpaca Taori et al. (2023), which consists of 52K instructions paired with responses that are generated by text-davinci-003 using the Self-Instruct algorithm Wang et al. (2022). We perform instruction tuning on 52K Alpaca data using recommended hyperparameters, such as a learning rate of 2e-5 and the AdamW optimizer \((0.9,0.999)\)Loshchilov and Hutter (2019).2 For simplicity, we also refer to the instruction-tuned model as **Alpaca**. Footnote 2: [https://github.com/AetherCortex/Llama-X](https://github.com/AetherCortex/Llama-X) For probabilistic ranking, we input 52K instructions from Alpaca dataset into text-davinci-003 to produce \(N=4\) responses per instruction along with their log-likelihoods3, with an inference temperature of 1. We calculate response scores using Eq. 3 with \(\beta\) being 1.3, and rank the responses accordingly. Subsequently, we finetune the Alpaca model for 1 epoch with a learning rate 1e-5, margin \(m=0.1\), and cross entropy regularizer weight \(\lambda=1.0\). We denote the model trained exclusively with probabilistic ranking as \(\textbf{Tuna}_{\textbf{p}}\). Footnote 3: GPT-4 is more powerful but it does not return log-likelihoods. For contextual ranking, we sample \(N=4\) responses from the Alpaca model with temperature \(T=1\) for each instruction. To avoid similar generations, we ensure the pairwise ROUGE-L Lin (2004) between responses is less than \(\tau=0.8\). 
Otherwise, we remove the similar response, increase the temperature by 0.1, and resample. If three trials fail to produce unique enough responses, we keep the least similar one. We then employ GPT-4 to rank responses for the first **13K** instruction data with the GPT-4 inference temperature set to 0. The contextual ranking prompt is shown in Table 9.4 The finetuning hyperparameters follow those of probabilistic ranking. We refer to the model trained on 13K contextual ranking data of the Alpaca model as \(\mathbf{Tuna_{c}}\). Footnote 4: The cost of calling OpenAI API is listed in Appendix B. Furthermore, we use the 13K GPT-4 ranking data to train a proxy ranking model (PRM) based on StableLM-3B.5 The PRM is employed to re-rank Alpaca's responses on 52K instructions. We refer to the Alpaca model trained with 52K ranking data entirely generated by the PRM as \(\mathbf{Tuna_{c}}\) (**PRM**). Footnote 5: [https://github.com/Stability-AI/StableLM](https://github.com/Stability-AI/StableLM) Lastly, we also collect 13K GPT-4 contextual ranking data based on \(\mathbf{Tuna_{p}}\)'s responses instead of Alpaca's. We refer to the model finetuned on \(\mathbf{Tuna_{p}}\) as \(\mathbf{Tuna}\). We also included strong reinforcement learning baselines for comparison (i.e., PPO-sim and PPO-sim-GPT4-20K models from AlpacaFarm Dubois et al. (2023)).6 Footnote 6: We also trained our own RLHF model, which is not as good as the ones in AlpacaFarm. The comparison can be found in Appendix I. ### Evaluation Super Natural Instruction (Super NI)Super NI Wang et al. (2022) contains 119 test tasks designed to evaluate a model's cross-task generalization ability. It includes a variety of classification and generation tasks, such as textual entailment and title generation. We report both 0-shot and 2-shot performance, where 0-shot provides only an instruction (referred to as "definition" in their literature) and 2-shot offers two additional positive examples. The evaluation metric for all 119 tasks is **ROUGE-L**Lin (2004), which is strongly correlated with human evaluation with a Pearson coefficient of 0.998 according to Wang et al. (2022). Greedy decoding is applied during inference. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**Super NI**} & \multicolumn{2}{c}{**LMentry**} & \multicolumn{2}{c}{**Vicuna QA**} \\ & **0-shot** & **2-shot** & **LMentry Score** & **Win** & **Lose** & **Tie** \\ \hline LLaMA & 11.0 & 23.6 & 26.3 & 4\% & 92\% & 4\% \\ T5-LM 11B & - & 30.2 & 20.6 & - & - & - \\ T0 11B & - & 32.3 & 31.6 & - & - & - \\ InstructGPT 175B & - & 52.1 & 48.4 & - & - & - \\ \hline Alpaca & 36.0 & 44.5 & 31.4 & - & - & - \\ + PPO-sim & 31.9 (-4.1) & 37.5 (-7.0) & 27.8 (-3.6) & 79\% & 16\% & 5\% \\ + PPO-sim-GPT4-20K & 37.1 (+1.1) & 44.9 (+0.4) & 27.8 (-3.6) & 74\% & 22\% & 4\% \\ \(\mathbf{Tuna_{p}}\) & **39.4 (+3.4)** & 43.9 (-0.6) & **35.0 (+3.6)** & 68\% & 27\% & 5\% \\ \(\mathbf{Tuna_{c}}\) & 37.7 (+1.7) & **46.6 (+2.1)** & 32.2 (+0.8) & 74\% & 20\% & 6\% \\ \(\mathbf{Tuna_{c}}\) (PRM) & 34.2 (-1.8) & 40.1 (-4.4) & 32.2 (+0.8) & 75\% & 19\% & 6\% \\ Tuna & **38.7 (+2.7)** & **45.0 (+0.5)** & **34.7 (+3.3)** & **86\%** & **10\%** & **4\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of different models on Super NI, LMentry and Vicuna QA. The numbers in bold indicate the top-2 results. 
The numbers in parentheses indicate the performance differences compared to Alpaca. The results of T5-LM 11B Raffel et al. (2020), T0-11B Sanh et al. (2022), InstructGPT 175B Ouyang et al. (2022) are taken from Wang et al. (2022); Efrat et al. (2022). The RLHF baselines PPO-sim and PPO-sim-GPT4-20K, which apply the PPO algorithm Schulman et al. (2017), are taken from Dubois et al. (2023). \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & Alpaca & Alpaca+PPO-sim & \(\mathbf{Tuna_{p}}\) & \(\mathbf{Tuna_{c}}\) & \(\mathbf{Tuna}\) \\ \hline **Score** & 2.13 & 2.95\({}^{*}\) & 2.98\({}^{*}\) & 3.15\({}^{*}\) & 3.80\({}^{*\dagger}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Human evaluation on Vicuna QA. * denotes that the model is significantly (\(p<0.01\)) better than Alpaca, while \(\dagger\) denotes that Tuna is significantly (\(p<0.01\)) better than other models. LMentryLMentry Efrat et al. (2022) is a benchmark that primarily focuses on the accuracy and robustness aspects of LLMs' generations. It contains 25 short tasks that are trivial to humans but challenging for LLMs. The final metric is **LMentry score**, which is calculated by multiplying its mean accuracy on 25 tasks with the robustness score. The model will be evaluated in a 0-shot manner, and greedy decoding is applied during inference. Vicuna QAVicuna QA [14] comprises 80 test questions across 9 categories that measure an LLM's ability to generate relevant, detailed and accurate responses and it has been widely adopted in many works. Instead of having a ground truth for evaluation, it conducts pairwise comparisons with the help of GPT-4 [4]. It prompts GPT-4 to compare the outputs of our models to the Alpaca model. We report the win/lose/tie rate against the Alpaca model. Human EvaluationAdditionally, we conduct human evaluations on Vicuna QA. Specifically, responses from five anonymous systems, namely Alpaca, Alpaca + PPO-sim, Tuna, Tuna\({}_{p}\), and Tuna\({}_{c}\), were randomly shuffled and presented to annotators who were then asked to rank these outputs. The scoring was designed such that the \(i\)-th ranked system receives a score of \(6-i\), meaning the best-ranked system receives a score of 5, and the worst-ranked system receives a score of 1. Each question was annotated by two different annotators, and the score was averaged. ### Main Results The main results are presented in Table 1. After instruction tuning, Alpaca demonstrates significant performance improvements over LLaMA on all three benchmarks. This highlights the successful transition from the "next token prediction" paradigm to a more interactive instruction-following paradigm. Furthermore, both contextual and probabilistic ranking enhance performance across all three benchmarks. Specifically, Tuna\({}_{c}\) exhibits more improvement on the Super NI7 2-shot results while Tuna\({}_{p}\) performs better on Super NI 0-shot and LMentry, narrowing the performance gap with much larger models like InstructGPT-175B. On the Vicuna QA benchmark, both Tuna\({}_{p}\) and Tuna\({}_{c}\) outperform Alpaca significantly on nearly \(70\%\) of the questions, as evaluated by GPT-4. Upon comparison with the RLHF baselines, Tuna\({}_{p}\) and Tuna\({}_{c}\) consistently demonstrate superior performances on both the Super NI and LMentry benchmarks. However, when it comes to the Vicuna QA benchmark, their performance is marginally lower than that of the RLHF baselines. 
Moreover, Tuna achieves the best performance on Vicuna QA while maintaining competitive scores on Super-NI and LMentry. Human results on Vicuna QA (see Table 2) also confirm that humans prefer the responses from our models. Footnote 7: ROUGE is used as the default metric on Super NI. However, our results follow the same trend using BERTScore (see Appendix J). Furthermore, Tuna\({}_{c}\) (PRM) demonstrates comparable performance to Tuna\({}_{c}\) on Vicuna QA and LMentry, but it underperforms both Tuna\({}_{c}\) and Alpaca on Super NI. This suggests that although the PRM has primarily learned ranking from the GPT-4 contextual ranking data, it also introduces some noise during the learning process. Overall, it is more effective to learn directly from GPT-4 contextual ranking data.8 Footnote 8: Experiments with more PRMs can be found in App. D. ### Ablation Study In this subsection, we delve deeper into the performance of our approach by examining several aspects, including: (a) the effect of more responses in instruction tuning, (b) the order of applying two ranking methods, (c) the influence of the cross entropy regularization, (d) the amount of probabilistic ranking data, and (e) the risks of GPT-4 evaluation. More Responses in Instruction TuningWe explore whether Tuna's effectiveness is solely due to the increased response data by examining the impact of adding more responses per instruction during instruction tuning. We create a new model, Alpaca-Mul, by adding four extra responses from the probabilistic ranking dataset to the Alpaca dataset and fine-tuning the LLaMA model using Eq. 2. The results are presented in Table 3. Upon evaluation on Super NI, Alpaca-Mul's performance is nearly identical to that of Alpaca but falls short when compared to the 0-shot settings of Tuna\({}_{p}\) and Tuna. On LMentry, Alpaca-Mul outperforms Alpaca, yet it still does not reach the performance levels of Tuna\({}_{p}\) and Tuna. Interestingly, in the Vicuna QA task, Alpaca-Mul slightly underperforms compared to Alpaca. These findings suggest that merely adding more responses without differentiating them does not necessarily lead to improved response generation. Overall, the results of Alpaca-Mul indicate that Tuna's superior performance cannot be solely attributed to the availability of more response data. Integration OrderAn alternative approach to Tuna involves first training the Tuna\({}_{c}\) model, and subsequently continuing training the Tuna\({}_{c}\) model with probabilistic ranking data. The resulting model is referred to as Tuna\({}_{cp}\). We explore various strategies for training Tuna\({}_{cp}\): 1) finetuning Tuna\({}_{c}\) with the first 13K probabilistic ranking data (Tuna\({}_{cp}\)-13K); 2) finetuning Tuna\({}_{c}\) model with last 39K probabilistic ranking data (Tuna\({}_{cp}\)-39K); 3) finetuning Tuna\({}_{c}\) model with 52K probabilistic ranking data (Tuna\({}_{cp}\)-52K). Additionally, we also try to finetune original Alpaca model with a combination of 13K GPT-4 contextual ranking data (generated from Alpaca model's responses) and the last 39K probabilistic ranking data (mix-Tuna-52K). We also finetune Alpaca model with 52K contextual ranking data (13K GPT-4 contextual ranking + 39K ranking-model-generated data) plus 52K probabilistic ranking data (mix-Tuna-104K). The training details are listed in the Appendix C. The results are listed in Table 3. 
None of the combination strategies consistently outperform both Tuna\({}_{p}\) and Tuna\({}_{c}\) across the Vicuna QA and Super NI benchmarks. On LMentry, however, finetuning Tuna\({}_{c}\) with probabilistic ranking data is beneficial, especially when no duplicate data is present (Tuna\({}_{cp}\)-39K). This suggests that shorter probabilistic ranking data are beneficial when high accuracy and robustness are the top priority. Interestingly, Tuna\({}_{cp}\) is not comparable to Tuna, indicating that the order in which the model is trained with contextual and probabilistic ranking matters. One plausible explanation is that both the original Alpaca data and the probabilistic ranking data are generated by text-davinci-003, while Tuna\({}_{c}\) has significantly shifted the model distribution by re-ranking the Alpaca model's responses, making it challenging to finetune Tuna\({}_{c}\) with probabilistic ranking data again. The Effect of Cross Entropy RegularizerWe examine the influence of the weight \(\lambda\) of the cross entropy regularizer in Eq. 6 on performance by varying \(\lambda\) across different values: \(\{0,0.1,1,5,10\}\) while training the Tuna\({}_{c}\) model. Fig. 2 illustrates that as \(\lambda\) increases, the performance on accuracy-oriented benchmarks such as Super NI and LMentry improves, while the performance on open questions does not necessarily follow the same trend. On one hand, this finding suggests that with a small \(\lambda\), learning with contextual ranking may induce long and detailed answers, but those answers are not always accurate. On the other hand, it implies that accuracy-oriented benchmarks and open QA benchmarks are complementary, and researchers should consider more diverse test cases to thoroughly evaluate a model Wang et al. (2023). \begin{table} \begin{tabular}{l c c c c} \hline \hline rank & 1 & 2 & 3 & 4 \\ \hline contextual ranking & 66.4 & 55.2 & 51.4 & 44.8 \\ prob. ranking & 55.8 & 54.3 & 52.5 & 49.4 \\ PRM & 69.2 & 57.8 & 50.9 & 44.7 \\ \hline \hline \end{tabular} \end{table} Table 4: The average ranking lengths of contextual ranking data, probabilistic ranking data and the data generated by the proxy ranking model (PRM). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**Super NI**} & \multicolumn{2}{c}{**LMentry**} & \multicolumn{2}{c}{**Vicuna QA**} \\ & **0-shot** & **2-shot** & **LMentry Score** & **Win** & **Lose** & **Tie** \\ \hline Alpaca & 36.0 & 44.5 & 31.4 & - & - & - \\ Alpaca-Mul & 34.7 (-1.3) & **45.7 (+1.2)** & 33.9 (+2.5) & 42\% & 53\% & 5\% \\ \hline Tuna\({}_{p}\) & **39.4 (+3.4)** & 43.9 (-0.6) & **35.0 (+3.6)** & 68\% & 27\% & 5\% \\ Tuna & **38.7 (+2.7)** & 45.0 (+0.5) & 34.7 (+3.3) & **86\%** & **10\%** & **4\%** \\ Tuna\({}_{c}\) & 37.7 (+1.7) & **46.6 (+2.1)** & 32.2 (+0.8) & **74\%** & **20\%** & **6\%** \\ Tuna\({}_{cp}\)-13K & 35.7 (-0.3) & 44.0 (-0.5) & 33.5 (+2.1) & 58\% & 37\% & 5\% \\ Tuna\({}_{cp}\)-39K & 34.8 (-1.2) & 43.4 (-1.1) & **35.4 (+4.0)** & 46\% & 48\% & 6\% \\ Tuna\({}_{cp}\)-52K & 35.0 (-1.0) & 42.6 (-1.9) & 33.8 (+2.4) & 51\% & 41\% & 8\% \\ \hline mix-Tuna-52K & 37.7 (+1.7) & 44.2 (-0.3) & 30.0 (-1.4) & 70\% & 23\% & 7\% \\ mix-Tuna-104K & 36.0 (+0.0) & 40.0 (-4.5) & 32.6 (+1.2) & 55\% & 40\% & 5\% \\ \hline \hline \end{tabular} \end{table} Table 3: Different combinations of probabilistic ranking data and contextual ranking data. The numbers in bold represent the top-2 results. The numbers in parentheses represent the performance difference compared to Alpaca. 
**The Amount of Probabilistic Ranking Data** We investigate the impact of varying the amount of probabilistic ranking data used for finetuning the \(\text{Tuna}_{p}\) model by testing different data sizes, i.e., \(\{0,13000,24000,52000\}\). \(0\) refers to the Alpaca model. The results, shown in Fig. 3, reveal that for probabilistic ranking, 13K data points are sufficient for Super NI and LMentry, while Vicuna QA requires 24K data points. We conjecture that this saturation phenomenon can be attributed to two reasons. First, 52K Alpaca instructions generated by the Self-Instruct algorithm are not diverse enough, as new instructions are produced by text-davinci-003 using prompt instructions sampled from a limited seed task pool. Second, instruction tuning itself may only require a limited amount of data to perform behavior cloning, as discussed in Zhou et al. (2023). **The Risks in GPT-4 Evaluation** We present evidence that evaluating a model on open QA with the help of GPT-4 may be risky. Table 4 displays the ranking length of our proxy ranking model (PRM). It shows that the PRM has inherited GPT-4 ranking's bias towards longer outputs Li et al. (2023). However, as we discussed in Sec. 3.3, the data generated by the PRM is not as good as the original 13K contextual ranking data, as assessed by more targeted automatic evaluations like Super NI and LMentry. Despite the inferior quality of the PRM-generated data, the performance on Vicuna QA remains almost unaffected (see \(\text{Tuna}_{c}\) (PRM) in Table 1). This observation suggests that evaluating LLMs on open QA with GPT-4 may not always be as accurate as it appears, echoing the findings of Wang et al. (2023). It highlights the need for more representative test questions or additional targeted benchmarks for evaluation. Figure 3: The effect of varying the number of probabilistic ranking data on \(\text{Tuna}_{p}\). Figure 2: The effect of varying the weight \(\lambda\) of cross entropy regularization in Eq. 6 on \(\text{Tuna}_{c}\). The win/lose/tie rate on Vicuna is computed against Alpaca. ## 4 Related Work Instruction TuningInstruction tuning aims to improve the usability of base language models Brown et al. (2020); Raffel et al. (2020); Chowdhery et al. (2022) by finetuning them on instruction-response pairs in a zero-shot Wei et al. (2022) or few-shot manner Mishra et al. (2021); Wang et al. (2022); Mallen et al. (2023). The instruction data can be sourced from off-the-shelf NLP benchmarks Mishra et al. (2021); Wei et al. (2022); Wang et al. (2022) or generated by LLMs Wang et al. (2022); Honovich et al. (2022); Taori et al. (2023); Peng et al. (2023). **Ranking Loss** Learning through re-ranking sequence-level outputs has been studied in sequence-to-sequence models Wiseman and Rush (2016); Edunov et al. (2018); Liu et al. (2022); Zhang et al. (2022). BRIO and MoCa algorithms (Liu 
Key differences include: 1) our pipeline finetuning strategy; 2) our focus on ranking the model's responses; 3) our use of the original response for cross entropy regularization, while they select the highest-reward response. Additionally, Liu et al. (2023) also employs GPT models for finetuning BART Lewis et al. (2019) on the summarization task. Pre-Trained Model EvaluationLarge pretrained models are powerful evaluation metrics due to their strong contextual understanding ability, such as BERTScore Zhang* et al. (2020), BARTScore Yuan et al. (2021), MoverScore Zhao et al. (2019), COMET Rei et al. (2020), and GPTScore Fu et al. (2023). More recently, there are more evaluation strategies based on GPT-3.5 and GPT-4 Liu et al. (2023); Gao et al. (2023). ## 5 Conclusion In this paper, we propose to finetune an instruction-tuned LLM using our probabilistic ranking approach (Tuna\({}_{p}\)), contextual ranking approach (Tuna\({}_{c}\)), and a combination of both (Tuna). Our comprehensive experiments demonstrate consistent performance improvements across three benchmarks: Super Natural Instructions (119 test tasks), LMentry (25 test tasks), and vicuna QA. Furthermore, our methods outperform popular reinforcement learning from human feedback baselines that rely on the proximal policy optimization algorithm. These findings underscore the effectiveness of our approach in enhancing the performance of instruction-tuned LLMs and pave the way for future research in this area. ## Limitations Despite the promising results achieved by our Tuna model, there are several limitations that should be acknowledged. The first limitation is GPT-4 ranking inconsistency. In our experiments, we relied on GPT-4 for contextual ranking, which may introduce bias due to the inconsistency in its ranking performance. As a powerful LLM, GPT-4 is generally expected to provide accurate and reliable rankings; however, it may still be sensitive to the phrasing or structure of prompts Dubois et al. (2023). This inconsistency may lead to suboptimal rankings and potentially affect the overall performance of the Tuna model. In future work, it would be beneficial to design more robust prompts that can mitigate the impact of GPT-4's ranking inconsistencies. Another limitation is the evaluation benchmark. In this paper, we evaluated the Tuna model on three benchmarks, which provided a diverse range of tasks and challenges. However, it is unclear how well the Tuna model would generalize to other types of tasks, domains, or languages. Further research is needed to explore the applicability of the Tuna model to a broader range of problems and settings. The last limitation is the reliance on the use of proprietary LLMs, such as GPT-4 and text-davinci-003, for generating responses and rankings. This dependency may limit the accessibility and reproducibility of our method for researchers who do not have access to these proprietary models. Developing alternative methods that can leverage open-source LLMs or other ranking mechanisms would be a valuable direction for future research. ## Acknowledgements We would like to thank reviewers for their valuable feedback. 
This research/project is supported by Ministry of Education, Singapore, under its Tier 3 Programme (The Award No.: MOET320200004), the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Program (AISG Award No: AISG2-RP-2020-016), and Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOET2EP20122-0011). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the Ministry of Education, Singapore.
2305.08275
ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding
Recent advancements in multimodal pre-training have shown promising efficacy in 3D representation learning by aligning multimodal features across 3D shapes, their 2D counterparts, and language descriptions. However, the methods used by existing frameworks to curate such multimodal data, in particular language descriptions for 3D shapes, are not scalable, and the collected language descriptions are not diverse. To address this, we introduce ULIP-2, a simple yet effective tri-modal pre-training framework that leverages large multimodal models to automatically generate holistic language descriptions for 3D shapes. It only needs 3D data as input, eliminating the need for any manual 3D annotations, and is therefore scalable to large datasets. ULIP-2 is also equipped with scaled-up backbones for better multimodal representation learning. We conduct experiments on two large-scale 3D datasets, Objaverse and ShapeNet, and augment them with tri-modal datasets of 3D point clouds, images, and language for training ULIP-2. Experiments show that ULIP-2 demonstrates substantial benefits in three downstream tasks: zero-shot 3D classification, standard 3D classification with fine-tuning, and 3D captioning (3D-to-language generation). It achieves a new SOTA of 50.6% (top-1) on Objaverse-LVIS and 84.7% (top-1) on ModelNet40 in zero-shot classification. In the ScanObjectNN benchmark for standard fine-tuning, ULIP-2 reaches an overall accuracy of 91.5% with a compact model of only 1.4 million parameters. ULIP-2 sheds light on a new paradigm for scalable multimodal 3D representation learning without human annotations and shows significant improvements over existing baselines. The code and datasets are released at https://github.com/salesforce/ULIP.
Le Xue, Ning Yu, Shu Zhang, Artemis Panagopoulou, Junnan Li, Roberto Martín-Martín, Jiajun Wu, Caiming Xiong, Ran Xu, Juan Carlos Niebles, Silvio Savarese
2023-05-14T23:14:09Z
http://arxiv.org/abs/2305.08275v4
# ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding ###### Abstract Recent advancements in multimodal pre-training methods have shown promising efficacy in 3D representation learning by aligning multimodal features across 3D shapes, their 2D counterparts, and language descriptions. However, the methods used by existing multimodal pre-training frameworks to gather multimodal data for 3D applications lack scalability and comprehensiveness, potentially constraining the full potential of multimodal learning. The main bottleneck lies in the language modality's scalability and comprehensiveness. To address this, we introduce ULIP-2, a tri-modal pre-training framework that leverages state-of-the-art large multimodal models to automatically generate holistic language counterparts for 3D objects. It does not require any 3D annotations, and is therefore scalable to large datasets. We conduct experiments on two large-scale 3D datasets, Objaverse and ShapeNet, and augment them with tri-modal datasets of 3D point clouds, images, and language for training ULIP-2. ULIP-2 achieves significant improvements on downstream zero-shot classification on ModelNet40 (**74.0% in top-1 accuracy**); on the real-world ScanObjectNN benchmark, it obtains **91.5% in overall accuracy** with only 1.4 million parameters, signifying a breakthrough in scalable multimodal 3D representation learning without human 3D annotations. The code, along with the generated tri-modal datasets, can be found at [https://github.com/salesforce/ULIP](https://github.com/salesforce/ULIP). ## 1 Introduction 3D visual understanding has seen a surge of interest in recent years [1; 2; 3; 4] due to its growing applications in augmented reality and virtual reality (AR and VR) [5; 6; 7; 8], autonomous driving [9; 10], metaverse, and robotics [11; 12]. Despite this, the collections and annotations of 3D data remain a costly and labor-intensive process [13; 14; 15]. In response to this challenge, researchers have turned to other more abundantly available modalities, _e.g._, image and natural language, to provide supervisory signals for learning 3D representations. This approach has not only led to improved single-modal representation abilities but also cultivated a richer multimodal representation capability. The results have been promising, and to some extent, have alleviated the need for single-modal dense annotations in the 3D domain. However, multimodal learning frameworks in this direction commonly face the challenge of assembling scalable, high-quality, and well-aligned multimodal data for 3D applications. We identify the language modality for 3D as the critical bottleneck in this process. Existing frameworks tend to utilize category names and short descriptions derived from metadata as the language counterparts for the 3D data. Those approaches, however, lack scalability as they always rely on some extent of human annotations and the dataset collection process, which will be hard to scale up. Furthermore, existing methods are not comprehensive enough as the derived language information might not provide sufficient details and lacks variations. This highlights the need for an innovative paradigm to generate language counterparts for 3D data that are both scalable and comprehensive, thereby truly harnessing the potential of multimodal learning. However, the optimal way to acquire and utilize language data is unclear. 
Although well-trained human annotators could potentially provide detailed language descriptions of 3D objects, such a method is both costly and lacks scalability. Moreover, identifying the appropriate language counterpart modality for a 3D object is not a straightforward task. To address these issues, we first reconsider what the 2D image counterpart modality for a 3D object should be. Semantically, if we can render 2D images of a 3D object from any viewpoint, the collection of all these rendered images should approximately encapsulate all information about this 3D object, thus forming an appropriate image counterpart modality for 3D. By analogy, if we can linguistically describe a 3D object from any viewpoint, the compilation of all these language descriptions from all perspectives should also approximately encompass all linguistically expressible information about this object, thus forming an appropriate language modality for the 3D object. In practice, for efficiency, we may sample a finite fixed set of holistic viewpoints instead of "any viewpoint". If we apply the same set of viewpoints for creating the language modality as we render the images, this task naturally boils down to describing the rendered 2D image for a given viewpoint. Given the exciting advancements in large multimodal models, which have been trained on a wealth of enriched language data and thus have the capacity to generate detailed descriptions from images, we propose to utilize these models for this task. This approach not only allows us to fully automate the process in a scalable manner but also leverages the detailed and comprehensive language generation capabilities of the large multimodal models, thereby potentially enhancing the language details and comprehensiveness for 3D objects. Figure 1: An illustration of language description generation from 2D images. These images are rendered from a set of holistic viewpoints of a 3D object. In some views, the chair is not visible, while in other views, the sword/scepter cannot be seen. Combining descriptions of all views is essential for the model to learn comprehensive and holistic information about the 3D object. In light of the preceding reasoning, and also in response to the challenge of scalable and comprehensive multimodal 3D data acquisition, we introduce ULIP-2, a novel framework that encompasses an innovative approach to generate well-aligned, holistic multimodal data for 3D understanding, coupled with an efficient multimodal pre-training architecture capable of aligning this multimodal data, thereby harnessing the full potential of multimodal learning. Given a 3D object, our initial step involves extracting 3D point cloud data to serve as the 3D modality input. We then render this object into a series of images from a fixed set of holistic viewpoints, providing the 2D modality input. For each rendered image, we employ a large multimodal model to generate detailed descriptions, thereby establishing the language modality (as illustrated in Figure 1). This approach allows us to create scalable multimodal data for 3D, as it only necessitates the 3D data itself. Furthermore, by generating descriptions from a comprehensive set of holistic views, we address the prior issues of detail and comprehensiveness in the language modality. By then employing an efficient multimodal pre-training architecture to align this multimodal data, we facilitate the learning of a comprehensive multimodal 3D representation, as described in Figure 2.
Consequently, ULIP-2 offers a promising solution for scalable and comprehensive multimodal pre-training for 3D representation learning. Our paper has three main contributions: 1. It enables scalable multimodal pre-training without necessitating any human annotations. ULIP-2 is applicable to any 3D dataset, regardless of whether the data is labeled or not, since it requires only the 3D data itself. 2. It obtains considerable improvement in learning multi-modal representations. On the challenging ScanObjectNN benchmark, ULIP-2 achieves an overall accuracy of **91.5%** using only 1.4 million parameters. It also achieves **74.0%** in top-1 accuracy for zero-shot classification on ModelNet40. Moreover, ULIP-2 can effectively synergize with the ever-increasing capacity of 3D data and the development of large multimodal models. 3. For two large-scale 3D datasets, Objaverse and ShapeNet, we release the generated triplets of point clouds, images, and language, as "**ULIP-Objaverse Triplets**" and "**ULIP-ShapeNet Triplets**". The statistics of these datasets can be found in Table 1. To our knowledge, we are the first to release such large-scale, aligned, tri-modal datasets for 3D understanding. Figure 2: Overview of the ULIP-2 framework. ULIP-2 employs a large multimodal model to generate detailed descriptions for each 2D-rendered image from holistic viewpoints of a 3D object. ULIP-2 takes advantage of a pre-aligned and fixed Vision-Language feature space to achieve alignment among the triplet modalities: descriptive texts, images, and 3D point clouds. ## 2 Related Work **Multimodal Representation Learning**. In recent years, multimodal representation learning has emerged as a popular research topic due to its remarkable capabilities and applications. Most research focuses on learning multimodal representations for only two modalities, language and images, which has led to remarkable outcomes. One line of research in this area emphasizes the interaction between image regions and caption tokens using Transformer-based architectures [16; 17; 18; 19], which exhibit strong predictive capabilities but are computationally expensive to train. Alternatively, methods such as CLIP [20] and SLIP [21] aim to generate single features for image and text independently and subsequently align these two modalities. This simplified architecture promotes robust and efficient large-scale pre-training, even on noisy data. Recent works have demonstrated promising results by extending multimodal representation learning to the 3D modality. ULIP [22] is one of the pioneering works in creating (3D point cloud - image - language) triplets. By aligning these three modalities together, ULIP enhances 3D representation learning and mitigates the need for single-modal dense 3D data annotations, thereby partially alleviating the data scarcity issue in 3D. A recent work [23] seeks to learn 3D representations from pre-trained 2D encoders via Image-to-Point Masked Autoencoders. However, this approach does not involve alignment with the language modality, which potentially limits its capacity for more complex multimodal tasks. The concurrent work of [24] unifies the contrastive and generative modeling paradigms, but it shares similar scalability constraints with ULIP.
Moreover, neither of these two methods is model-agnostic; both require transformer architectures as the 3D encoder, which inherently increases the needed model size, while ULIP-2 is model-agnostic and can be integrated with any 3D backbone, thus offering a more flexible and scalable solution for multimodal 3D representation learning. Despite the development of methods such as ULIP to reduce the single-modal dense annotation effort, they still confront scalability challenges due to their dependency on dataset metadata and category names for obtaining the language counterpart modality. Additionally, the prompt-based pseudo-captions generated by these methods lack the fine-grained information, details, and variations that are necessary for comprehensive understanding. In contrast, ULIP-2 overcomes these limitations by leveraging the power of state-of-the-art large multimodal models. This approach fundamentally improves scalability and diminishes data requirements, thereby enabling more efficient applications on larger datasets. **Generative Large Multimodal Models**. Scaling up language model pre-training has proven to be an effective way to improve performance on generative language tasks such as question answering. This line of work typically increases the size of transformer models, scaling up parameters and FLOPS-per-token roughly in proportion: from 213M parameters in GPT [25], to 300M parameters in BERT [26], to 1.5B parameters in GPT-2 [27], to 8B parameters in Megatron-LM [28], to 11B parameters in T5 [29], to 175B parameters in GPT-3 [30], and most recently to GPT-4 [31]. GPT-4 in particular accepts images as inputs and enables image captioning and visual Q&A. This framework was originally proposed by Anderson et al. [32] as a large multimodal model, which first learns image and language encoders via multimodal learning, and then attaches language decoders to language encoders to generate language texts. Follow-up works improve visual grounding for text generation through cross-attention between image encoders and text encoders [33; 34; 35; 36; 37; 18; 38; 39; 40]. In this paper, in order to eliminate the dependency on manual language annotations for 3D datasets, which are often incomplete, noisy, and unscalable, we harness a multimodal generative model, BLIP-2 [40], to generate expressive descriptions for 2D renderings of 3D objects. Because of its automation, we are able to generate a variety of annotations for a dense set of holistic viewpoints, which benefits multimodal 3D representation learning. \begin{table} \begin{tabular}{l c c} \hline \hline **Modality** & **ULIP - Objaverse Triplets** & **ULIP - ShapeNet Triplets** \\ \hline 3D Point Clouds & \(\sim\) 800k & \(\sim\) 52.5k \\ Images & \(\sim\) 10 million & \(\sim\) 3 million \\ Language Descriptions & \(\sim\) 100 million & \(\sim\) 30 million \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of ULIP - Objaverse Triplets and ULIP - ShapeNet Triplets. **3D Point Cloud Understanding**. There are two primary approaches to 3D point cloud modeling. The first approach involves projecting 3D point clouds into voxel or grid-based formats [41; 42], followed by 2D or 3D convolutions for feature extraction. On the other hand, PointNet [43] serves as a pioneer in directly ingesting 3D point clouds and extracting permutation-invariant feature representations [23].
PointNet++ [44] proposes a hierarchical neural network that can extract local features with increasing contextual scales. More recently, PointNeXt [45] has emerged as a lightweight, modern version of PointNet++, demonstrating promising results. Furthermore, self-supervised learning for 3D point cloud understanding has shown encouraging outcomes. For example, Point-BERT [46] adopts a masking strategy for self-supervised learning with 3D point clouds, drawing inspiration from BERT [26] in the language domain. Similar to ULIP [22], our approach is orthogonal to the aforementioned approaches, implying that improvements in their methods could potentially enhance our method as well. ## 3 Method ULIP-2 assimilates the pre-training framework of ULIP and introduces a scalable and comprehensive multimodal triplet creation technique that eliminates the need for human language annotations. This innovation allows ULIP-2 to combine the efficiency of ULIP's multimodal pre-training with a scalable triplet creation method. As a result, ULIP-2 facilitates large-scale pre-training without any manual effort, mirroring a pseudo self-supervised learning approach. We demonstrate that this method effectively mitigates the data scalability issue, and simultaneously advances the field of multimodal representation learning for 3D understanding to new levels of performance. ### Preliminary: ULIP ULIP [22] presents an efficient multimodal pre-training framework that constructs triplets encompassing three modalities: (1) the 3D modality, obtained by extracting 3D point cloud data; (2) the image modality, generated by rendering images from 3D object files across multiple viewpoints; and (3) the language modality, derived by prompting dataset metadata such as descriptive terms and category names into cohesive sentences. ULIP leverages the powerful pre-trained vision-language model, SLIP [21], to learn 3D representations. It accomplishes this by aligning 3D modality features to the feature space shared by language and image modalities. ULIP-2 shares a similar objective with ULIP in aligning the (image, text, 3D) modalities, which prompts us to adopt its pre-training framework. Given the close resemblance in setup between ULIP and ULIP-2, we choose ULIP as our experimental baseline. ### Scalable Triplet Creation In ULIP-2, the model similarly utilizes three input modalities, though it only requires the 3D object data itself. As depicted in Fig. 2, given a 3D object, we extract 3D point clouds from the surface as the input to the 3D encoder and generate images from various viewing angles. We then leverage BLIP-2 [40], a cutting-edge large multimodal model, to generate descriptive texts for each rendered 2D image. For each image, we generate a set of sentences, rank them, and aggregate the top-\(k\) sentences to form the language modality in the triplet. This scalable triplet creation approach facilitates dataset scaling, eliminating the need for dataset metadata collection and necessitating only the 3D data itself. Our method is capable of aligning 3D representations with holistic image-text pairs in any unannotated dataset, thereby providing a more comprehensive and scalable solution for 3D understanding. ### Tri-modal Pre-training ULIP-2 aligns the triplet of 3D point clouds, 2D rendered images, and comprehensive descriptions to a unified feature space. We adopt a powerful pre-trained vision language model SLIP [21] and freeze it during the pre-training. 
The feature space, already pre-aligned by SLIP, serves as the target space where we aim to integrate the 3D modality. During tri-modal pre-training, given a 3D object \(\mathbf{O}\), we extract its 3D point cloud \(\mathbf{P}\), randomly sample its 2D rendered image \(\mathbf{I}\sim\mathsf{render}(\mathbf{O})\), and generate its language description \(\mathbf{T}\sim\mathsf{blip2}(\mathbf{I})\), where render is the 3D-to-2D rendering operation and \(\mathsf{blip2}\) is to query BLIP-2 [40] for image description. We then extract the image feature \(\mathbf{f}^{\mathbf{I}}=E_{\mathbf{I}}(\mathbf{I})\) and text feature \(\mathbf{f}^{\mathbf{T}}=E_{\mathbf{T}}(\mathbf{T})\) based on the pre-aligned and fixed image encoder \(E_{\mathbf{I}}\) and text encoder \(E_{\mathbf{T}}\) in SLIP [21]. We target to train a 3D point cloud encoder \(E_{\mathbf{P}}\) such that its 3D feature \(\mathbf{f}^{\mathbf{P}}=E_{\mathbf{P}}(\mathbf{P})\) is aligned with its image and text features. We formulate the 3D-to-image alignment using the contrastive loss similar in spirit to CLIP [20]: \[\mathcal{L}_{\mathtt{P2I}}=-\frac{1}{2}\sum_{i}\log\frac{\exp(\mathbf{f}_{i}^{ \mathbf{P}}\mathbf{f}_{i}^{\mathbf{I}}/\tau)}{\sum_{j}\exp(\mathbf{f}_{i}^{ \mathbf{P}}\mathbf{f}_{j}^{\mathbf{I}}/\tau)}+\log\frac{\exp(\mathbf{f}_{i}^{ \mathbf{P}}\mathbf{f}_{i}^{\mathbf{I}}/\tau)}{\sum_{j}\exp(\mathbf{f}_{j}^{ \mathbf{P}}\mathbf{f}_{i}^{\mathbf{I}}/\tau)} \tag{1}\] where \(i\), \(j\) are the sampling indices, and \(\tau\) is a learnable temperature parameter. The first term indicates that the dot product of the 3D feature and the image feature of the same sample should stand out among other products where the _image features_ are from different samples. Likewise, the second term indicates that the dot product of the 3D feature and the image feature of the same sample should stand out among other products where the _3D features_ are from different samples. Similarly, we formulate the 3D-to-text alignment loss as: \[\mathcal{L}_{\mathtt{P2T}}=-\frac{1}{2}\sum_{i}\log\frac{\exp(\mathbf{f}_{i}^{ \mathbf{P}}\mathbf{f}_{i}^{\mathbf{T}}/\tau)}{\sum_{j}\exp(\mathbf{f}_{i}^{ \mathbf{P}}\mathbf{f}_{j}^{\mathbf{T}}/\tau)}+\log\frac{\exp(\mathbf{f}_{i}^{ \mathbf{P}}\mathbf{f}_{i}^{\mathbf{T}}/\tau)}{\sum_{j}\exp(\mathbf{f}_{j}^{ \mathbf{P}}\mathbf{f}_{i}^{\mathbf{T}}/\tau)} \tag{2}\] Our final training objective is to train the 3D encoder \(E_{\mathbf{P}}\) that minimizes the sum of the two contrastive alignment losses above: \[\min_{E_{\mathbf{P}}}\mathcal{L}_{\mathtt{P2I}}+\mathcal{L}_{\mathtt{P2T}} \tag{3}\] ## 4 Experiments ### ULIP-Objaverse Triplets and ULIP-ShapeNet Triplets Creation We extract triplets of 3D point clouds, images, and language descriptions based on two large-scale datasets of 3D objects. The first dataset is Objaverse [47], the recently released and largest-scale realistic 3D dataset. It has approximately 800K real-world 3D objects, each of which is associated with metadata containing a "name" field. For each 3D object, we use Blender [48] to render 12 images, spaced equally by 360/12 degrees. For each rendered image, we employ BLIP-opt6.7B in BLIP-2 [40] to generate 10 detailed descriptions, which are then ranked using CLIP-VIT-Large [20] image-text similarity score. Based on an ablation study in Sec. 5.3, we choose to use an ensemble of the top 5 descriptions as the language modality input. We extract 8k and 2k points from each 3D object to accommodate different downstream tasks. 
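A minimal sketch of this per-view captioning and ranking step is shown below, assuming the Hugging Face implementations of BLIP-2 and CLIP; the checkpoint names, sampling settings, and function names are illustrative rather than the exact pipeline used to build the released triplets.

```python
import torch
from PIL import Image
from transformers import (Blip2Processor, Blip2ForConditionalGeneration,
                          CLIPProcessor, CLIPModel)

blip_proc = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b")
blip = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-6.7b", torch_dtype=torch.float16).to("cuda")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").to("cuda")

@torch.no_grad()
def captions_for_view(image: Image.Image, n_candidates: int = 10, top_k: int = 5):
    # Generate candidate captions for one rendered view with BLIP-2.
    inputs = blip_proc(images=image, return_tensors="pt").to("cuda", torch.float16)
    out = blip.generate(**inputs, do_sample=True, top_p=0.9,
                        num_return_sequences=n_candidates, max_new_tokens=40)
    captions = [blip_proc.decode(ids, skip_special_tokens=True).strip() for ids in out]
    # Rank the candidates by CLIP image-text similarity and keep the top_k.
    clip_in = clip_proc(text=captions, images=image, return_tensors="pt",
                        padding=True, truncation=True).to("cuda")
    sims = clip(**clip_in).logits_per_image.squeeze(0)   # (n_candidates,)
    keep = sims.topk(top_k).indices.tolist()
    return [captions[i] for i in keep]
```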
Our generated well-paired triplets of comprehensive descriptions, 2D rendered images, and 3D point clouds are released as **ULIP-Objaverse Triplets**. The second dataset is ShapeNet [13], a renowned synthetic dataset. We employ its publicly available subset which has around 52.5K 3D objects with 55 annotated categories. For each object, we sample 30 equally spaced view angles and, for each view angle, render an RGB image and a depth map. The image description method is the same as that in Objaverse. We release these triplets as **ULIP-ShapeNet Triplets**. ### Downstream Tasks We conduct experiments on two downstream tasks: (1) the zero-shot 3D classification task involving multimodal inputs and (2) the standard 3D classification task involving a single modality. We compare ULIP-2 to existing methods and show that it improves downstream task performance by a significant margin. In addition to its remarkable performance, ULIP-2 also offers a significant advantage in that it does not require any human annotation during the pre-training process. This eliminates a substantial amount of manual labor typically associated with such tasks, further underscoring the scalability and efficiency of our approach. We use the ModelNet40 [13] and ScanObjectNN [49] datasets to benchmark ULIP-2. ModelNet40 is a synthetic CAD model dataset. It contains 9,843 training samples and 2,468 testing samples. ScanObjectNN is a real-world 3D dataset with 2,902 objects under 15 categories. We follow the same dataset setup and preparation protocols used in ULIP, ensuring consistency in our comparisons. ### Setup **Evaluation Metrics**. We adopt the same evaluation metrics used in ULIP: top-1 and top-5 accuracy for the zero-shot 3D classification task; overall accuracy and class average accuracy for the standard 3D classification task. **Backbones**. We pre-train ULIP-2 on two representative backbones: **Point-BERT** [46] is a transformer-based backbone that exhibits strong performance in ULIP's zero-shot classification experiments. **PointNeXt** [45] is a recent work that proposes a lightweight backbone based on PointNet++ [44] and delivers promising results on the ScanObjectNN benchmark. **Pre-training Details**. ULIP-2 is pre-trained on 8 Nvidia A100 GPUs (40G) with a batch size of 64 and a learning rate of 1e-3. We pre-train for 50 epochs on Objaverse and 250 on ShapeNet, taking 1.5 days and 12 hours respectively. The final checkpoints are used for downstream tasks. ### Experimental Results **Zero-Shot 3D Classification**. We follow the same procedure as in ULIP for zero-shot 3D classification. We present the zero-shot 3D classification results on ModelNet40 in Table 2. First, we observe that, benefiting from pre-training, both PointNeXt and Point-BERT obtain significantly better results than PointCLIP. Moreover, ULIP-2 has a significant improvement margin over ULIP on both datasets. Specifically, on ShapeNet, ULIP-2 improves top-1 accuracy over ULIP by **8.3%** and **6.0%** with PointNeXt and Point-BERT respectively. On Objaverse, the improvements are on a similar scale. This validates the effectiveness of our holistic-view language descriptions in boosting the representation capability during pre-training. In particular, unique captions per 2D view enrich the language descriptions of a 3D object, in turn enhancing the language-3D alignment. **Standard 3D Classification**. We adhere to ULIP and community protocols for standard 3D classification.
We present 3D classification results on the ScanObjectNN hardest set in Table 3. It is observed that ULIP-2 (using the Point-BERT backbone) improves the baseline method (no multimodal pre-training) by 3.6% on ShapeNet and by 5.9% on Objaverse. By using the PointNeXt backbone, ULIP-2 obtains 91.5% overall accuracy and sets a new record on the ScanObjectNN benchmark. We therefore confirm the generalizable benefits of holistic-view language descriptions regardless of pre-training datasets or encoder backbones. \begin{table} \begin{tabular}{l l l l c c} \hline \hline \multirow{2}{*}{**Model**} & **Pre-training** & **Pre-training** & **Manual** & \multicolumn{2}{c}{**Accuracy**} \\ & **dataset** & **method** & **captions?** & **top-1** & **top-5** \\ \hline PointCLIP [50] & \multicolumn{3}{c}{No multimodal pre-training} & \multicolumn{2}{c}{20.2} & – \\ \hline \multirow{4}{*}{PointNeXt [45]} & ShapeNet [13] & ULIP [22] & ✓ & 56.2 & 77.0 \\ & ULIP-2 (ours) & ✗ & 64.5 (\(\uparrow 8.3\)) & 81.3 (\(\uparrow 4.3\)) \\ \cline{2-6} & Objaverse [47] & ULIP [22] & ✓ & 40.3 & 70.0 \\ & ULIP-2 (ours) & ✗ & 49.0 (\(\uparrow 8.7\)) & 79.7 (\(\uparrow 9.7\)) \\ \hline \multirow{4}{*}{Point-BERT [46]} & ShapeNet [13] & ULIP [22] & ✓ & 60.4 & 84.0 \\ & ULIP-2 (ours) & ✗ & 66.4 (\(\uparrow 6.0\)) & 87.7 (\(\uparrow 3.7\)) \\ \cline{1-1} \cline{2-6} & Objaverse [47] & ULIP [22] & ✓ & 67.2 & 83.1 \\ \cline{1-1} & Objaverse [47] & ULIP-2 (ours) & ✗ & 70.2 & 87.0 \\ \cline{1-1} & ULIP-2 * (ours) & ✗ & **74.0** (\(\uparrow 6.8\)) & **90.0** (\(\uparrow 6.9\)) \\ \hline \hline \end{tabular} \end{table} Table 2: Zero-shot 3D classification on ModelNet40. * indicates it uses ensembled test-set class names. ## 5 Ablation Study ### Different Vision-Language Models Considering that the quality of the language descriptions generated by large multimodal models plays an important role in 3D representation pre-training, we conduct an ablation study over two such models. We use BLIP-2 throughout the benchmarking experiments above. We hereby compare it to its earlier version, BLIP [39], for the zero-shot 3D classification task using the ShapeNet pre-training dataset and the Point-BERT backbone. Results in Table 4 show that using BLIP-2 generated descriptions achieves slightly better results, thanks to its evolved vision-language alignment capability, suggesting that as large multimodal models advance, the performance of ULIP-2 can be expected to improve correspondingly. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Large multimodal models**} & \multicolumn{2}{c}{**Accuracy**} \\ & & **top-1** & **top-5** \\ \hline \multirow{2}{*}{Point-BERT w/ ULIP-2} & BLIP [39] & 64.1 & 87.3 \\ & BLIP-2 [40] & **66.4** & **87.7** \\ \hline \hline \end{tabular} \end{table} Table 4: Zero-shot 3D classification on ModelNet40. Pre-trained on ShapeNet for computational efficiency.
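For reference, the zero-shot evaluation protocol used throughout these comparisons can be sketched as follows; the prompt template and encoder interfaces are our assumptions, following the CLIP-style procedure that ULIP adopts.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(point_feat, text_encoder, categories,
                       templates=("a point cloud of a {}.",)):
    """CLIP-style zero-shot 3D classification.

    point_feat:   (B, D) features from the trained 3D encoder E_P.
    text_encoder: callable mapping a list of strings to an (N, D) tensor of
                  features from the frozen text encoder E_T.
    categories:   list of class names (e.g., the ModelNet40 category names).
    """
    weights = []
    for name in categories:
        prompts = [t.format(name) for t in templates]
        emb = F.normalize(text_encoder(prompts), dim=-1).mean(dim=0)
        weights.append(F.normalize(emb, dim=-1))
    W = torch.stack(weights)                            # (num_classes, D)
    logits = F.normalize(point_feat, dim=-1) @ W.t()    # cosine similarity to each class
    return logits.argmax(dim=-1)                        # predicted class indices
```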
\begin{table} \begin{tabular}{l l c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**\# Params**} & \multicolumn{1}{c}{**Overall**} & \multicolumn{1}{c}{**Class-average**} \\ & & **(M)** & **accuracy** & **accuracy** \\ \hline PointNet [43] & & 3.5 & 68.2 & 63.4 \\ PointNet++ [44] & & 1.5 & 77.9 & 75.4 \\ DGCNN [51] & & 1.8 & 78.1 & 73.6 \\ MVTN [52] & & 11.2 & 82.8 & – \\ RepSurf-U [53] & & 1.5 & 84.6 & – \\ Point-MAE [54] & & 22.1 & 85.2 & – \\ PointMLP [55] & & 12.6 & 85.7 & 84.4 \\ Point-M2AE [56] & & 15.3 & 86.43 & – \\ PointCMY [57] & & 12.6 & 86.7 & 84.8 \\ ACT [58] & & 22.1 & 88.21 & – \\ P2P [59] & & & - & 89.3 & – \\ I2P-MAE [23] & & 12.9 & 90.11 & – \\ \hline \multirow{4}{*}{Point-BERT [46]} & \multirow{2}{*}{ShapeNet [13]} & \multirow{2}{*}{\begin{tabular}{c} **Pre-training** \\ **methods** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Manual** \\ **captions?** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} \\ \end{tabular} } \\ & & & & \\ \hline \multirow{4}{*}{Point-BERT [46]} & \multirow{2}{*}{ShapeNet [13]} & ULIP [22] & ✓ & 22.1 & 86.4 & – \\ & ULIP-2 (ours) & ✗ & 22.1 & 86.7(\(\uparrow 3.6\)) & – \\ \cline{2-6} & \multirow{2}{*}{Obiverse [47]} & ULIP [22] & ✓ & 22.1 & 88.7 & – \\ & ULIP-2 (ours) & ✗ & 22.1 & 89.0(\(\uparrow 5.9\)) & – \\ \hline \multirow{4}{*}{PointNeXt [45]} & \multirow{2}{*}{ShapeNet [13]} & – & – & 1.4 & 87.5 & 85.9 \\ & ULIP & 22 & ✓ & 1.4 & 89.7 & 88.6 \\ \cline{1-1} & \multirow{2}{*}{Obiverse [47]} & ULIP [22] & ✓ & 1.4 & 90.1 & 89.2 \\ \cline{1-1} & & ULIP-2 (ours) & ✗ & 1.4 & 90.8(\(\uparrow 3.3\)) & 90.3(\(\uparrow 4.4\)) \\ \cline{1-1} & & ULIP-2 * (ours) & ✗ & 1.4 & **91.5**(\(\uparrow 4.0\)) & **91.2**(\(\uparrow 5.3\)) \\ \hline \hline \end{tabular} \end{table} Table 3: 3D classification results on ScanObjectNN. ULIP-2 significantly outperforms the baselines. * means the voting technique [55] is used. The green numbers following \(\uparrow\) indicate the amounts of improvement of our ULIP-2 over the corresponding ULIP baseline. ### Number of 2D Views Per 3D Object We further perform an ablation study for zero-shot 3D classification w.r.t. the number of 2D views per 3D object in pre-training. Results in Table 5 demonstrate that, with the increase of the number of views, zero-shot classification accuracy increases accordingly. This validates our statement that diverse language descriptions of holistic views benefit multimodal 3D representation learning. ### Top-\(k\) Captions Per 2D View To investigate the pre-training sensitivity w.r.t. the number of captions per view being used, we conduct an ablation study on Point-BERT with ULIP-2 pre-trained on ShapeNet for zero-shot 3D classification. Results in Table 6 show the insensitivity regardless of the number of top-\(k\) captions being selected per 2D view (in total 10 captions). As a result, without losing generality, we use top-5 descriptions per 2D rendering throughout our experiments. ## 6 Conclusion and Discussion We have introduced ULIP-2, a framework for multimodal 3D representation learning. It utilizes large multimodal models to generate comprehensive language descriptions of 3D objects, enabling us to overcome the limitations of existing 3D object datasets with regard to language descriptions' quality and scalability. 
Combining this with an efficient multimodal pre-training framework and pre-training on our triplets of point clouds, images, and language from two large-scale 3D datasets, we demonstrate substantial and consistent improvements in zero-shot and standard 3D object classification over previous methods. Furthermore, our framework achieves a new state-of-the-art performance on the ScanObjectNN challenge leaderboard with a minimal number of parameters. To encourage future research, we release our pre-training multimodal triplets as "ULIP-Objaverse Triplets" and "ULIP-ShapeNet Triplets." **Broader Impact.** This work is introduced for 3D multimodal pre-training without leveraging any human annotation effort. It has positive impacts such as reducing human labor. However, reducing human labor may also cause negative consequences such as job loss or displacement, particularly amongst low-skilled workers who may be most in need of gainful employment. The negative impact is not specific to this work and should be addressed broadly in the field of AI research. **Limitations.** Our training data comes from public research datasets and possibly contains biased information. Furthermore, the large multimodal models that we use to generate descriptions are trained on public language data and may generate biased language. The model could be improved if more comprehensive and less biased open-source datasets become available, as this would help prevent the generation of improper content. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & **\# captions** & **Top-\(k\)** & \multicolumn{2}{c}{**Accuracy**} \\ & **generated** & **selected** & **top-1** & **top-5** \\ \hline \multirow{3}{*}{Point-BERT w/ ULIP-2} & & 3 & **66.7** & 87.2 \\ & & 5 & 66.4 & **87.7** \\ \cline{1-1} & & 10 & 66.3 & 85.1 \\ \hline \hline \end{tabular} \end{table} Table 6: Zero-shot 3D classification on ModelNet40, pre-trained on ShapeNet. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & **Pre-training dataset** & **\# Views** & \multicolumn{2}{c}{**Accuracy**} \\ & & & **top-1** & **top-5** \\ \hline \multirow{4}{*}{Point-BERT w/ ULIP-2} & & 2 & 61.5 & 81.4 \\ & & 6 & 67.9 & 87.0 \\ & & 12 & **70.2** & **87.0** \\ \cline{2-5} & & 2 & 55.5 & 77.1 \\ \cline{1-1} & & 15 & 64.2 & 85.3 \\ \cline{1-1} & & 30 & **66.4** & **87.7** \\ \hline \hline \end{tabular} \end{table} Table 5: Zero-shot 3D classification on ModelNet40, pre-trained on different numbers of views. Acknowledgment. We would like to extend our sincere thanks to Dongxu Li for his assistance with the Lavis Library.
2307.09712
Planetesimal Accretion at Short Orbital Periods
Formation models in which terrestrial bodies grow via the pairwise accretion of planetesimals have been reasonably successful at reproducing the general properties of the solar system, including small body populations. However, planetesimal accretion has not yet been fully explored in the context of the wide variety of recently discovered extrasolar planetary systems, particularly those that host short-period terrestrial planets. In this work, we use direct N-body simulations to explore and understand the growth of planetary embryos from planetesimals in disks extending down to ~1 day orbital periods. We show that planetesimal accretion becomes nearly 100 percent efficient at short orbital periods, leading to embryo masses that are much larger than the classical isolation mass. For rocky bodies, the physical size of the object begins to occupy a significant fraction of its Hill sphere towards the inner edge of the disk. In this regime, most close encounters result in collisions, rather than scattering, and the system does not develop a bimodal population of dynamically hot planetesimals and dynamically cold oligarchs, like is seen in previous studies. The highly efficient accretion seen at short orbital periods implies that systems of tightly-packed inner planets should be almost completely devoid of any residual small bodies. We demonstrate the robustness of our results to assumptions about the initial disk model, and also investigate the effects that our simplified collision model has on the emergence of this non-oligarchic growth mode in a planet forming disk.
Spencer C. Wallace, Thomas R. Quinn
2023-07-19T01:47:12Z
http://arxiv.org/abs/2307.09712v1
# Planetesimal Accretion at Short Orbital Periods ###### Abstract Formation models in which terrestrial bodies grow via the pairwise accretion of planetesimals have been reasonably successful at reproducing the general properties of the solar system, including small body populations. However, planetesimal accretion has not yet been fully explored in the context of the wide variety of recently discovered extrasolar planetary systems, particularly those that host short-period terrestrial planets. In this work, we use direct N-body simulations to explore and understand the growth of planetary embryos from planetesimals in disks extending down to \(\simeq 1\) day orbital periods. We show that planetesimal accretion becomes nearly 100 percent efficient at short orbital periods, leading to embryo masses that are much larger than the classical isolation mass. For rocky bodies, the physical size of the object begins to occupy a significant fraction of its Hill sphere towards the inner edge of the disk. In this regime, most close encounters result in collisions, rather than scattering, and the system does not develop a bimodal population of dynamically hot planetesimals and dynamically cold oligarchs, like is seen in previous studies. The highly efficient accretion seen at short orbital periods implies that systems of tightly-packed inner planets should be almost completely devoid of any residual small bodies. We demonstrate the robustness of our results to assumptions about the initial disk model, and also investigate the effects that our simplified collision model has on the emergence of this non-oligarchic growth mode in a planet forming disk. ## 1 Introduction Planetesimal accretion is a key phase in the terrestrial planet growth process, bridging the gap from kilometer-sized bodies up to roughly moon-sized objects known as planetary embryos. In the earliest stages of the planet formation process, beginning from \(\mu\)m sizes, aerodynamic forces dominate the growth and evolution of the solids and statistical models (Johansen et al., 2014; Birnstiel et al., 2016) are appropriate to describe how these numerous, small bodies coagulate. Due to the internal pressure support of the gas disk, the gas itself orbits at sub-Keplerian speed and exerts a headwind on any solids large enough to decouple from the gas (Weidenschilling, 1977). Around a meter in size, this headwind is maximally effective at capping away orbital angular momentum, and planet-building material can fall onto the central star on catastrophically short timescales (Weidenschilling, 1977; Nakagawa et al., 1986). Additionally, laboratory experiments suggest that collisions between mm- to cm- sized solids tend to result in bounces or destruction, rather than continued growth (Blum & Munch, 1993; Colwell, 2003; Beitz et al., 2011). For these reasons, a number of mechanisms to radially concentrate solids in a planet-forming disk have been proposed to facilitate fast growth from mm to km sizes (Johansen et al., 2007; Lyra et al., 2008; Bai & Stone, 2010) in order to surmount these barriers. Interestingly, formation models for the short-period multiplanet systems revealed by Kepler (Fabrycky et al., 2014) also seem to require enhanced concentrations of planet-building material to reproduce the observed architectures (Raymond et al., 2007; Hansen & Murray, 2012). Regardless of how the mm- to km-sized growth barriers are surmounted, gravity begins to dominate and aerodynamic gas drag plays an increasingly unimportant role beyond this size. 
During this phase, collision cross sections are enhanced as gravitational focusing (Safronov, 1969) acts to bend the trajectories of bodies undergoing close encounters. Because gravitational focusing becomes more effective as bodies grow larger, a period of runaway growth occurs (Wetherill & Stewart, 1989; Kokubo & Ida, 1996; Barnes et al., 2009) and a power law spectrum of masses develop. Eventually, the largest bodies (known as oligarchs) dynamically heat the remaining planetesimals, severely limiting further growth (Kokubo & Ida, 1998). The final outcome of this phase is a bimodal population of dynamically cold oligarchs, surrounded by dynamically hot, difficult to accrete residual planetesimals. Lines of evidence suggest that the asteroid belt (Bottke et al., 2005; Morbidelli et al., 2009), Kuiper belt (Duncan et al., 1989; Levison et al., 2008; Sheppard & Trujillo, 2010) and the Oort cloud (Levison et al., 2011) are largely composed of the leftovers of this stage of planet formation. Tidal interactions between protoplanets and the gaseous disk keep eccentricities and inclinations low until the gas disk dissipates, typically over the course of a few Myr (Mamajek, 2009). On a timescale of roughly \(10^{5}\) yr, gravitational perturbations trigger an instability which involves large scale oscillations of the eccentricity and inclinations of bodies as they strongly interact through secular resonances (Chambers and Wetherill, 1998). As a consequence of the instability, the oligarchics are no longer on isolated, stable orbits and coalesce to form Earth-sized planets through a series of extremely energetic giant impacts (Kokubo and Ida, 2002; Raymond et al., 2005, 2006). Due to the relative ease of modeling the early dust coagulation phases and the final giant impact phase, these steps in the terrestrial planet formation process have received most of the attention in the literature. The planetesimal accretion phase, which we will focus on in this paper, is more difficult and expensive to model because there are too many particles to directly track with traditional N-body codes, while the gravitational interactions between the few massive bodies produced by the runaway growth phase (Ida and Makino, 1993; Kokubo and Ida, 1995, 1998) render statistical methods inappropriate. Due to computational expense, N-body simulations of planetesimal accretion are usually modeled in a narrow ring (Kokubo and Ida, 1996, 1998), and the results and timescales are then scaled to suit whatever relevant orbital period is being studied. N-body simulations of terrestrial planet assembly typically begin with a series of neatly spaced oligarchs, whose mass varies smoothly with orbital period. As we will show in this paper, oligarchic growth does not scale to arbitrarily short orbital periods. Given that Systems of Tightly-packed Inner Planets (STIPs) appear to be a common outcome of planet formation (Latham et al., 2011; Lissauer et al., 2011; Rowe et al., 2014), understanding exactly how solids accumulate at short orbital periods is crucial. Although gas-disk driven migration of the planets themselves is often invoked to explain the observed architectures (Izidoro et al., 2017, 2021), we will focus on an in-situ model in this paper. That is, once the planetesimals themselves form, they largely stay in place, and any subsequent large-scale movement of the solids are the result of mutual gravitational interactions. 
The focus of this work will be to understand how the outcome of the planetesimal accretion process scales with orbital period by using a high-powered N-body code to directly follow the growth and evolution of the planetesimals across a wide range of orbital periods (1 to 100 days). In doing so, we will assess whether the typical initial conditions (fully formed, evenly spaced protoplanets, e.g. Raymond et al. (2006)) used in studies of terrestrial planet formation are actually appropriate for understanding STIPs. In section 2 we provide an overview of the theory behind planetesimal accretion and show that assumptions used to derive the well-known modes of growth are only valid at sufficiently long orbital periods. We then motivate the need for N-body simulations to study this problem and describe the code used, along with how our initial conditions were constructed, in section 3. In section 4, we present a parameter study of planetesimal accretion using a series of simulations of narrow annuli that exhibit both oligarchic and non-oligarchic growth. In section 5 we present a set of simulations starting with a wide planetesimal disk and demonstrate that a transition between accretion modes occurs at moderately small orbital periods. Next, we assess the impact of simplifications made to our collision model on this result in section 6. In section 7, we discuss the implications of this multimodal accretion behavior throughout the disk for planet formation models and conclude. ## 2 Overview of planetesimal accretion ### Oligarchic and Runaway Growth We begin our analysis by considering a disk of equal-mass planetesimals with radius \(r_{\rm pl}\), mass \(m_{\rm pl}\) and surface density \(\Sigma_{\rm pl}\). The collision rate in the vicinity of an orbit defined by Keplerian frequency \(\Omega\) can be written as \(n\Gamma v\), where \(n=\Sigma_{\rm pl}\Omega/(2m_{\rm pl}v)\) (where we have assumed that the scale height of the planetesimal disk goes as \(2v/\Omega\)). \(\Gamma\) describes the effective collision cross section and \(v\) is the typical encounter velocity between planetesimals. For a swarm of planetesimals on randomly oriented orbits, \(v\) is typically taken to be the rms velocity, \(\left<v^{2}\right>^{1/2}\), which can be related to the eccentricity and inclination distribution \((e,i)\) in the following way (Lissauer and Stewart, 1993): \[\left<v^{2}\right>^{1/2}=\left(\frac{5}{4}\left<e^{2}\right>^{1/2}+\left<i^{ 2}\right>^{1/2}\right)v_{\rm k}. \tag{1}\] The dynamical interactions between growing planetesimals can be somewhat simplified by scaling the orbital elements of the bodies by the Hill radius \[r_{\rm h}=a\left(\frac{m_{\rm pl}}{3M_{*}}\right)^{1/3}, \tag{2}\] where \(a\) is the semimajor axis and \(M_{*}\) is the mass of the central star. The Hill radius of a body describes the size scale over which the gravity of the growing planetesimal dominates over the gravity of the star. Using equation 2, the eccentricity, inclination and separation between orbiting bodies can be defined as \[e_{\rm h}=\frac{ae}{r_{\rm h}},\,i_{\rm h}=\frac{ai}{r_{\rm h}},\,\tilde{b}=\frac{ a_{2}-a_{1}}{r_{\rm h}}. \tag{3}\] Using this formalism, \(e_{\rm h}\) and \(i_{\rm h}\) describe the radial and vertical excursions of an orbiting body in units of its Hill radius. For \(e_{\rm h}>1\), the random velocity dispersion dominates over the shearing motion across a separation of \(1r_{\rm h}\) and encounters can be treated with a two-body formalism.
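For readers who wish to experiment with this scaling, a short sketch of Eqs. (2)-(3) is given below (Python; the unit conventions and helper names are our own choices, not taken from the paper).

```python
M_SUN_IN_MEARTH = 332946.0   # approximate solar mass in Earth masses

def hill_radius(a, m_pl, m_star=M_SUN_IN_MEARTH):
    """Hill radius, Eq. (2): r_h = a * (m_pl / (3 M_*))^(1/3), in the units of a."""
    return a * (m_pl / (3.0 * m_star)) ** (1.0 / 3.0)

def hill_scaled_elements(a, e, inc, m_pl, m_star=M_SUN_IN_MEARTH):
    """Hill-scaled eccentricity and inclination, Eq. (3): e_h = a*e/r_h, i_h = a*i/r_h."""
    r_h = hill_radius(a, m_pl, m_star)
    return a * e / r_h, a * inc / r_h

def scaled_separation(a1, a2, m_pl, m_star=M_SUN_IN_MEARTH):
    """Orbital separation in Hill units, Eq. (3): b_tilde = (a2 - a1) / r_h."""
    return (a2 - a1) / hill_radius(a1, m_pl, m_star)
```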
Assuming that every collision results in a perfect merger, the growth rate of a planetesimal is given by \[\frac{1}{M}\frac{dM}{dt}=\frac{\Sigma\Omega}{2m_{\rm pl}}\Gamma. \tag{4}\] In the case where the collision cross section depends only on the physical size of the planetesimals, the growth scales sub-linearly with mass and the mass distribution is expected to evolve in an "orderly" fashion, in which mass ratios between bodies tend toward unity. However, bodies larger than \(\sim 100\) km in size are expected to exert a significant gravitational force on each other during encounters and the collision cross section depends on both the size of the bodies and their encounter velocities. In this case, \[\Gamma=\pi r_{\rm pl}^{2}\left(1+v_{\rm esc}^{2}/v^{2}\right) \tag{5}\] (Safronov, 1969), where \(v_{\rm esc}\) is the escape velocity from the two bodies at the point of contact. In the limit that \(v_{\rm esc}\gg v\), it can be shown that \(dM/dt\propto M^{4/3}\), which implies a runaway scenario where growth accelerates with mass. This mode of growth was confirmed with N-body simulations by Kokubo & Ida (1996) and appears necessary to construct protoplanets within the lifetime of a protoplanetary disk (Lissauer, 1987), although one should note that pebble accretion (Lambrechts & Johansen, 2012, 2014; Bitsch et al., 2015) is a viable alternative scenario. Due to the velocity dependence of the gravitational focusing effect, it is not clear how ubiquitous this mode of growth is. In particular, encounter velocities at short orbital periods will be rather large (because \(v\sim v_{\rm k}\)) and the \(v_{\rm esc}\gg v\) condition may not always be satisfied. The effect that a dynamically hot disk has on runway growth will be examined in detail in section 4. An important feature is missing from the model described above, which limits its applicability at late times. Gravitational stirring, which converts Keplerian shear into random motion, raises the typical encounter velocity between planetesimals over time (Weidenschilling, 1989; Ida, 1990) and diminishes the effectiveness of gravitational focusing. As the mass spectrum of the system evolves away from uniformity, these velocity differences become even more pronounced. As the system evolves, it tends toward a state of energy equipartition where \(v\sim m^{1/2}\). For a system of equal mass bodies in which encounters are driven by random motions rather than Keplerian shear (dispersion dominated), the timescale for gravitational stirring is described by the two-body relaxation time (Ida & Makino, 1993) \[t_{\rm relax}=\frac{v^{3}}{4\pi nG^{2}m_{\rm pl}{}^{2}\ln\Lambda}, \tag{6}\] where \(\ln\Lambda\) is the Coulomb logarithm, typically taken to be \(\approx 3\) for a planetesimal disk (Ida, 1990; Stewart & Ida, 2000). Despite the fact that the behavior of gravitational stirring is well-described by a two-body formula, (Ida & Makino, 1993) found that the stirring in a planetesimal disk is actually driven by close encounters, which requires a three-body formalism. As we will show in section 4, gravitational stirring effectively shuts off when the Hill sphere of a body becomes comparable to its physical size. In this case, close encounters tend to result in collisions, and the main pathway for energy exchange between planetesimals and growing protoplanets is unable to operate. Kokubo & Ida (1998) showed that the runaway growth process described above is actually self-limiting. 
As the runaway bodies develop, they become increasingly effective at dynamically heating the remaining planetesimals, which diminishes the gravitational focusing cross sections and throttles the growth rate. Around the time that the mass of the runaway bodies exceeds the mass of the planetesimals by a factor of \(\sim 50-100\)(Ida & Makino, 1993) a phase of less vigorous "oligarchic" growth commences, in which the largest bodies continue to accrete planetesimals at similar rates, independent of mass. Regardless of the mechanism that eventually limits the growth of the planetary embryos, a maximum estimate for the masses produced during this stage of accretion can be obtained using the initial solid surface density profile. A growing protoplanet is expected to accrete material within a distance \(b=\tilde{b}r_{\rm h}\) of its orbit. The total mass of planetesimals within this distance is then \(2\pi a\left(2\tilde{b}r_{\rm h}\right)\Sigma_{\rm pl}\), where \(a\) is the semimajor axis of the growing protoplanet. The "isolation mass" of the protoplanet can then be obtained by setting the protoplanet mass equal to the total mass of planetesimals within accretionary reach such that \[M_{\rm iso}=4\pi a^{2}\tilde{b}\left(\frac{M_{\rm iso}}{3M_{*}}\right)^{1/3} \Sigma_{\rm pl}. \tag{7}\] Solving for \(M_{\rm iso}\) gives \[M_{\rm iso}=\left[\frac{\left(2\pi a^{2}\Sigma_{\rm pl}\tilde{b}\right)^{3}}{3M_{* }}\right]^{1/2}. \tag{8}\] For bodies on circular, non-inclined orbits, \(\tilde{b}=2\sqrt{3}\) is the smallest orbital separation that produces a non-negative Jacobi energy and permits a close encounter (Nakazawa & Ida, 1988). This value of \(\tilde{b}\) is typically used to calculate the final isolation mass of a protoplanet because oligarchic growth tends to maintain near-circular orbits for the growing protoplanets. The picture described above relies upon a crucial assumption, which is that the mass distribution evolves slowly enough for gravitational stirring to maintain energy equipartition. In other words, the relaxation timescale must remain short relative to the collision timescale. For typical conditions near the terrestrial region of the solar system, this timescale condition is satisfied. Due to the steep dependence of the relaxation time on encounter velocity, however, this condition can easily be violated at shorter orbital periods. In figure 1, we show the ratio between the relaxation and collision timescale for a population of equal-mass planetesimals as a function of orbital period. Here, the encounter velocity is described by equation 1. For simplicity, we assume that \(\left\langle e^{2}\right\rangle^{1/2}=2\left\langle i^{2}\right\rangle^{1/2}\)(Ida et al., 1993) and that the eccentricity dispersion is constant with orbital period. The coloring indicates the ratio between \(t_{\rm relax}\) and \(t_{\rm coll}\). The dashed lines denote the eccentricity at which the random encounter velocity (calculated according to equation 1) is equal to the mutual escape velocity of the bodies. This is shown for planetesimals with an internal density of 3 g cm\({}^{-3}\) ranging from 10 to 200 km in size. A physically realistic value of \(e_{\rm h}\) for a population of planetesimals is going to depend on the structure of the gaseous disk (which we will address in section 5). However, the eccentricity dispersion will typically increase over time until \(\left\langle v^{2}\right\rangle^{1/2}=v_{\rm esc}\) and this is often used to construct the initial conditions (e.g. Barnes et al. (2009)). 
Therefore, the curves in this figure should be interpreted as upper limits. The timescale criterion for oligarchic growth is only satisfied in regions where the disk is sufficiently dynamically cold and the orbital period is sufficiently long. In sections 4 and 5 we will explore the behavior and outcome of planetesimal accretion in regions where this criterion is _not_ satisfied. Figure 1: The ratio between the two-body relaxation and collision timescale for a population of equal-mass planetesimals with an internal density of 3 g cm\({}^{-3}\). The dashed curves show the value of \(e_{\rm h}\) for which the random velocity dispersion is equal to the escape velocity of the planetesimals for a range of sizes. Only in regions where \(t_{\rm relax}\ll t_{\rm coll}\) can the velocity distribution respond to changes in the mass of the bodies such that oligarchic growth can operate. This condition is no longer satisfied for a dynamically hot disk at sufficiently short orbital periods. ### Planetesimal Size and Extent of Hill Sphere In the formalism described above, the mass and velocity distribution of the bodies are both functions of time. Due to the interdependence of these quantities, it is not clear whether the ratio between the relaxation and collision timescales will remain constant as the oligarchs develop. In the case of the solar system, \(t_{\rm relax}\ll t_{\rm coll}\) likely continued to remain true, otherwise runaway growth would have consumed all of the small bodies and there would be nothing left to populate the asteroid or Kuiper belt. In the case where \(t_{\rm coll}\ll t_{\rm relax}\), however, it is not clear how the system might evolve. An insight into the expected behavior in this regime can be gained by defining a dimensionless parameter, \(\alpha\), which is the ratio between the physical size of a body and its Hill radius, \(r_{\rm h}\): \[\alpha=\frac{r_{\rm pl}}{r_{\rm h}}=\frac{1}{a}\left(\frac{9M_{*}}{4\pi\rho_{ \rm pl}}\right)^{1/3}, \tag{9}\] where \(a\) is the semimajor axis of the body and \(\rho_{\rm pl}\) is its bulk density. Assuming that the bulk density stays constant as bodies collide and grow, and that no large-scale migration occurs, the scaling of both \(r_{\rm pl}\) and \(r_{\rm h}\) as \(m_{\rm pl}^{1/3}\) means that \(\alpha\) will be constant with time. For a composition of ice and rock, \(\alpha\) is small for any presently populated region of the solar system (\(\alpha\sim 10^{-2}\) near Earth and \(\alpha\sim 10^{-4}\) in the Kuiper belt). As one moves closer to the sun, \(\alpha\) becomes larger than 1, which implies that the Hill sphere of a body becomes smaller than its physical size. As an additional note, the Roche limit of the central body and the distance at which \(\alpha=1\) are equivalent for a rigid spherical body. After applying a hydrostatic correction to the Roche limit, \(a_{\rm Roche}\) is equivalent to 0.6 times this distance. This accretion mode should therefore be relevant for planetary ring systems, which is a topic that deserves further study using high-resolution N-body techniques. The magnitude of \(\alpha\) controls the relative importance of gravitational scattering and collisions in driving the evolution of the planetesimal disk. When \(\alpha\) is small, most close encounters will result in a gravitational interaction, moving the system toward a state of relaxation. If, however, the Hill sphere is largely filled by the body itself, these same encounters will instead drive evolution of the mass spectrum.
Because \(\alpha\) stays constant with mass, the boundary in the disk where collisions or gravitational encounters dominate, will stay static with time. We also introduce a second dimensionless quantity, which relates the physical size of the bodies to the velocity state of the system \[\beta=\frac{r_{\rm pl}}{r_{\rm g}}. \tag{10}\] where \(r_{\rm g}=Gm_{\rm pl}/v^{2}\) is the gravitational radius of a body (see eq 4.1 of Ida (1990)). Encounters between bodies inside of a distance of \(r_{\rm g}\) result in significant deflections of their trajectories. It should be noted that the gravitational focusing enhancement factor \(v^{2}/v_{\rm esc}^{2}\) is equal to 1 for \(\beta=1\). In the case where \(r_{\rm g}\) is smaller than the size of a planetesimal, the gravitational focusing enhancement factor will be between 0 and 1 and the collision cross section is mostly set by the geometric value. For very large values of \(r_{\rm g}\) (\(\beta\ll 1\)), the effective collision cross section is almost entirely set by gravitational scattering. These scaling considerations motivate the range of parameters we choose for the numerical experiments presented in the next section, where we aim to understand where and when runaway and oligarchic growth can operate. In figure 2, we show a schematic which relates \(\alpha\) and \(\beta\) to the geometry of a two-body encounter. For large values of \(\alpha\), the Hill radius of a body (dashed circle) becomes comparable to its physical radius (solid circle). As \(\beta\) increases, the trajectory of a body undergoing a flyby is less affected by the encounter. ## 3 Numerical Methods We use the tree-based N-body code ChaNGa to model the gravitational and collisional evolution of a swarm of planetesimals. ChaNGa is written using the CHARM++ parallel programming language and has been shown to perform well on up to half a million processors (Menon et al., 2015) and can follow the evolution of gravitationally interacting collections of up to billions of particles. Using a modified Barnes-Hut tree with hexadecapole expansions of the moments to approximate forces, ChaNGa integrates the equations of motion using a kick-drift-kick leapfrog scheme. For all of the simulations presented in this paper, we use a node opening criteria of \(\theta_{\rm BH}=0.7\). Additional information about the code is available in (Jetley et al., 2008; Menon et al., 2015). Using the neighbor-finding algorithm in ChaNGa, originally designed for SPH calculations, we have recently implemented a solid body collision module in the code. This work is largely based on the solid-body collision implementation in PKDGRAV, which is described in Richardson (1994) and Richardson et al. (2000). To summarize, imminent collisions are detected during the "drift" phase by extrapolating positions of bodies forward in time, using the velocity calculated at the opening "kick". For each body, any neighboring particles which fall within a search ball of radius \(2\Delta Tv+2r_{\rm pl}\), where \(\Delta T\) is the current timestep size for the particle and \(v\) is magnitude of its heliocentric velocity, are tested for an imminent collision. In the case that a collision is detected, the particles are merged into a sin Figure 2: A diagram detailing how varying the values of \(\alpha\) and \(\beta\) affect the geometry of a two-body encounter. The solid circles represent the physical radius of a body and the dashed circles represent the Hill radius. 
As \(\alpha\) is increased, the Hill radius and the physical radius become comparable in size. As \(\beta\) is increased, the trajectory of a passing body for a fixed impact parameter is less affected. gle larger body, which is given the center of mass position and velocity of the two parents. Resolving a collision can produce another imminent collision, so collisions are handled one-by-one and another full collision check is run after the previous event is resolved. For a more detailed description of the collision module in ChaNGa, see (Wallace & Quinn, 2019). Particles are advanced on individual timesteps chosen as a power of two of a base timestep. The timestep for an individual particle is based on an estimate of the gravitational dynamical time determined by the minimum of \(\sqrt{d_{\rm node}^{3}/(G(M_{\rm node}+m_{\rm pl}))}\) across all nodes in the tree that are accepted by the Barnes-Hut opening criterion. Here \(d_{\rm node}\) is the distance from the planetesimal to the center of mass of the tree node and \(M_{\rm node}\) is the total mass of the tree node. For nearby particles \(M_{\rm node}\) is replaced with the mass of the nearby particle. ## 4 Narrow annulus simulations We begin by exploring the outcome of planetesimal accretion in different parts of the \((\alpha,\beta)\) parameter space. The choices of \(\alpha\) and \(\beta\) are motivated by two questions raised in section 2. 1) Does runaway growth still operate when the condition that \(v\ll v_{\rm esc}\) is not satisfied? 2) How does planetesimal accretion proceed when the planetesimals themselves occupy a significant fraction of their Hill spheres? To answer these questions, we run a series of simulations in which a narrow annulus of planetesimals orbits a star. The values of \(\alpha\) and \(\beta\) are varied individually. 4000 planetesimals with individual masses of \(8.37\times 10^{-5}M_{\oplus}\) are placed with semimajor axes drawn from a uniform distribution between 0.95 and 1.05 AU about a 1 \(M_{\odot}\) star. In total, the disk contains \(\sim\) 0.33 \(M_{\oplus}\) of material. The argument of perihelion, \(\omega\), longitude of ascending Figure 3: The initial (blue) and final (orange) states of the narrow annulus simulations described in section 4. Relative masses of the bodies are indicated by point size. In the case of large \(\alpha\), almost no residual planetesimal population remains. Regardless of the initial choice of \(\beta\), the protoplanets that form attain similar eccentricities. node, \(\Omega\), and mean anomaly, M, for each body is drawn from a uniform distribution \(\in[0,2\pi)\). The inclinations and eccentricities are drawn from a Rayleigh distribution with \(\left<i^{2}\right>=1/2\left<e^{2}\right>\) (Ida et al., 1993). In the "fiducial" case, we give the bodies a bulk density of 3 g cm\({}^{-3}\) (for a radius of 341 km), and \(\left<e^{2}\right>^{1/2}=4e_{\rm h}\), which corresponds to \(\alpha=3.6\times 10^{-2}\) and \(\beta=3.4\times 10^{-3}\). These parameters are chosen to match the initial conditions of Kokubo & Ida (1998), which gave rise to oligarchic growth. To vary the value of \(\alpha\), we alter the bulk density of the particles. In the high-\(\alpha\) case, the bulk density is reduced by a factor of \(\sim 7100\), which produces \(\alpha=1\). This corresponds to a bulk density of \(4.2\times 10^{-4}\) g cm\({}^{-3}\) and a radius of 6,500 km. 
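To make these two parameters concrete, a minimal sketch of equations (9) and (10) is given below; the cgs constants and the example body (a 100 km planetesimal at 1 AU with an assumed rms eccentricity) are illustrative choices and are not drawn from the simulation setups above, and the random velocity is approximated as \(e\,v_{\rm k}\).

```python
import numpy as np

# Physical constants in cgs (assumed values, not taken from the text)
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]
AU = 1.496e13         # astronomical unit [cm]

def alpha_param(a, rho_pl, m_star):
    """Equation (9): ratio of physical radius to Hill radius.

    Independent of the planetesimal mass; depends only on semimajor axis,
    bulk density, and stellar mass.
    """
    return (1.0 / a) * (9.0 * m_star / (4.0 * np.pi * rho_pl))**(1.0 / 3.0)

def beta_param(m_pl, rho_pl, v):
    """Equation (10): ratio of physical radius to gravitational radius r_g = G m / v^2."""
    r_pl = (3.0 * m_pl / (4.0 * np.pi * rho_pl))**(1.0 / 3.0)
    r_g = G * m_pl / v**2
    return r_pl / r_g

# Illustrative example (not one of the simulation setups): a 100 km body at 1 AU
rho = 3.0                                   # bulk density [g cm^-3]
m = 4.0 / 3.0 * np.pi * (1.0e7)**3 * rho    # mass of a 100 km radius body [g]
v_k = np.sqrt(G * M_SUN / AU)               # Keplerian speed [cm s^-1]
e_rms = 1.0e-3                              # assumed rms eccentricity
print("alpha =", alpha_param(AU, rho, M_SUN))
print("beta  =", beta_param(m, rho, e_rms * v_k))
```

Because \(\alpha\) depends only on semimajor axis, bulk density, and stellar mass, the same call can be reused to map out where in a disk the two accretion regimes should apply.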
Although such a low bulk density is most certainly unphysical, the purpose of this modification is to have a planetesimal completely fill its Hill sphere so that no strong gravitational scattering should occur. To vary \(\beta\), the eccentricity dispersion is increased. For the high-\(\beta\) case, \(\left<e^{2}\right>^{1/2}\) is increased to \(1500e_{\rm h}\), which corresponds to \(\beta=15,000\). This is the largest value of \(\beta\) that permits a particle at 1 AU to still have an apocenter and pericenter distance that lies within the boundaries of the disk. This choice of \(\beta\) places the system firmly in the \(v>v_{\rm esc}\) regime, while still allowing growth to occur in a reasonable number of timesteps. In all cases, the simulations are evolved with a base timestep of 1.7 days, which corresponds to 3% of an orbital dynamical time \(\sqrt{a^{3}/GM_{*}}\). Due to the vastly differing growth timescales in each case, a simulation is stopped when the growth of the most massive body flattens out.

In figure 3, we show the a-e distribution of bodies in the initial (blue) and final (orange) snapshots from each of the four simulations. The size of the points indicates the relative masses of the bodies. Only with small \(\alpha\) (case c, d) does a residual population of dynamically hot planetesimals develop. The lack of high eccentricity planetesimals (relative to the protoplanets) in the large \(\alpha\) (case a, b) simulations suggests that most encounters result in accretion rather than scattering. For large \(\beta\) (case b, d), the growing protoplanets end up in a dynamically cool state relative to the initial conditions. This is due to kinetic energy being lost as particles inelastically collide. One last point we note is the difference between the eccentricities of protoplanets in the large \(\alpha\), large \(\beta\) (case b) and the small \(\alpha\), large \(\beta\) (case d) simulation. The dynamically cooler result of the former case is likely due to the dominant role that inelastic collisions play here.

Figure 4: The final state of the mass distributions for the narrow annulus simulations described in section 4. For small \(\alpha\), a few embryos form alongside a power law tail of planetesimals. For larger values of \(\alpha\), the mass distribution stops being bimodal. As in the previous figure, the initial choice of \(\beta\) does not appear to have any meaningful impact on the end result.

Figure 5: Top: The evolution of the ratio between the maximum and mean mass for the four simulations presented in section 4. The runaway growth phase can be identified by a positive trend in this ratio. For all values of \(\alpha\), an increase in \(\beta\) has the effect of delaying runaway growth. Bottom: The evolution of the maximum (solid lines) and mean (dashed lines) shown individually.

In figure 4, we show the mass distribution of bodies from the final snapshot in each of the four simulations. In addition to leaving fewer residual planetesimals, the large \(\alpha\) (case a, b) simulations produce embryos that are a factor of a few larger. Despite the vastly different encounter velocities of each population of bodies, the initial size of \(\beta\) (so long as bodies remain in the dispersion-dominated regime) appears to have no significant effect on the final distribution of masses. For reference, the boundary between shear and dispersion-dominated encounters (\(e_{\rm h}=1\)) lies around \(e=4\times 10^{-4}\) for the planetesimal mass we have chosen.
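As a rough cross-check of these characteristic eccentricities (the \(e_{\rm h}=1\) boundary just quoted and the escape-velocity eccentricity quoted next), the sketch below evaluates both for the annulus planetesimals, assuming the single-body Hill scaling \(e\simeq e_{\rm h}\,(m_{\rm pl}/3M_{*})^{1/3}\) and the rough dispersion relation \(v\simeq e\,v_{\rm k}\); both conventions, and the cgs constants, are assumptions of this sketch rather than definitions taken from the text.

```python
import numpy as np

G, M_SUN, M_EARTH, AU = 6.674e-8, 1.989e33, 5.972e27, 1.496e13  # cgs (assumed values)

m_pl = 8.37e-5 * M_EARTH      # planetesimal mass used in the annulus runs [g]
rho_pl = 3.0                  # bulk density [g cm^-3]
a = 1.0 * AU
m_star = 1.0 * M_SUN

h = (m_pl / (3.0 * m_star))**(1.0 / 3.0)    # reduced Hill factor (assumed single-body convention)
v_k = np.sqrt(G * m_star / a)               # Keplerian speed
r_pl = (3.0 * m_pl / (4.0 * np.pi * rho_pl))**(1.0 / 3.0)
v_esc = np.sqrt(2.0 * G * m_pl / r_pl)      # surface escape speed

e_shear = 1.0 * h             # absolute eccentricity at e_h = 1 (shear/dispersion boundary)
e_esc = v_esc / v_k           # eccentricity at which e*v_k ~ v_esc under the rough v ~ e v_k scaling

print(f"e(e_h = 1)   ~ {e_shear:.1e}")
print(f"e(v ~ v_esc) ~ {e_esc:.1e}")
```

Under these assumptions the two numbers come out in rough agreement with the values quoted in the text.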
The eccentricity at which \(\left\langle v^{2}\right\rangle^{1/2}=v_{\rm esc}\) for the planetesimals is around \(10^{-2}\). To investigate whether any of these planetesimal rings underwent runaway growth, we examine the time evolution of the maximum and mean masses in each simulation. The ratio \(M/\left\langle m\right\rangle\) is plotted in the top panel of figure 5. Here, a positive slope indicates that the quantities are diverging (i.e. the growth rate is accelerating with mass). This behavior is evident in all four cases although the small \(\alpha\) simulations eventually reach a stage where the curves turn over as the planetesimal supply depletes. Even with a large \(\beta\), where the effective collision cross section is nearly equal to the geometric value, runaway growth still appears to operate. The ubiquity of the early positive trends in this figure suggests that as bodies collide and grow, the relative difference in gravitational focusing factors between bodies is what drives the system towards runaway growth, no matter how close the collision cross sections lie to the geometric value. Although larger encounter velocities lengthen the growth timescales, runaway growth appears to be inevitable, so long as gravity is the dominant force in the system. For large \(\alpha\) (case a, b), the curves in this figure eventually turn over and begin to decline. In the bottom panel of figure 5, we separately show the evolution of the maximum (solid lines) and mean (dashed lines) mass for each case. Here, it is evident that the turnover in \(M/\left\langle m\right\rangle\) is driven by an increase in the average mass as the planetesimal population becomes depleted. For small \(\alpha\), one would expect that planetesimal accretion should also eventually come to a halt as the growth timescale lengthens due to the planetesimal surface density decreasing and the residual bodies being scattered onto high eccentricity orbits with negligible gravitational focusing factors. Many more timesteps, however, would be required to reach this point. Additionally, these results suggest that the value of \(\alpha\), which is a function of only the initial conditions, controls the qualitative outcome of accretion. Across most of a planet-forming disk, \(\alpha\) is small, and frequent gravitational encounters between the growing bodies will facilitate oligarchic growth. In the dispersion-dominated regime, close encounters drive the stirring between planetesimals and embryos (Weidenschilling, 1989; Ida, 1990). When \(\alpha\ll 1\), the planetesimal fills only a small portion of its Hill sphere and the majority of close encounters result in viscous stirring, rather than accretion. In the opposite regime, we observe that runaway growth still occurs, but nearly all of the planetesimals are consumed by the forming protoplanets, rather than being scattered onto higher eccentricity orbits, where they would otherwise remain as a remnant of the early stages of planet formation (Kokubo & Ida, 1998, 2000). ## 5 Full Disk Simulation ### Initial Conditions Motivated by the qualitative dependence of accretion on \(\alpha\), we next investigate whether this highly efficient, non-oligarchic growth should be expected to operate near the innermost regions of a typical planet-forming disk. Given that N-body simulations of short-period terrestrial planet formation typically begin with a chain of neatly-spaced, isolation mass (see Kokubo & Ida (2000) eq. 
20) protoplanets, it is pertinent to determine whether the high \(\alpha\) growth mode we revealed in the previous section invalidates this choice of initial conditions. Given the abundance of short-period terrestrial planets observed around M stars (e.g. TRAPPIST-1 (Gillon et al., 2016, 2017; Agol et al., 2021)), we chose to model the evolution of a series of wide planetesimal disks, which span from 1 to 100 days in orbital period, orbiting a late-type M star of mass 0.08 \(M_{\odot}\). For a population of planetesimals with a bulk density of 3 g cm\({}^{-3}\), this orbital period range corresponds to \(\alpha\) ranging from 0.7 down to 0.05. By simultaneously modeling a broad range of orbital periods, we can determine the critical value of \(\alpha\) that divides these two modes of accretion, and also explore how the oligarchic/non-oligarchic accretion boundary affects the resulting distribution of protoplanets.

Four wide-disk simulations are run in total (see table 1). In each case, the solid surface density follows a power law profile \[\Sigma(r)=10\ \mathrm{g\ cm^{-2}}\times A\left(\frac{M_{*}}{M_{\odot}}\right) \left(\frac{r}{1\ \mathrm{AU}}\right)^{-\delta} \tag{11}\] where \(M_{*}\) is the mass of the central star, 10 g cm\({}^{-2}\) is the surface density of the minimum-mass solar nebula (MMSN; Hayashi, 1981) at 1 AU, \(A\) is an enhancement factor and \(\delta\) is the power law index. In the first case (fdHi), we model a disk that follows a MMSN power law slope (\(\delta=1.5\)), with the overall normalization enhanced by a factor of 100. This choice of normalization for the solid surface density profile appears necessary in order to reproduce many observed short-period terrestrial worlds in-situ (Hansen & Murray, 2012).

Many recent planetesimal formation models invoke streaming instabilities, which first require sufficiently large amounts of solid material concentrating at preferential locations in the disk. This can occur via zonal flows (Johansen et al., 2009; Simon et al., 2012), vortices (Klahr & Bodenheimer, 2003), or through mechanisms that produce a pressure bump in the gas disk, such as an ionization (Lyra et al., 2008) or condensation front (Brauer et al., 2008; Drążkowska et al., 2013), or even the perturbations from an existing planet (Shibaike & Alibert, 2020). Drążkowska & Dullemond (2018) showed that evolution of the snow line boundary can cause planetesimal formation over a significant area of the disk, producing mass concentrations at least as large as the ones we use here. We argue that this is a particularly attractive mechanism for widespread planetesimal formation around low-mass stars, as their extreme pre-main sequence evolution is particularly conducive to significant movement of the condensation fronts (Baraffe et al., 2015).

Additionally, we vary the power law index (fdHiShallow, fdHiSteep) and overall normalization (fdLo) of \(\Sigma(r)\). Although there is no way to reliably measure the uncertainty on the MMSN power law slope, Chiang & Laughlin (2013) applied a similar analysis to a sample of Kepler multiplanet systems and found a variation of \(\sim 0.2\). Sub-millimeter observations of the outer regions of cold protoplanetary disks find that \(\delta\) can be as low as 0.5 (Mundy et al., 2000; Andrews et al., 2009, 2010). Therefore, we vary \(\delta\) by a value of 1.0 relative to the MMSN value for the fdHiShallow and fdHiSteep simulations.
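For orientation, the sketch below encodes the surface density profile of equation (11) together with the isolation mass and feeding-zone width that follow from solving equation (7); the closed-form inversion, the cgs constants, and the example orbital period are our own choices, and \(\tilde{b}=2\sqrt{3}\) is simply the circular-orbit value discussed in section 2.

```python
import numpy as np

G, M_SUN, AU = 6.674e-8, 1.989e33, 1.496e13   # cgs constants (assumed values)

def sigma_solid(r, A=100.0, delta=1.5, m_star=0.08 * M_SUN):
    """Equation (11): solid surface density in g cm^-2, with r in cm."""
    return 10.0 * A * (m_star / M_SUN) * (r / AU)**(-delta)

def isolation_mass(a, sigma, m_star, b_tilde=2.0 * np.sqrt(3.0)):
    """Closed-form solution of equation (7) for M_iso, in grams."""
    return (4.0 * np.pi * a**2 * b_tilde * sigma)**1.5 / np.sqrt(3.0 * m_star)

def feeding_zone_width(a, mass, sigma, m_star):
    """Invert equation (7) for b-tilde given a final body mass, as done for figure 6."""
    return mass**(2.0 / 3.0) * (3.0 * m_star)**(1.0 / 3.0) / (4.0 * np.pi * a**2 * sigma)

# Example: an orbital period of 10 days around the 0.08 M_sun star of the fdHi setup
m_star = 0.08 * M_SUN
P = 10.0 * 86400.0
a = (G * m_star * P**2 / (4.0 * np.pi**2))**(1.0 / 3.0)
sigma = sigma_solid(a)
print("Sigma =", sigma, "g/cm^2,  M_iso =", isolation_mass(a, sigma, m_star), "g")
```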
In all cases, the eccentricities and inclinations of the bodies are randomly drawn from a Rayleigh distribution, with \(\left\langle e^{2}\right\rangle^{1/2}=2\left\langle i^{2}\right\rangle^{1/2}=e_ {\mathrm{eq}}\)(Ida & Makino, 1993). Following Kokubo & Ida (1998), the value of \(e_{\mathrm{eq}}\) is chosen such that the timescales for viscous stirring and aerodynamic gas drag on the planetesimals are in equilibrium. Although this approach assumes that these two mechanisms are in balance, there is nothing preventing planetesimal accretion from getting underway before the disk is sufficiently hot to be limited by gas drag. However, as we showed in the previous section, the initial dynamical state of the planetesimals does not seem to affect the outcome of accretion, so it is safe to assume that the resulting distribution of protoplanets would remain unchanged had we started with a colder disk. The viscous stirring timescale is given by Ida & Makino (1993) as \[t_{\mathrm{vs}}=\frac{\left\langle e^{2}\right\rangle}{d\left\langle e^{2} \right\rangle/dt}\approx\frac{1}{40}\left(\frac{\Omega^{2}a^{3}}{2Gm_{\mathrm{ pl}}}\right)^{2}\frac{4m_{\mathrm{pl}}\langle e^{2}\rangle^{2}}{\Sigma a^{2} \Omega}, \tag{12}\] where \(\Omega\), \(a\) and \(e\) are the orbital frequencies, semi-major axes and eccentricities of the individual planetesimals, respectively. In the Stokes regime, where the mean-free path of the gas is much smaller than the solid particles, the gas can be treated as a fluid and the drag timescale is given by Adachi et al. (1976) as \[t_{\mathrm{s}}=\frac{2m_{\mathrm{pl}}}{C_{\mathrm{D}}\pi r_{\mathrm{pl}}^{2} \rho_{\mathrm{g}}v_{\mathrm{g}}}, \tag{13}\] where \(C_{\mathrm{D}}\) is a drag coefficient of order unity, \(\rho_{\mathrm{g}}\) is the local gas volume density and \(v_{\mathrm{g}}\) is the headwind velocity of the gas experienced by the planetesimal. The local gas volume density is given by \[\rho_{\mathrm{g}}=\frac{\Sigma_{\mathrm{g}}}{\sqrt{2\pi}h_{\mathrm{g}}}\exp \left[-z^{2}/\left(2h_{\mathrm{g}}^{2}\right)\right], \tag{14}\] where \(\Sigma_{\mathrm{g}}\) is the gas surface density (taken to be 240 times the solid surface density (Hayashi, 1981), \(h_{\mathrm{g}}=c_{\mathrm{s}}/\Omega\) is the local gas scale height and \(z\) is the height above the disk midplane. The sound speed profile is given by \(c_{\mathrm{s}}=\sqrt{k_{\mathrm{B}}T(r)/\left(\mu m_{\mathrm{H}}\right)}\), where \(k_{\mathrm{B}}\) is Boltzmann's constant, \(T(r)=T_{0}\left(r/1\mathrm{AU}\right)^{-q}\), \(\mu=2.34\) and \(m_{\mathrm{h}}\) is the mass of a hydrogen atom. For a protoplanetary disk around a typical M star, \(T_{0}=148\) K and \(q=0.58\)(Andrews & Williams, 2005). Finally, the headwind velocity of the gas, due to the fact that the gas disk is pressure supported, is given by \[v_{\mathrm{g}}=v_{\mathrm{k}}\left[1-\sqrt{qc_{\mathrm{s}}^{2}/v_{\mathrm{k}}^ {2}}\right], \tag{15}\] where \(v_{\mathrm{k}}\) is the local Keplerian velocity (see eq. 4.30 of Armitage (2020)). As in section 4, the argument of perihelion \(\omega\), longitude of ascending node \(\Omega\), and mean anomaly M for the planetesimals are drawn from a uniform distribution \(\in[0,2\pi)\). One should note that this choice for the gas disk profile almost certainly does not capture the wide range of possibilities in real planet-forming disks. On one hand, a larger initial gas surface density could act to completely remove solids via radial drift, rendering in-situ accretion of solids impossible. 
On the other hand, a more tenuous gas disk might render aerodynamic drag forces completely unimportant. In this case, the random velocity of the initial planetesimals should be close to their mutual escape velocity. As we showed in section 4, the initial dynamical state of the solids seems to have a very minimal effect on the final outcome of planetesimal accretion. In a similar vein to Hansen and Murray (2012), we choose to use a MMSN-like profile for the gas disk and instead vary the solid surface density profile to capture the range of mechanisms that might have acted to facilitate planetesimal formation in the first place. ### Gas Drag Force In addition to the mutual gravitational forces, a Stokes drag force due to the the gas disk is applied to each particle, following the prescription described in section 2.2.1 of Morishima et al. (2010). For the initial mass planetesimals, the Stokes number (\(\mathrm{St}=t_{\mathrm{s}}\Omega\)) at the inner and outer disk edge is roughly \(2\times 10^{5}\) and \(10^{7}\), respectively. For \(t_{\mathrm{s}}\gg 1\), bodies are decoupled from the gas and are only weakly affected by it. Because \(t_{\mathrm{s}}\sim m^{1/3}\), the Stokes number grows as planetesimal accretion proceeds, and the drag force plays an increasingly minor role. Although the aerodynamic gas drag is not expected to significantly alter the final protoplanet distribution, we include its effects here to be self-consistent with the initial conditions, which were constructed by balancing the effects of viscous stirring with gas drag. ### Timestepping Criterion In the case of the fdHi simulation, there are nearly 1 million particles, whose orbital periods vary by two orders of magnitude. Because the interaction timescales near the inner edge of the disk are exceedingly short, a fixed timestep size would required a prohibitively large number of steps to follow planetesimal growth throughout the entire disk. For this reason, we use a multi-tiered timestepping scheme, in which particles are placed onto the nearest power of two timestep based on their most recently calculated gravitational acceleration. This scheme is used on almost all works using ChaNGa, and is common among large-scale simulation codes. This more efficient scheme introduces two issues, however. Firstly, momentum is not completely conserved when bodies switch timestep tiers. The error introduced becomes particularly severe for a particle on an eccentric orbit, whose perihelion and aphelion distances straddle a timestep boundary. For a large collection of particles, this problem manifests itself as the development of a V-shaped gap in the a-e plane, centered on the boundary itself. To correct this problem, we introduce a slightly modified timestepping criterion, which is based on the expected gravitational acceleration of the particle at pericenter. Only in the case of a close encounter with another planetesimal (in which the acceleration is no longer dominated by the star) is the timestep allowed to reduce based on the original instantaneous criterion. A second issue is introduced when two particles on different timesteps undergo a collision. As in the previous case, momentum is not completely conserved because the most recent 'kick' steps did not happen simultaneously for these bodies. Early in the simulation, we find that this problem tends to trigger runaway growth at the timestep boundaries first. This issue carries itself forward through the embryo formation phase, and protoplanets tend to form at the boundaries. 
To correct this issue, we ignore collisions between bodies on different timesteps early in the simulation. We find that preventing multi-timestep collisions until after the maximum mass grows by a factor of 10 prevents any artifacts from developing at the timestep boundaries, while also minimizing the number of'skipped' collisions. In the case of the fdHi simulation, only about 20 collisions out of an eventual 900,000 are ignored. To verify that this timestepping scheme does not affect protoplanet growth, we tested an annulus of growing planetesimals with both fixed steps and our two-phase variable timestepping scheme. The results of these tests are shown in appendix A. ### Results The timescales for embryo formation depend on the chosen surface density profile, along with the local orbital timescale. Protoplanets form first at the inner edge of the disk, where the dynamical timescales are short. Growth proceeds in an inside-out fashion, with the outermost regions of the disk completing the protoplanet assembly phase last (as an example, see figure 1 of Kokubo and Ida (2002). This radial timescale dependence is not typically accounted for in planet formation simulations a notable exception being Emsenhuber et al. (2021, 2021), and appears to be an important component to forming realistic solar system analogs (Clement et al., 2020). As with the narrow annulus simulations, we stop the integration once the masses of protoplanets in the outermost region of the disk reach a steady value. In table 1, we summarize the outcomes of the four "full disk" cases. We show the final state of the "fdHi" simulation in figure 6. In the top panel, the initial (contours) and final (points) state of the simulation is shown in the orbital period-eccentricity plane. The size of the points indicates the relative mass of the bodies. In the bottom panel, the final masses of the bodies (in units of feeding zone size \(\tilde{b}\)) are shown as a function of orbital period. The y-values in the bottom panel of figure 6 are calculated by solving equation 8 for \(\tilde{b}\), and inputting the initial surface density and final particle mass into the expression. In other words, \(\tilde{b}\) is describing the size of the annulus that must be cut out of the planetesimal disk in order to produce a protoplanet of the current mass. By plotting the derived value of \(\tilde{b}\) as a function of orbital period, differences in the dynamical interactions at different locations of the disk are made more clearly visible. The feeding zone size \(\tilde{b}=2\sqrt{3}\) permitted by bodies on circular, non-inclined orbits (Nakazawa & Ida, 1988) is shown by the horizontal dashed line. In typical oligarchic growth simulations (Kokubo & Ida, 1998), protoplanets tend to space themselves apart by \(\tilde{b}=10\), although it should be noted that they do not consume all of the planetesimals within this distance. A qualitative shift in the protoplanet and planetesimal distribution is visible inside of \(\sim 60\) days. Interior to this location, there are very few remaining planetesimals and the embryos formed have larger feeding zones. 
Exterior to the boundary, the residual planetesimal population is much more visible, and protoplanets more closely follow \begin{table} \begin{tabular}{l c c c c c c c} Simulation Name & \(m_{\rm pl}[M_{\oplus}]\) & \(N_{\rm pl}\) & A & \(\delta\) & \(M_{\rm PP}[M_{\oplus}]\) & \(T_{\rm int}[yr]\) & \(T_{\rm int,1}[yr]\) \\ \hline fdHi & \(8.37\times 10^{-6}\) & 903,687 & 100 & 1.5 & 1.00 & 456 & 16,377 \\ fdHiSteep & \(8.37\times 10^{-6}\) & 903,687 & 100 & 2.5 & 1.19 & 456 & 16,377 \\ fdHiShallow & \(8.37\times 10^{-6}\) & 903,687 & 100 & 0.5 & 1.08 & 456 & 16,377 \\ fdLo & \(8.37\times 10^{-6}\) & 45,185 & 1 & 1.5 & \(1.77\times 10^{-3}\) & 3,713 & 133,651 \\ \hline \end{tabular} _Note:_ A summary of the four ‘full disk’ simulations presented in section 5. \(m_{\rm pl}\) and \(N_{\rm pl}\) are the initial masses and number of planetesimals. \(A\) and \(d\) is the initial power law normalization and slope of the solids in the disk. \(M_{\rm PP}\) is the maximum protoplanet mass at the end of the simulation and \(T_{\rm int}\) is the amount of time each simulation was integrated. \(T_{\rm int,1}\) is the integration time scaled by a factor of \(f^{2}\), which accounts for the fact that the accretion timescale has been shortened by the inflated collision cross sections. \end{table} Table 1: Summary of Full Disk Simulations Figure 6: The final state of the fdHi simulation. In the top panel, the contours denote the initial period-eccentricity distribution of the planetesimals. Point sizes indicate the relative masses of bodies at the end of the simulation. In the bottom panel, we show the feeding zone width (see equation 8) required to produce the final masses of the bodies. The dashed line indicates the feeding zone size expected for bodies on circular orbits. For the shorter period bodies, the feeding zone size exceeds this expected value, which indicates that oligarchic growth is not operating here. This boundary occurs near roughly 60 d in orbital period. Figure 7: The time evolution of the planetesimal surface density (in units of the total solid surface density) in the fdHi simulation. Each curve represents a radial slice of the disk. The time is measured in units of the local accretion timescale at the center of each radial zone. The colors represent the orbital period at the center of each zone. the \(\tilde{b}=2\sqrt{3}\) line. This suggests that the transition between the low \(\alpha\) and high \(\alpha\) accretion modes seen in section 4 is happening near this location. In section 4, we postulated that the increased importance of inelastic damping in the inner, non-oligarchic growth region of the disk should lower the overall eccentricity of the protoplanets there. This behavior is not immediately apparent in the top panel of figure 6. In fact, the opposite appears to be true. There are, however, a couple of factors in the wide disk simulations that could make this extra dynamical cooling mechanism difficult to see. Firstly, the initial eccentricity distributions of the inner and outer disk are different because of the dependence of the viscous stirring and gas drag timescales on orbital period. The mean eccentricity at the outer disk edge is 4x larger than at the inner disk edge. Additionally, the protoplanet formation timescales for the inner and outer disk are vastly different, making a comparison between these regions at the same moment in time somewhat inappropriate. 
A quick back of the envelope calculation yields \(\left\langle e^{2}\right\rangle^{1/2}=0.05\) for a population of \(\sim 1M_{\oplus}\) bodies with a random velocity dispersion equal to their mutual escape velocity. It is therefore likely the case that the innermost protoplanets have had ample time to self-stir. To ensure that the boundary seen around 60 days in orbital period is not simply a transient product of the inside-out growth throughout the disk, we examine the time evolution of \(\sigma/\Sigma\), which compares the planetesimal and total solid surface density at multiple orbital Figure 8: The final state of the full disk simulations listed in table 1. Point sizes indicate mass relative to the largest body in each simulation. Figure 9: Feeding zone width (see equation 8) required to produce the final masses for the protoplanets from the simulations listed in table 1. For the fdHi, fdHiSteep and fdHiShallow simulations, we have included the results from five separate iterations of the simulations, each using a different random number seed. The horizontal dashed line indicates \(\tilde{b}=2\sqrt{3}\). Despite the vastly different initial solid surface density profiles, the feeding zone width reaches the circular orbit value around \(\sim\) 60 days in all cases. periods. In figure 7, the value of \(\sigma/\Sigma\) is plotted as a function of time in 10 orbital period bins, each with a width of 10 days. To determine whether the evolution of the planetesimal surface density behaves self-similarly across the disk, we normalize the time values in each bin by the local accretion timescale at the beginning of the simulation, which is given by \[t_{\rm acc}=(n\Gamma v)^{-1}=\left(\frac{\Sigma_{0}\Omega}{2m_{\rm pl}}\Gamma \right)^{-1}, \tag{16}\] where we have assumed the local number density of particles n is set by the surface density and the local scale height of the disk (see section 2). The effective collision cross section is set by gravitational focusing and is given by equation 5. The color of the curves indicate the orbital period bin which is being measured. From about 40 to 100 days in orbital period, the planetesimal surface density follows a similar trajectory as accretion proceeds. Interior to about 40 days, \(\sigma\) actually decays more slowly. In other words, growth is actually fueled less vigorously by planetesimals in this region. This highlights the fact that accretion proceeds in a qualitatively different way in the inner disk. For the outer disk, gravitational focusing tends to facilitate collisions between protoplanets and preferentially smaller bodies. At short period, however, all close encounters result in a collision, regardless of mass. In a rather counterintuitive fashion, planetesimals in the inner disk actually persist for longer. In section 5.5, we examine the assembly history of the embryos and show that there is much less of a preference for planetesimal-embryo collisions at short period as well. In the inner disk, this value asymptotes to zero as the planetesimal population entirely depletes. In the outer disk, dynamical friction between the embryos and planetesimals eventually throttles subsequent accretion and leaves \(\sim 10\) percent or more of the mass surface density as planetesimals. 
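A minimal implementation of the normalization used for the curves in figure 7 might look like the following; since equation (5) is not repeated here, the cross section below uses the textbook gravitational-focusing form \(\Gamma=\pi(2r_{\rm pl})^{2}(1+v_{\rm esc}^{2}/v^{2})\) as a stand-in, which is an assumption of this sketch.

```python
import numpy as np

G = 6.674e-8  # cgs

def accretion_timescale(sigma0, omega, m_pl, r_pl, v_rel):
    """Equation (16): t_acc = (n Gamma v)^-1 with n v ~ Sigma_0 Omega / (2 m_pl).

    Gamma is written here with the standard gravitational-focusing form for a pair
    of equal bodies, as a stand-in for equation (5).
    """
    v_esc2 = 2.0 * G * (2.0 * m_pl) / (2.0 * r_pl)        # mutual escape speed squared
    gamma = np.pi * (2.0 * r_pl)**2 * (1.0 + v_esc2 / v_rel**2)
    return 1.0 / (sigma0 * omega / (2.0 * m_pl) * gamma)
```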
It should be noted that in a typical oligarchic growth scenario, where protoplanets space themselves apart by 10 \(r_{\rm h}\) and settle onto circular orbits (giving \(\tilde{b}=2\sqrt{3}\)), roughly 30 percent (\(2\sqrt{3}/10\simeq 0.3\)) of the planetesimals should remain out of reach of the protoplanets. Next, we investigate how the resulting planetesimal and protoplanet distribution changes as we vary the initial solid surface density profile. The final orbital period-eccentricity state of the particles in the fdHi, fdHiSteep, fdHiShallow and fdLo simulations are shown in figure 8, with point sizes indicating the relative masses of the bodies. In all cases, the inner disk is largely depleted of planetesimals, while the outer disk contains a bimodal population of planetesimals and embryos, with a clear separation in eccentricity between the two. Despite having significantly different masses, the semimajor axis-eccentricity distributions of the planetary embryos formed in all simulations are remarkably similar. This is likely due to the fact that inelastic collisions play a more significant role where the solid surface density is highest, which offsets the fact that the initial bodies started off in a dynamically hotter state (due to the increased effectiveness of viscous stirring). The only exception to this is the fdLo simulation, where the resulting eccentricities are a couple orders of magnitude smaller. Inelastic damping likely plays an even more significant role here, due to the much larger masses of the initial planetesimals. In figure 9, we plot the masses of the resulting protoplanets and planetesimals in all four simulations in units of \(\tilde{b}\) (see equation 8). To make the trend between \(\tilde{b}\) and orbital period more clear, we ran four more versions of the fdHi, fdHiSteep and fdHiShallow simulations using different random number seeds and included these in the figure as well. As mentioned previously, \(\tilde{b}=2\sqrt{3}\) (indicated by the horizontal dashed line) is the feeding zone width that a body on a circular orbit will have. In all four simulations, the feeding zone width exceeds the minimum value in the inner disk and approaches \(2\sqrt{3}\) around \(\sim 40\) to 60 days. The orbital periods at which this transition occurs are quite similar between simulations, despite the vastly different solid surface density profiles used. This indicates that the boundary between accretion modes is driven entirely by the local value of \(\alpha\), and also supports our conclusion that planetesimal accretion is largely complete everywhere in the disk. ### Assembly History of Embryos Further insight regarding the difference between the short vs long period accretion modes can be gained by looking at the growth history of the planetary embryos. Because all collisions are directly resolved by the N-body code, a lineage can be traced between each planetary embryo and the initial planetesimals. For the fdHi, fdHiSteep and fdHiShallow simulations, protoplanets gain a factor of \(\sim 10^{6}\) in mass relative to the initial planetesimals. For the fdLo simulation, this growth factor is nearly a thousand times smaller, which produces rather shallow and noisy collision histories. For this reason, we choose to exclude the fdLo simulation from our analysis in this section. We begin by investigating the "smoothness" of the accretion events that give rise to each embryo. 
Drawing from a common technique used for cosmological simulations of galaxy formation, we divide growth events for a given body into "major" and "minor" mergers (Kauff mann & White, 1993; Murali et al., 2002; L'Huillier et al., 2012). Here, we define minor events as any collision involving an initial mass planetesimal, while major events consist of a merger between any two larger bodies. In figure 10, we retrieve the collision events for all bodies in all five iterations of the fdHi, fdHiSteep and fdHiShallow simulations and plot the total mass fraction attained through minor merger (smooth accretion) events as a function of the final mass of the body. Here, we define a minor merger to be any collision involving a planetesimal with \(m<100m_{0}\). The color of the points indicates the final orbital period of the body. Beyond \(\sim\) 20 to 30 days in orbital period, minor mergers make up a significant fraction of the final mass of a body. Interior to this, the smooth accretion fraction drops significantly and the mass contribution to minor mergers can vary by over an order of magnitude. The variation in smooth accretion fraction with mass for the short period bodies suggests that the planetesimal and embryo populations interact differently than those in the outer disk. Exterior to the accretion mode boundary, the growing embryos continue to accrete planetesimals while avoiding each other as they near their final mass. Inside the boundary, however, any and all bodies collide with each other, and the occasional embryo-embryo collision tends to dominate growth and drive down the smooth accretion fraction. Gravitational scattering between embryos and planetesimals is a key ingredient for orbital repulsion (Kokubo & Ida, 1998), and so a lack of gravitational scattering in the inner disk should prevent the embryos from settling onto neatly-spaced, isolated orbits. As we showed in figure 9, the embryos in the inner disk appear to reach well beyond the typical feeding zone size predicted by an oligarchic growth model. Figure 10 suggests that the extra mass here obtain comes from mergers with the other nearby embryos. Another line of evidence pointing to a lack of gravitational scattering and orbital repulsion in the inner disk can be seen in figure 11. Here, we measure the initial orbital period distribution of bodies used to construct each embryo and calculate its mean \(\langle P_{\rm acc}\rangle\). Point sizes indicate the relative final masses of bodies. As in the previous figure, we have included data from all five versions of the fdHi, fdHiSteep and fdHiShallow simulations. For each body, this quantity is then compared with its final orbital period. In this figure, bodies that closely follow the gray dashed line still reside close to their initial feeding zones. On the other hand, bodies further from the dashed line must have experienced a strong gravitational scattering event during or after their accretion has completed. For all of the high surface density simu Figure 10: For all bodies with \(m>100m_{0}\) at the end of the high surface density simulations, the fraction of their total mass attained through mergers with initial mass planetesimals (smooth accretion) as a function of total mass. Colors indicate the orbital periods of the bodies in the final simulation snapshot. 
Bodies interior to \(\sim\) 20 to 30 days attain up to an order of magnitude less of their mass through minor merger events, while accretion of planetesimals plays a much more significant role at longer orbital periods. lations, the accretion zones and present positions of the embryos appear to diverge beyond \(\sim 20\) days in orbital period. Coupled with the strong decrease in smooth accretion fraction for bodies in this region (seen in figure 10), it appears as if the relative importance of collisions and gravitational scattering seems to shift around \(\sim 20\) days. For the shortest period bodies, growth events are sudden and stochastic, often involving collisions between bodies of comparable mass. For longer period bodies, a significant amount of growth is driven by accretion of smaller planetesimals. We postulate that this qualitative difference is driven by the role that embryo-embryo close encounters play in the inner and outer disk. In the inner disk, these encounters tend to result in a merger, which drives down the smooth accretion fraction. In the outer disk, these encounters tend to result in a scattering event which moves bodies away from their initial feeding zones. We find that the accretion zone shapes for the longer period bodies are also much more smooth and unimodal, which suggests that scattering tends to occur after accretion has largely completed. ## 6 Simplifying Assumptions ### Collision Cross Section In all cases shown so far, the boundary between the oligarchic growth and the highly-efficient short period accretion region lies between 40 and 70 days in orbital period. As discussed in section 2.2, the mode of accretion is set entirely by the local value of \(\alpha\), which scales with both distance from the star and the bulk density of the planetesimals (see equation 9). Because we chose to artificially inflate the collision cross section of the particles in our simulations by a factor of \(f\), the bulk densities of the particles are reduced, and the accretion boundary is shifted outward. However, the scaling relation between \(\alpha\) and \(\rho\) (\(\alpha\sim\rho_{\rm pl}^{-1/3}\)) can be used to predict where this accretion boundary should lie in a disk with realistic-sized planetesimals. The simulations presented in this paper use a collision cross section enhancement factor of 6, which moves the boundary outward in orbital period by a factor of approximately 15 (For a fixed value of \(\alpha\), equation 9 gives \(a\sim r_{\rm pl}\sim f\) and therefore \(P_{\rm orbit}\sim f^{-3/2}\)). One would therefore expect the accretion boundary to lie between 3 and 5 days in orbital period for 3 g \(cm^{-3}\) bodies. Although a simulation with \(f=1\) is not computationally tractable, we can test whether the accretion boundary moves in the way we expect by modestly changing the value of \(f\). In figure 12, we compare the fdHi simulation to a nearly identical run using \(f=4\). In the top panel, we show the feeding zone width required for each particle to attain its present mass. As in figure 9, we indicate the feeding zone size expected for oligarchic growth with a horizontal dashed line. In the bottom panel, the value of \(\alpha\) as a function of orbital period is shown for 3 g \(cm^{-3}\) bodies with an artificial radius enhancement of \(f=1\), 4 and 6. The horizontal dashed line indicates the empirical value of alpha below which the accretion mode switches to oligarchic. 
Comparing the top and bottom panels, the intersection of the feeding zone width seen in our simulations and the feeding zone width predicted by oligarchic growth matches well with the orbital period at which \(\alpha\sim 0.1\) for both values of \(f\). Also shown by the shaded region are the expected \(\alpha\) values for realistic-sized bodies with \(\rho_{\rm pl}\) between 1 and 10 g cm\({}^{-3}\). Although the removal of the cross section enhancement greatly reduces the size of the non-oligarchic region, it still should be expected to cover a portion of the disk where planetesimals might be expected to form (Mulders et al., 2018) for a wide range of \(\rho_{\rm pl}\). ### Collision Model For the simulations presented in this work, every collision results in a perfect merger between pairs of bodies, with no loss of mass or energy. Although simpler and less computationally expensive to model, allowing every collision to produce a perfect merger might result in overly efficient growth, particularly in the innermost region of the disk where the encounter velocities Figure 11: The relative separation between the final period of a body and the mean orbital periods of its accretion zone at the end of the fdHi, fdHiSteep and fdHiShallow simulations. The marker type and color denotes the simulation used, while the marker sizes indicate the relative masses of bodies. In each case, all five iterations of the simulations are plotted simultaneously. are largest. Given that we have just shown that a distinctly non-oligarchic growth mode emerges in the inner disk when the collision timescale is short relative to the gravitational scattering timescale, one might be concerned that a more realistic collision model would act to lengthen the growth timescale enough for this condition to no longer be true. In the outer regions of the disk where oligarchic growth still operates, more realistic collision models have been shown to simply lengthen the timescale for planetary embryos to form (Wetherill & Stewart, 1993; Leinhardt & Richardson, 2005). A proper way to handle this would be to allow for a range of collision outcomes, based on a semianalytic model (see Leinhardt & Stewart (2012)). However, resolving collisional debris, or even prolonging growth by forcing high-velocity pairs of bodies to bounce is too expensive to model, even with ChaNGa. To test whether a more restrictive collision model should alter the growth mode of the inner disk, we ran a smaller scale test using a more restrictive collision model. In this case, a collision can result in one of two outcomes: if the impact velocity is smaller than the mutual escape velocity of the colliding particles, defined as \[v_{\rm mut,esc}=\sqrt{\frac{2G(m_{1}+m_{2})}{r_{1}+r_{2}}}, \tag{17}\] where \(m_{1},m_{2}\) and \(r_{1},r_{2}\) are the masses and radii of colliding particles 1 and 2, then the bodies merge. For impact velocities larger than \(v_{\rm mut,esc}\), no mass is transferred, and the bodies undergo a completely elastic bounce. Because the accretion outcome is all or nothing, this model should restrict growth more than a partial accretion model (Leinhardt & Stewart, 2012). Below, we will show that the bounce-merge model does not meaningfully affect the outcome of the inner disk's planetesimal accretion phase, and so a more realistic partial accretion model should do the same. 
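A schematic version of this two-outcome rule is sketched below; the vector bookkeeping (a center-of-mass merger below the threshold of equation (17), an elastic reflection of the normal velocity component above it) reflects our reading of the description above rather than the actual ChaNGa implementation.

```python
import numpy as np

G = 6.674e-8  # cgs

def resolve_collision(m1, r1, x1, v1, m2, r2, x2, v2):
    """Merge if the impact speed is below the mutual escape speed (eq. 17),
    otherwise perform a perfectly elastic bounce. Sketch only, not the ChaNGa code."""
    v_rel = v1 - v2
    n_hat = (x1 - x2) / np.linalg.norm(x1 - x2)            # line of centers
    v_imp = abs(np.dot(v_rel, n_hat))
    v_esc = np.sqrt(2.0 * G * (m1 + m2) / (r1 + r2))        # equation (17)

    if v_imp < v_esc:
        # Perfect merger: combined mass, center-of-mass position and velocity
        m = m1 + m2
        x = (m1 * x1 + m2 * x2) / m
        v = (m1 * v1 + m2 * v2) / m
        return ("merge", m, x, v)

    # Elastic bounce: reverse the normal component of the relative velocity
    mu = m1 * m2 / (m1 + m2)
    dv = 2.0 * mu * np.dot(v_rel, n_hat) * n_hat
    return ("bounce", v1 - dv / m1, v2 + dv / m2)
```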
To compare the outcome of the two collision models, we have chosen to use the initial conditions from the fdLo simulation, but have truncated the disk beyond 3 days in orbital period. This offsets the increased computational cost of the more restrictive collision model, while still allowing the disk to evolve in the region where Figure 12: In the top panel, we show the required feeding zone sizes to produce the masses of the bodies seen at the end of the fdHi and fdHi4 simulations. The bottom panel shows the variation of \(\alpha\) with orbital period for the bodies used in each case (solid curves). The orbital period at which \(\alpha\simeq 0.1\) matches well with the location at which \(\dot{b}\) exceeds \(2\sqrt{3}\) This is highlighted by the vertical dashed lines. The shaded region in the bottom panel show the values of alpha for realistic-sized (\(f=1\)) planetesimals with bulk densities between 10 and 1 g cm\({}^{-3}\). Figure 13: A comparison between the innermost region of the fdLo (orange) simulation, and a second version using a bounce- merge collision model (green). In the top panel, the period-eccentricity state of the particles is shown, with marker sizes indicating relative mass. The blue points represent the initial state of the simulations. The bottom panel compares the final differential mass distributions of the bodies. mergers would be most difficult to achieve. For the initial conditions we have chosen, the typical encounter velocity (defined by \(v_{\rm enc}=\left\langle e^{2}\right\rangle^{1/2}v_{\rm k}\), where \(v_{\rm k}\) is the local Keplerian velocity) is about 25 percent larger than \(v_{\rm mut,esc}\). Because the encounter velocities follow a Gaussian distribution, there should still be a small subset of collisions that still meet the merger criteria to occur early on. In addition, \(v_{\rm mut,esc}\) becomes larger as the bodies grow and the merger criteria should become easier to meet as the system evolves. For these reasons, one would expect the inhibition of growth due to the more restrictive collision model to be temporary. In figure 13, we compare the outcomes of the simulations, one with mergers only (shown in orange) and one with the bounce-merge model (shown in green). The blue points in the top panel show the initial conditions used for both cases. Although the bounce-merge simulation takes much longer to reach the same phase of evolution, the resulting orbital properties are indistinguishable from the merger-only case. Performing a Kolmogorov-Smirnov test on the two mass distributions yields a p-value of \(2\times 10^{-5}\), which tells us that the two mass distributions are quite firmly statistically different. If we remove the initial mass planetesimals, a KS test yields a p-value of 0.1, which suggests that the distributions are statistically similar. Because the remaining planetesimals only make up about 0.1 percent of the total mass of the disk, we conclude that the embryo populations are nearly indistinguishable, while the bounce-merge model produces a small amount of residual planetesimals. To investigate the differences in growth between the two collision models early on, we show the time evolution of the ratio between the maximum and mean mass in figure 14. In both cases, this ratio first increases, which indicates that runaway growth still operates, regardless of the collision model used. In the bounce-merge case, the mass ratio peaks at a higher value, while also undergoing a longer runaway growth phase. 
This suggests that the mass distribution becomes much less unimodal during this growth process, but as figure 13 shows, this does not affect the resulting embryos or allow for a residual planetesimal population. As a final note, Childs & Steffen (2022) found that a more realistic collision model also enhanced radial mixing in their simulations. Upon calculating the planetesimal accretion zones using the same method as was done to produce figure 11, we find that the embryos in the bounce-merge simulation annulus do have modestly wider accretion zones than those produced in the merger-only simulation. ## 7 Summary and Discussion In this work, we have demonstrated that planetary embryo growth can simultaneously operate in two distinct modes in a planet-forming disk. In the first mode, gravitational feedback from the growing embryos heats the remaining planetesimals and results in a dynamically cold population of embryos with a modest amount of residual planetesimals. This corresponds to the "oligarchic growth" case revealed by (Kokubo & Ida, 1998), which is often used as a starting point for late-stage accretion models (e.g. Kokubo & Ida (2002); Raymond et al. (2005, 2006)). In the second mode, the gravitational feedback does not play a significant role, embryos quickly sweep up all planetesimals, and grow larger and less uniformly spaced than those produced by oligarchic growth. We have demonstrated the outcome of both accretion modes through a simple parameter study using a narrow annulus of planetesimals (section 4). The initial planetesimal distribution can be described in terms of two dimensionless constants, \(\alpha\) and \(\beta\), which describe the ratio between the physical radius of the planetesimals and the Hill (\(r_{\rm h}\)) and gravitational (\(r_{\rm g}\)) radius, respectively. For a fixed planetesimal composition, \(\alpha\) scales with the Figure 14: The evolution of the ratio between the maximum and mean mass of the simulations shown in figure 13. In both cases, the system first evolves through a phase of runaway growth, before the massive bodies consume the smaller bodies, driving down the mean mass. With the bounce-merge model, the mass ratio takes much longer to begin decreasing. orbital period and \(\beta\) scales with the level of dynamical excitation of the disk. We showed that \(\alpha\ll 1\) leads to oligarchic growth, while an \(\alpha\) close to unity produces this newly revealed non-oligarchic growth mode (see figure 3). Within this non-oligarchic mode, we find that the resulting masses and eccentricities of the embryos come out very similar, regardless of the initial value of \(\beta\). So long as the density of the bodies do not significantly change as their mass distribution evolves, this ratio is set entirely by the distance from the star. Because both the physical and Hill radii of the bodies grows as \(M^{1/3}\), the growth mode boundary remains stationary in the disk during the planetesimal accretion process. We have verified that the growth boundary location does not strongly depend of the solid surface density distribution by testing the outcome of the planetesimal accretion process for a variety of solid profiles. Although altering the surface density does affect the resulting masses of the embryos, the location of the boundary separating the growth modes is remarkably similar among all of our simulations. 
In addition, the sizes of the feeding zones, along with qualitative differences in the accretion history of embryos on both sides of the boundary (see figures 10 and 11) provide further evidence to suggest that oligarchic growth is not operating in the inner disk. Finally, we have examined how our assumption of perfect accretion, along with the collision cross section enhancement used, might alter our results. We verified that these modifications, meant to make the simulations less computationally expensive, would still allow for the emergence of this non-oligarchic growth mode. We showed that a much more restrictive collision model, in which only low-velocity collisions produce a merger, still allows for this growth mode to occur at the innermost part of the disk, where encounter speeds are most vigorous. In a real planet-forming disk, partial accretion events should allow growth to happen more quickly than what was seen in this test case (see figure 8 of Leinhardt et al. (2015)), so this growth mode should certainly still occur. We also showed that the collision cross section enhancement moves the accretion boundary outward. We verified this by deriving a scaling relation between the boundary location and the bulk density of the planetesimals, and showing that the boundary moves to the predicted location when running a simulation with a slightly smaller inflation factor. For rocky planetesimals with a realistic bulk density, 3 g cm\({}^{-3}\), our results suggest that this boundary should lie around 5 days in orbital period. ### Connections to Satellitesimal Accretion To date, there have been no other studies of planetesimal accretion with such a large value of \(\alpha\). Typically, it is assumed that \(\alpha\ll 1\) (e.g. Lithwick (2014)), which is certainly true for material at and beyond the Earth's orbit. However, a value of \(\alpha=1\) corresponds to the Roche limit of a three-body system, and so one might wonder this high-\(\alpha\) accretion mode might be relevant for a circumplanetary accretion. There is a collection of previous works which use N-body methods to examine in-situ satellitesimal accretion (Ida et al., 1997; Richardson et al., 2000; Kokubo et al., 2000; Ida et al., 2020), although some of these simulations involve a complex interaction between spiral density waves formed inside of the Roche limit and the material exterior to it, making the dynamics driving accretion distinctly non-local, in contrast to what we have presented in this work. Ida et al. (1997) was able to form 1-2 large moons just exterior to the Roche limit, depending on the extent of the disk with very little satellitesimal material left over. The widest disk they modeled extended out to \(\alpha=0.5\). Qualitatively, this result is very similar to the short period planetesimal accretion mode observed in our simulations. Ida et al. (2020) modeled a much wider satellitesimal disk, which extends out to about \(\alpha\approx 0.05\). Inside to the \(\alpha=0.1\) accretion boundary (which lies near 15 planetary radii in figure 1 of Ida et al. (2020)), bodies grow beyond the isolation mass, while the opposite is true on the other side of the boundary. In addition, a residual population of satellitesimals is still present beyond the boundary, which suggests that oligarchic growth is indeed operating only on the far side. 
Presently, the implications that this non-oligarchic accretion mode has for the formation of short-period terrestrial planets, and whether the accretion boundary would leave any lasting imprint on the final orbital architecture, are unclear. The extreme efficiency of planetesimal accretion at the inner edge of the disk suggests that no residual populations of small bodies should be expected to exist here. A crucial point that our results do highlight is that the initial conditions used for most late-stage planet formation simulations are overly simplistic. Clement et al. (2020) recently simulated planetesimal accretion in a disk extending from the orbit of Mercury to the asteroid belt and found that the disk never reaches a state in which equally-spaced, isolation mass embryos are present everywhere simultaneously. Instead, different annuli reach a 'giant impact' phase at different times, preventing the onset of a global instability throughout the entire disk, as is common in classic terrestrial planet formation models (Chambers and Wetherill, 2001; Raymond et al., 2009). To connect these accretion modes to the final orbital architecture, and to ultimately determine what implications an in-situ formation model has for the growth of STIPs, we will evolve the final simulation snapshots presented here with a hybrid-symplectic integrator for Myr timescales. The final distribution of planets formed, along with composition predictions generated by applying cosmochemical models to our initial planetesimal distributions and propagating compositions through the collision trees, will be examined in a follow-up paper. ## Acknowledgements We would like to thank the anonymous referee for their thorough and careful comments which greatly improved the quality of this manuscript. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. SCW and TRQ were supported by National Science Foundation grant number AST-2006752. We acknowledge the people of the Dkhw'Duw'Absh, the Duwamish Tribe, the Muckleshoot Tribe, and other tribes on whose traditional lands we have performed this work. Astropy (Astropy Collaboration et al., 2013), ChaNGa (Jetley et al., 2008; Menon et al., 2015), Matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011), Pandas (Wes McKinney, 2010), PYNBODY (Pontzen et al., 2013) ## Appendix A: Robustness of timestepping scheme As described in section 3, ChaNGa evolves the motions of the particles in the planetesimal disk using a multi-tiered timestepping scheme. Due to the extremely short dynamical timescale at the inner edge of the disk, the outer disk would require a prohibitive number of timesteps to reach the protoplanet phase using a fixed timestep scheme. To circumvent this, particles are evolved in discrete power-of-two timestep groups. In the event that a collision occurs between two particles on different timesteps, a slight error is introduced to the energy and angular momentum of the merged particle. Due to the nonlinear nature of the runaway growth phase, this slight error tends to trigger more subsequent collisions at the timestep boundary in the disk, and causes protoplanets to preferentially form at the boundaries. To circumvent this issue, we prohibit particles on different timesteps from merging until the runaway growth phase is well underway. For the fdHi simulation, multi-tiered mergers are not allowed during the first thousand steps. 
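The power-of-two grouping can be illustrated with a toy rung-assignment routine. This is only a schematic of the idea; the accuracy parameter and example periods are placeholders and do not reproduce ChaNGa's actual implementation.

```python
def assign_rung(t_dyn, dt_base, eta=0.03, max_rung=30):
    """Place a particle on the largest power-of-two subdivision of dt_base that still
    resolves a fraction eta of its dynamical time (schematic only)."""
    rung = 0
    while dt_base / 2 ** rung > eta * t_dyn and rung < max_rung:
        rung += 1
    return rung            # particle is advanced with timestep dt_base / 2**rung

day = 86400.0
dt_base = 0.05 * 80 * day  # base step chosen for an outer-disk orbital period of 80 days
for period in (1, 2, 4, 10, 80):
    r = assign_rung(period * day, dt_base)
    print(f"P = {period:3d} d -> rung {r}, dt = {dt_base / 2 ** r / day:.4f} d")
```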
To verify that this technique does not alter the resulting protoplanet distribution in any meaningful way, we ran two test simulations of the inner part (1 to 4 days in orbital period) of the disk from the fdHi simulation. In the first case, the aforementioned timestepping scheme is used. In the second case, all particles are evolved on the timestep appropriate for the inner edge of the disk. In figure 15, we compare the final period-eccentricity state and final mass distributions to each other. There do not appear to be any differences between the two protoplanet distributions, particularly near the timestep boundary at 2 days. In addition, the masses of both the protoplanets and the remaining growing planetesimals are indistinguishable. In this case, a KS test of the two mass distributions yields a p-value of \(\sim 0.34\). We therefore conclude that the timestepping scheme used in this work does not alter the growth of the protoplanets in any meaningful way.
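The mass-distribution comparison quoted above is a standard two-sample Kolmogorov-Smirnov test; a sketch of the call is shown below, with synthetic placeholder masses standing in for the two simulation outputs.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Placeholder mass distributions standing in for the final particle masses of the
# multi-tiered run and the fixed-timestep run.
masses_tiered = rng.lognormal(mean=0.0, sigma=1.0, size=300)
masses_fixed = rng.lognormal(mean=0.05, sigma=1.0, size=300)

stat, pvalue = ks_2samp(masses_tiered, masses_fixed)
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A large p-value (the paper quotes ~0.34) gives no evidence that the two
# distributions differ.
```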
2304.10713
Axial correlation revivals and number factorization with structured random waves
We advance a general theory of field correlation revivals of structured random wave packets, composed of superpositions of propagation-invariant modes, at pairs of planes transverse to the packet propagation direction. We derive an elegant analytical relation between the normalized intensity autocorrelation function of thus structured paraxial light fields at a pair of points on an optical axis of the system and a Gauss sum, thereby establishing a fundamental link between statistical optics and number theory. We propose and experimentally implement a simple, robust analog random wave computer that can efficiently decompose numbers into prime factors.
Xin Liu, Chunhao Liang, Yangjian Cai, Sergey A. Ponomarenko
2023-04-21T02:54:36Z
http://arxiv.org/abs/2304.10713v1
# Axial correlation revivals and number factorization with structured random waves ###### Abstract We advance a general theory of field correlation revivals of structured random wave packets, composed of superpositions of propagation-invariant modes, at pairs of planes transverse to the packet propagation direction. We derive an elegant analytical relation between the normalized intensity autocorrelation function of thus structured paraxial light fields at a pair of points on an optical axis of the system and a Gauss sum, thereby establishing a fundamental link between statistical optics and number theory. We propose and experimentally implement a simple, robust analog random wave computer that can efficiently decompose numbers into prime factors. A spectacular self-imaging effect, first experimentally discovered by Talbot [1] and later theoretically explained by Lord Rayleigh [2], occurs whenever a one-dimensional spatially periodic structure is illuminated by a freely propagating paraxial optical field. As was shown by Rayleigh [2], the periodic field pattern formed by the structure undergoes periodic axial revivals at the distances that are multiple integers of the Talbot length, \(z_{T}=2d^{2}/\lambda\), where \(d\) is a spatial period of the field pattern at the source and \(\lambda\) is the wavelength of light. To date, the classic Talbot-like revivals have been extensively studied for coherent and random optical waves in free space [3; 4; 5; 6], linear graded media [7] and nonlinear optical systems [8]. Moreover, space-time duality [9] implies the existence of a temporal analogue of the spatial Talbot effect for a periodic train of optical pulses propagating in a dispersive fiber [10; 11; 12]. The spatial and temporal Talbot effects have found numerous applications to X-ray imaging [13], optical metrology and spectroscopy [14], as well as most recently to temporal cloaking [15]. Thanks to a mathematical analogy between the paraxial wave equation in optics and Schrodinger equation in quantum mechanics, Talbot recurrences have been explored in atom optics [16; 17; 18], Bose-Einstein condensates [19], as well as in \(C_{70}\) fullerene molecule [20] and Rydberg wave packet [21] interferometry. Lately, wave pattern revivals of structured fields have attracted interest [22], and novel manifestations of spatial [23], temporal [24] as well as space-time [25] Talbot effects have been discovered for space-time wave packets for which spatial and temporal frequencies of individual monochromatic waves, composing the packets, are tightly coupled [26]. Further, aperiodic field correlations of an uncorrelated superposition of two Bessel beams at pairs of planes, transverse to the beam propagation direction in free space, have been shown to undergo perfect revivals over the Talbot distance [27]. Here we develop a general theory of axial correlation revivals of structured random waves composed of superpositions of diffraction-free modes of any kind. Our theory enables us to derive a remarkably simple analytical relation between the normalized intensity autocorrelation function of structured paraxial random light fields at a pair of points on an optical axis of the system and an incomplete Gauss sum, thereby establishing a fundamental link between statistical optics and number theory. We employ the discovered link to advance and experimentally implement an efficient protocol to decompose even fairly large numbers into prime factors using random light. 
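For a sense of scale, plugging the beam parameters used later in this work (the He-Ne carrier wavelength \(\lambda=632.8\) nm and the transverse period \(d=0.4\) mm of Fig. 1) into \(z_{T}=2d^{2}/\lambda\) gives a revival distance of roughly half a metre:

```python
wavelength = 632.8e-9   # He-Ne carrier wavelength [m]
d = 0.4e-3              # transverse period d used in Fig. 1 [m]
z_T = 2 * d**2 / wavelength
print(f"z_T = {z_T:.3f} m")   # ~0.506 m
```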
Number factorization plays a prominent role in network systems and cyber security [28], as well as in optimization [29; 30]. It has also become a crucial ingredient to a host of promising physics-based protocols for information encoding [31], optical encryption [32], and all-optical machine learning [33]. Moreover, intriguing prime number links have emerged to quantum ladder states of many-body systems [34] and multifractality of light in photonic arrays undergirded by algebraic number theory [35]. Although quantum algorithms have enabled seminal advances in number factorization [36], the application of quantum mechanics to this problem requires the implementation of a complex quantum Hamiltonian, such that factoring even relatively small numbers can run into unexpected difficulties [37]. In addition, the entanglement of a large number of qubits is susceptible to pernicious decoherence effects [38]. Hence, the alternatives have been sought that rely on the physics of classical superpositions of coherent waves [39; 40; 41]. The latter, however, are very sensitive to external noise. In contrast, our protocol in volves classical random waves, which are robust against noise [42; 43], and its capacity is only limited by the pixel size of a spatial light modulator [44]. We consider a random wave packet and an associated ensemble of scalar random fields \(\{U\}\). In the space-frequency representation, the field of each ensemble realization can be expressed as \[U(\mathbf{r},z,\omega)=\sum_{\nu}a_{\nu}\Psi_{\nu}(\mathbf{r},z,\omega), \tag{1}\] where \(z\) and \(\mathbf{r}\) are the axial coordinate along and radius vector in the plane transverse to the packet propagation direction, respectively. Further, \(\{a_{\nu}\}\) is a set of uncorrelated random amplitudes which obey the second-order statistics as \[\langle a_{\nu}^{*}a_{\nu^{\prime}}\rangle=\lambda_{\nu}\delta_{\nu\nu^{ \prime}}. \tag{2}\] Here the angle brackets represent ensemble averaging and \(\{\lambda_{\nu}\geq 0\}\) specify powers of individual modes \(\{\Psi_{\nu}\}\). We now specify to a particular class of spatial modes, \[\Psi_{\nu}(\mathbf{r},z,\omega)=\psi_{\nu}(\mathbf{r},\omega)e^{i\beta_{\nu}( \omega)z}, \tag{3}\] and drop the irrelevant frequency variable \(\omega\) hereafter. We can infer from Eq. (3) that each mode, characterized by a propagation constant \(\beta_{\nu}\), defies diffraction. It follows that \(\{\Psi_{\nu}\}\) are either modes of a chaotic multimode (optical/acoustical/matter) waveguide, or \(\{\Psi_{\nu}\}\) belong to a special class of non-diffracting modes of the Helmholtz equation in free space. Next, we define the cross-spectral density of the ensemble at a pair of transverse planes \(z_{1}=const\) and \(z_{2}=const\) as \[W(\mathbf{r}_{1},z_{1};\mathbf{r}_{2},z_{2})=\langle U^{*}(\mathbf{r}_{1},z_{ 1})U(\mathbf{r}_{2},z_{2})\rangle. \tag{4}\] On substituting from Eqs. (1) through (3) into Eq. (4), we can readily conclude that \[W(\mathbf{r}_{1},\mathbf{r}_{2},\Delta z)=\sum_{\nu}\lambda_{\nu}\psi_{\nu}^{* }(\mathbf{r}_{1})\psi_{\nu}(\mathbf{r}_{2})e^{i\beta_{\nu}\Delta z}, \tag{5}\] where \(\Delta z=z_{2}-z_{1}\). The analysis of Eq. (5) reveals that the cross-spectral density of the field in any given transverse plane \(z_{1}=z_{2}=const\) remains the same due to the propagation invariance of the modes. Yet, even uncorrelated modes manifest axial dynamics due to the interference of phasors \(e^{i\beta_{\nu}\Delta z}\) when correlations in different transverse planes are examined. 
In particular, if the propagation constant is proportional to a polynomial in \(\nu\) with integer coefficients, \(\beta_{\nu}=(2\pi/z_{r})\sum_{s}c_{s}\nu^{s}\), \(c_{s}\in\mathcal{Z}\), and \(s\in\mathcal{N}\), \(W(\mathbf{r}_{1},\mathbf{r}_{2},\Delta z)\) self-images over multiples of a characteristic revival distance \(z_{r}\). This is a generic feature of axial correlations of the wave fields composed of discrete diffraction-free modes. Let us now focus on structured random sources generating uncorrelated superpositions of the so-called dark or antidark (DAD) diffraction-free beams of light, featuring dark notches or bright bumps against incoherent background [1; 8]. Instructively, the DAD beams have been recently shown to maintain structural stability even in random media [47]. We show (see Supplemental Material [44] for details) that the cross-spectral density of the DAD beam superposition ensemble at a given radial position \(r\) in any pair of transverse planes is given, up to an immaterial constant and overall phase factor, by the expression \[W_{\mathrm{DAD}}(\mathbf{r},\Delta z)\propto\sum_{m=1}^{M}[1\mp J_{0}(4\pi mr/d)]e^{-2\pi im^{2}\Delta z/z_{T}}, \tag{6}\] where \(J_{0}(x)\) is a Bessel function of the first kind and order zero, and \(-(+)\) on the r.h.s of the equation corresponds to a superposition of dark (antidark) beams, respectively. We display the average intensity and the magnitude of \(W_{\mathrm{DAD}}(r,\Delta z)\) in Fig. 1 for antidark (left panels) and dark (right panels) beams. We can observe in the figure that while the average intensity remains propagation-invariant, the two-plane correlations exhibit, in general, intricate dynamics revealing perfect periodic revivals. Specifically, we can infer from Eq. (S8) and observe in Fig. 1(c,d) that aperiodic DAD field correlations are perfectly reproduced at pairs of axial distances separated by multiple Talbot lengths. Further, two-point angular correlations of the ensemble of DAD beam superpositions show even more intriguing Talbot carpets, replete with full and fractional Talbot revivals [44]. Figure 1: Left/right: (Propagation-invariant) average intensity (a/b) and the magnitude of the cross-spectral density (c/d) of an ensemble of antidark/dark beam superposition, evaluated at a pair of points with coordinates \((\mathbf{r},z_{1})\) and \((\mathbf{r},z_{2})\), \(\Delta z=z_{2}-z_{1}\), using Eq. (S8) with \(M=5\) and \(d=0.4\) mm. At the multiples of the classic Talbot distance \(z_{T}\), aperiodic DAD field correlations are perfectly reproduced. Instructively, we can introduce the second-order degree of coherence of the fields [5] at a pair of points \((\mathbf{r},z_{1})\) and \((\mathbf{r},z_{1}+\Delta z)\) as \[\mu_{\mathrm{DAD}}(\mathbf{r},\Delta z)=\frac{W_{\mathrm{DAD}}(\mathbf{r},\Delta z)}{W_{\mathrm{DAD}}(\mathbf{r},0)}. \tag{7}\] It follows at once from Eqs. (S8) and (S10) that provided \(\Delta z=nz_{T}\), \(n\in\mathcal{N}\), the fields are perfectly coherent parallel to the optical axis at the transverse planes separated by multiple revival distances, \(|\mu_{\mathrm{DAD}}(\mathbf{r},nz_{T})|=1\). This is a statistical signature of Talbot correlation revivals which holds for random wave packets composed of any propagation-invariant modes \(\{\Psi_{\nu}\}\). The presence of a quadratic exponential factor in Eq. (S8) hints at the possibility of employing correlations of structured random light fields to factor numbers. 
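A quick numerical check of Eqs. (6) and (7), sketched below with the same \(M=5\) and \(d=0.4\) mm as Fig. 1 and the \(z_{T}\) estimated earlier, confirms the perfect revivals \(|\mu_{\mathrm{DAD}}(\mathbf{r},nz_{T})|=1\); the radial position is an arbitrary illustrative value.

```python
import numpy as np
from scipy.special import j0

def w_dad(r, dz, M=5, d=0.4e-3, z_T=0.506, antidark=True):
    """Eq. (6) up to a constant; '+' in front of the Bessel term for antidark beams,
    '-' for dark beams."""
    sign = 1.0 if antidark else -1.0
    m = np.arange(1, M + 1)
    return np.sum((1.0 + sign * j0(4 * np.pi * m * r / d))
                  * np.exp(-2j * np.pi * m**2 * dz / z_T))

def mu_dad(r, dz, **kw):
    """Eq. (7): degree of coherence between the planes z and z + dz."""
    return w_dad(r, dz, **kw) / w_dad(r, 0.0, **kw)

r = 0.1e-3
for n in (1, 2, 3):
    print(f"|mu(r, {n} z_T)| = {abs(mu_dad(r, n * 0.506)):.6f}")    # -> 1.000000
print(f"|mu(r, 0.37 z_T)| = {abs(mu_dad(r, 0.37 * 0.506)):.3f}")    # generally < 1
```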
Indeed, we can establish a link between the normalized intensity autocorrelation function of the peak intensities of antidark (AD) beams \(I_{\mathrm{AD}}\), evaluated at different positions along the optical axis, and an incomplete Gauss sum of number theory. To this end, we define the former as \[g_{\mathrm{AD}}^{(2)}(0,\Delta z)=\frac{\left\langle I_{\mathrm{AD}}(0,z+ \Delta z)I_{\mathrm{AD}}(0,z)\right\rangle}{\left\langle I_{\mathrm{AD}}(0,z )\right\rangle^{2}}, \tag{8}\] and the latter as \[\mathcal{G}_{N}^{(M)}(p)=\frac{1}{M}\sum_{m=1}^{M}e^{-2\pi im^{2}N/p}, \tag{9}\] and derive the following fundamental relation (see Supplemental Material [44] for details): \[|\mathcal{G}_{N}^{(M)}(p)|^{2}=g_{\mathrm{AD}}^{(2)}(0,z_{T}N/p)-1. \tag{10}\] In Eqs. (9) and (S19), \(N\) is a number that we seek to factor and \(p\), \(p\in\mathcal{N}\), are trial prime integers that may or may not be factors of \(N\). The left-hand side of Eq. (S19) equals to unity whenever \(p\) is a factor of \(N\) and it oscillates rapidly, taking on small values otherwise. Remarkably, we have revealed a link between a number theoretic quantity, the incomplete Gauss sum and a fundamental measurable quantity of statistical optics, the intensity autocorrelation function of light fields. It follows that we can sample \(\mathcal{G}_{N}^{(M)}\) at discrete points by evaluating from experimental data (normalized) intensity-intensity correlations of light at pairs of points separated by the interval \(\Delta z=z_{T}N/p\) along the optical axis of the system. Next, we validate the proposed protocol by carrying out a number factorization experiment. In our experiment, a collimated, linearly polarized quasi-monochromatic beam of carrier wavelength \(\lambda=632.8\)nm, emitted by a He-Ne laser, illuminates a phase-only spatial light modulator (SLM) (Meadowlark Optics, \(1920\times 1200\)\(8\,\mu\)m\({}^{2}\) pixels). To structure random light beams, we encode the desired field distributions into holograms using complex amplitude encoding. The sought beams are then associated with the first diffracted order from the SLM; we refresh speckle patterns by refreshing the holograms, see Supplemental Material [44] for further details. In Fig. 2 we present two typical ensemble representations (speckle patterns of instantaneous intensities) of light at the two axial distances corresponding to \(p=3\) (top row, \(p\) being a factor of \(N\)) and \(p=4\) (bottom row) with \(M=5\) and \(N=1155\). We average over an ensemble of 2000 speckle patterns to evaluate the normalized intensity autocorrelation functions \(g_{\mathrm{AD}}^{(2)}(\mathbf{r},z_{T}N/p)\) with the help of Eq. (S16) of the Supplemental Material [44]. We can show analytically--see the Supplemental Material [44]--that whenever \(N=np\), \(n\in\mathcal{N}\), \(g_{\mathrm{AD}}^{(2)}(\mathbf{r},Nz_{T}/p)=2\) at any transverse location \(\mathbf{r}\), implying that \(|\mathcal{G}_{N}^{(M)}(p)|^{2}=1\) in Eq. (S19). This conclusion is well supported by our experimental results of Fig. 2(c) wherein slight deviations from the theory can be further suppressed by increasing the size of the ensemble. To demonstrate the capability of our protocol to factor large numbers, we show theoretical and experimental results, corresponding to left- and right-hand sides of Eq. (S19), respectively, as functions of \(p\) in Fig. 3, which are marked by red squares (theory) and blue dots (experiment). In Fig. 
3(a), a relatively small number to be factorized is composed of four adjacent primes (\(N=1155=3\times 5\times 7\times 11\)) whereas Fig. 3(b) exhibits the factorization of a large number composed of prime factors that are far apart from one another: \(N=570203=73^{2}\times 107\). The results clearly demonstrate that \(|\mathcal{G}_{N}^{(M)}(p)|^{2}\) attains unity (within the experimental accuracy) for all prime factors of \(N\). Figure 2: (a) (b) (d) Speckle patterns (instantaneous intensity) of the structured random light (AD beam with \(M=5\) and \(N=1155\)) and (c) the resulting normalized intensity autocorrelation functions. We choose \(p=3\) in (a)(b)(c) and \(p=4\) in (a)(d)(e), respectively. In the former case, \(p\) is a factor of \(N\), whereas it is not the case in the latter case. The normalized intensity autocorrelation functions \(g_{\mathrm{AD}}^{(2)}(\mathbf{r},z_{T}N/p)\) in (c) and (e) are evaluated by averaging over an ensemble of 2000 speckle patterns with the aid of Eq. (S16) of the Supplemental Material [44], which reduces to Eq. (8) on the optical axis. Further, the factors and non-factors of \(N\) are clearly discriminated by the threshold value of \(1/\sqrt{2}\), as anticipated [49]. Next, to ensure the accuracy of number factorization, we demonstrate that our protocol is able to detect spurious factors, the ghost factors such that the magnitude of the incomplete Gauss sum can attain values close to unity. To suppress any ghost factor below the threshold value, the magnitude of \(M\) must be chosen judiciously. Previous work on number factorization with incomplete Gauss sums suggests the criterion \(M\approx 0.659\sqrt[4]{N}\) for a given \(N\) [49]. To illustrate ghost number suppression in our protocol, we focus on a large number \(N=186623=431\times 433\), composed of two twin primes, and analyze the ghost factor \(432\). In Fig. 4, we display the theoretical and experimental results for \(|\mathcal{G}_{(431\times 433)}^{(M)}(432)|^{2}\) as functions of \(M\). We can infer from the figure that both curves fall off with \(M\), exhibiting slight oscillations at their large \(M\) tails. The curves intersect the horizontal line \(|\mathcal{G}|^{2}=1/\sqrt{2}\) at \(M=11.1\), implying that ghost factors are suppressed provided \(M\geq 12\). To verify this conclusion and impart intuition, we also plot \(|\mathcal{G}_{(431\times 433)}^{(12)}|^{2}\) as a function of \(p\) evaluated for \(M=12\) in the inset to the figure. We can readily conclude from the inset that the ghost factor \(432\) is clearly suppressed and only \(431\) and \(433\) are eligible prime factors of \(186623\). In conclusion, we have advanced a general theory of axial correlation revivals of structured random waves composed of superpositions of propagation-invariant modes. We have employed the developed theory to establish a fundamental link between statistical optical physics and number theory and have proposed and demonstrated a protocol to decompose numbers into prime factors using random light. It is worth noting that, to our knowledge, all previously reported classical physics inspired number factorization techniques relied on coherent superpositions of waves [39; 40; 41; 50]. The strict control of wave phases was achieved by employing carefully controlled sequences of nuclear spins [39], cold atoms [40], or femtosecond optical pulses [41], which, in turn, called for sophisticated experimental apparatuses [39; 40; 41] and cryogenic temperatures [40]. 
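The factorization criterion and the ghost-factor suppression can be checked numerically in a few lines; the sketch below simply evaluates Eq. (9) for the examples discussed above, with trial values up to \(\sqrt{N}\) and \(M\) chosen as in the text.

```python
import math
import cmath

def gauss_sum_sq(N, p, M):
    """|G_N^(M)(p)|^2 from Eq. (9); exactly 1 whenever p divides N."""
    s = sum(cmath.exp(-2j * math.pi * m * m * N / p) for m in range(1, M + 1)) / M
    return abs(s) ** 2

def passing_trials(N, M, threshold=1 / math.sqrt(2)):
    """Trial integers p <= sqrt(N) whose truncated Gauss sum clears the threshold."""
    return [p for p in range(2, math.isqrt(N) + 1) if gauss_sum_sq(N, p, M) > threshold]

print(passing_trials(1155, M=5))           # [3, 5, 7, 11, 15, 21, 33]: only divisors survive
print(gauss_sum_sq(186623, 431, M=12))     # true factor  -> 1.0
print(gauss_sum_sq(186623, 432, M=12))     # ghost factor -> ~0.63, below 1/sqrt(2) ~ 0.707
```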
In contrast, our protocol requires only commercially available, table-top optical components and it employs cw random light, which is quite robust to source or environmental noise. It is then noteworthy that without making any attempt to optimize our procedure, we have been able to factor numbers as large as \(N=570203\) which is, at least, on par with current achievements by NMR (\(N=157573\)) [39] and cold atom (\(N=263193\)) [40] based protocols and an order of magnitude larger than the greatest number (\(N=19043\)) that has been factorized with coherent sequences of femtosecond pulses to date [41]. Moreover, the largest number amenable to factorization with coherent cw light beams has been a mere 27 [50], so that our protocol enables an improvement of four orders of magnitude. As a matter of fact, the upper bound on the number that can be decomposed into primes with our protocol is set by the pixel size of a commercial SLM employed in our experiment (see Supplemental Material for further details [44]). Figure 3: Experimental number factorization with structured random light beams. The relevant parameters are as follows: (a) \(N=1155=3\times 5\times 7\times 11\) and \(M=5\), (b) \(N=570203=73^{2}\times 107\) and \(M=20\). The theoretical and experimental results are obtained from Eqs. (9) and (S19), respectively. Figure 4: Experimental analysis of ghost factor \(p=432\) suppression in the factorization of \(N=186623=431\times 433\) as a function of the number of modes \(M\). Inset: Experimental results of factoring \(N=186623\) with \(M=12\). As the emerging nanophotonics technology for structuring random light [51] matures, we anticipate further rapid advances in this direction. In addition, we are confident that similar number factorization protocols, undergirded by the fundamental principles that we have outlined above, can be developed to operate with noisy acoustical or matter waves to be structured with the aid of the appropriate metasurfaces [52; 53]. The authors acknowledge financial support from the National Key Research and Development Program of China (2022YFA1404800, 2019YFA0705000), National Natural Science Foundation of China (12004220, 11974218, 12192254, 92250304), Regional Science and Technology Development Project of the Central Government (YDZX20203700001766), the China Postdoctoral Science Foundation (2022T150392), and the Natural Sciences and Engineering Research Council of Canada (RGPIN-2018-05497).
2306.07402
The economic trade-offs of large language models: A case study
Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. Large Language Models (LLMs) are a natural fit for this use case; however, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model's utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM - prompt engineering, fine-tuning, and knowledge distillation - using feedback from the brand's customer service agents. We find that the usability of a model's responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space.
Kristen Howell, Gwen Christian, Pavel Fomitchov, Gitit Kehat, Julianne Marzulla, Leanne Rolston, Jadin Tredup, Ilana Zimmerman, Ethan Selfridge, Joseph Bradley
2023-06-08T20:35:53Z
http://arxiv.org/abs/2306.07402v1
# The economic trade-offs of large language models: A case study ###### Abstract Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. Large Language Models (LLMs) are a natural fit for this use case; however, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model's utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM - prompt engineering, fine-tuning, and knowledge distillation - using feedback from the brand's customer service agents. We find that the usability of a model's responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space. ## 1 Introduction Amidst increased automation, human agents continue to play an important role in providing excellent customer service. While many conversations are automated in text-based customer support, others are routed to human agents who can handle certain customer concerns more effectively. Agents often handle multiple conversations at once, consulting customer account information and brand policies while maintaining these conversations. As agents are expensive to staff, many companies are seeking ways to make their work more efficient. LivePerson's Conversation Assist,1 illustrated in Figure 1, accelerates agents by automatically generating suggestions that the agent can either send, edit and then send, or ignore. Conversation Assist can both reduce agent response time and improve response quality, as a well-trained model may provide more consistent, higher quality responses than inexperienced agents or agents adversely impacted by external factors. These benefits lead to greater cost savings and increased customer satisfaction (CSAT) scores, not to mention providing a supervisory mechanism that is critical for brand control and model improvement. Footnote 1: [https://developers.liveperson.com/conversati](https://developers.liveperson.com/conversati) Large Language Models (LLMs) are a natural fit for this technology, as they have achieved high performance on response generation tasks (Adiwardana et al., 2020; Hosseini-Asl et al., 2020; Zhang et al., 2020, inter alia), but they are expensive to train and serve. For example, the inference cost for each response using a distilled GPT-2 model and an Nvidia A100 GPU is ¢0.0011,2 while the inference cost using the GPT-3-based Davinci model through OpenAI's API is ¢1.10 (OpenAI, 2023b).3 Footnote 2: We found the Nvidia A100 GPU to be the most inexpensive option, with an Nvidia V100 GPU costing ¢0.0019. Footnote 3: Assuming a context and response length of 550 tokens. LLM economics and enterprise applications are highly fluid. First, individual partnership agreements may differ from the published API cost, and the rapid pace of innovation in the space will necessarily impact the cost of training and serving these models. Second, as brands vary widely, a useful agent assistance model must be customized to the brand's use case and performance requirements. 
We propose a simple and flexible cost framework that can be applied to various LLM and brand scenarios. This framework, Expected Net Cost Savings (ENCS), combines the probability and cost savings of an agent accepting or editing a response with the cost of generating the response. ENCS can be applied at the message level or in the aggregate. With one brand as a case study, we explore ENCS with various methods of model customization. Using feedback from the brand's customer service agents, we evaluated fine-tuning, prompt engineering, and distillation to adapt and optimize GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020; OpenAI, 2023a), and Coherence (Cohere, 2023a). These strategies can lead to an agent usage rate of 83% (including both direct use and editing) and an annual cost savings of $60,000 for our case-study brand - 60% of their total agent budget. We generalize this case study to a broader range of brands and models. We find that low perplexity correlates with the probability that an agent will use a response, and we extrapolate from this finding to use perplexity to estimate the ENCS for additional model customization strategies. We apply ENCS to each configuration, and while models, prices, and use cases will change over time, we expect that this framework can be continuously leveraged for decision making as technology evolves. ## 2 Related Work Transformers (Vaswani et al., 2017) have dominated response generation tasks: Dialog GPT (Zhang et al., 2020), Meena (Adiwardana et al., 2020), soloist(Peng et al., 2021), BlenderBot (Roller et al., 2021), PLATO-XL (Bao et al., 2022), LaMDA (Thoppilan et al., 2022), godel(Peng et al., 2022). Each of these approaches fine-tunes a large pre-trained LM to task-oriented dialog or chit chat using curated dialogs. In some cases, additional tasks, such as the discriminative training tasks of Thoppilan et al. 2022, are also used. When data is not available for fine-tuning, prompting with a single example has proven quite effective (Min et al., 2022), and for large enough models, prompting that demonstrates breaking tasks into discrete components (Wei et al., 2022) has performed on par with fine-tuned models (Chowdhery et al., 2022). The size of these LLMs plays a significant role in their high performance (Chowdhery et al., 2022), but in a deployed setting, this size can be quite costly. Quantization (Whittaker and Raj, 2001; Shen et al., 2020), pruning (Han et al., 2015, 2016) and knowledge distillation (Hinton et al., 2015; Sanh et al., 2019) are common strategies for size reduction with minimal impact to performance. Here we focus specifically on distillation using a language modeling task to reduce model size while simultaneously adapting the model to the data following Ryu and Lee (2020) and Howell et al. (2022). Response generation is difficult to evaluate holistically. Some have focused on relevance and level of detail (Zhang et al., 2020; Adiwardana et al., 2020; Thoppilan et al., 2022), humanness (Zhang et al., 2020; Roller et al., 2021) and overall coherence or interestingness (Bao et al., 2022; Thoppilan et al., 2022). In contrast, we follow Thoppilan et al. (2022) and Peng et al. (2022) who consider helpfulness and usefulness as broader measures of response quality, but we ground these judgements in the customer service use case by having real agents judge the usefulness of model outputs. 
## 3 Expected Net Cost Savings (ENCS) ENCS combines model performance, model cost, and agent cost: If an agent saves time by using a model's response, then there is a cost savings. Figure 1: Conversation Assist as a system that returns can responses (left), compared with the product described in this paper, which generates suggestions from LLMs (right). More formally, ENCS is defined as the probability that a response is used (\(P(U)\)) multiplied by the savings in dollars for each used response (\(S_{U}\)), less the cost of generating that response (\(C\)), as in (1). (1) \(ENCS=P(U)*S_{U}-C\) Because agents are not limited to simply using a response as-is but may also choose to edit the response or ignore it altogether, equation (1) may be modified to account for the probability and savings associated with editing (\(P(E)\) and \(S_{E}\)) or the cost of ignoring (\(P(I)\) and \(S_{I}\))4 as well: Footnote 4: In most cases, \(S_{I}\) is a negative number, as reading a response and choosing not to use it would cost time and money. (2) \(ENCS=((P(U)*S_{U})+(P(E)*S_{E})+(P(I)*S_{I})-C\) We can estimate \(S\) from the agent's hourly rate (\(R\)), the average time it takes for agents to respond to a message without Conversation Assist (\(T_{r}\)), and the amount of time an agent spends for each accepted, edited, or ignored message (\(T_{x}\)). (3) \(S_{x}=R(T_{r}-T_{x})\) Figure 2 provides a toy example of this calculation. ### Simplifying Assumptions This model makes a number of simplifying assumptions. We assume that agents always have conversations to respond to or some other work to do. We exclude the problem of workforce optimization from our framework, noting that when fewer agents are needed to handle the conversational traffic, workforce can be reduced. We also exclude R&D cost, but return to this factor in section 5. Furthermore, we omit any discussion of the cost of an agent using an inappropriate or factually incorrect response. For the purposes of this model, we assume that agents read all suggestions carefully, but a deeper analysis of the risk and cost of these errors is a critical area for further study. ## 4 Case Study We focus on a single brand to evaluate the use of LLMs for Conversation Assist and explore the application of ENCS for making product decisions. We evaluate three model customization strategies using manual ratings from brand agents. We then evaluate how well these ratings relate to perplexity and use this to assess a larger set of models. Finally, we estimate ENCS and discuss the implications. ### Case Study Brand We partnered with a single brand, who we will refer to as Anonymous Retailer (AR), for this case study. AR's customer base includes both consumers and sellers who consign items through AR's platform. Because AR's agents are trained across different customer concern categories, they can provide expert feedback on a wide range of data. At the time of writing, AR has about 350 human agents who use LivePerson's chat platform. AR supports about 15,000 conversations per month, and uses chat bots for simple tasks and routing, while their human agents send 100,000 messages per month on average. In comparison, the average number of conversations per month for brands on LivePerson's platform is 34,000, with a median of 900 monthly conversations per brand and a standard deviation of 160. ### Data sets We constructed three datasets: brand-specific training, brand-specific test, and general training. 
We de-identified data, replacing each entity with a random replacement. For the test set, we manually ensured that the de-identification was internally consistent across the conversation for agent and consumer names, addresses, and order numbers. The brand-specific data comprises English customer service conversations from 2022 that include human agent and bot messages. We filtered these conversations to ensure that they had at least two agent turns, more human agent than bot messages, and a positive Meaningful Conversation Score.5 Footnote 5: For more information on Meaningful Conversation Score, we can use the same notation as in the previous section. Figure 2: A toy example of an ENCS calculation. From this filtered data, we randomly sampled 100,059 conversations to make up our training set. From the remainder, we curated a brand-specific test set by manually selecting 287 conversations where the customer's goal could be clearly established from the context of the conversation. We constructed the general training set from five additional retail brands whose product lines fall into similar categories as AR. We filtered and processed the data using the method described above and selected 70,000 conversations per brand, or used the entirety of the brand's data if there were fewer than 70,000 conversations. The total size of the general training set is 236,769 conversations. For more details on these datasets, see Appendix D. ### Model Customization We explored three standard model customization strategies: prompt engineering, fine-tuning, and knowledge distillation. Using these strategies, we tested eleven configurations (Table 1). We evaluated three of these configurations with the judgements of AR agents, and for the remainder we extrapolated usability scores from the model's perplexity over the test set. #### 4.3.1 Prompt Engineering Gpt-3We prompted the text-davinci-003 GPT-3 model (OpenAI, 2023), following OpenAI's best practices for prompt engineering (Shieh, 2022). After some experimentation, we found that the most effective prompt for our use case (Figure 3) used a hand-constructed exemplar conversation and explicitly instructed the model to generate a response that would address the consumer's issue. CohereFollowing Coherence's best practices (Cohere, 2023), we tested both verbose and concise prompts with the XLarge Coherence model (Cohere, 2023). Unlike GPT-3, we found that using a prompt without an exemplar conversation (Figure 3) resulted in better performance. #### 4.3.2 Fine-Tuning Gpt-2We fine-tuned GPT-2 (Radford et al., 2019) using a language modeling task over conversational data on either the brand-specific dataset or the general dataset described in section 4.2. We started with a learning rate of 0.00008 with a linear scheduler and no warm up steps and trained until perplexity plateaued. Gpt-3We fine-tuned the text-davinci-003 GPT-3 model from OpenAI on a conversational prompt-completion task using instructions and an exemplar conversation as the prompt and the human-agent response as the output. The dataset consisted of 50 random examples from the brand-specific training set. CohereWe fine-tuned Cohere's XLarge model with Cohere's API (Cohere, 2023) and a random subset of 50 conversations from the brand-specific dataset. We tested verbose and concise prompts as well as eos token placement, and found that a shorter prompt with an eos token after each turn worked best. 
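For reference, a fine-tune of the kind described above can be set up with the Hugging Face Transformers `Trainer`. The sketch below is a generic causal-language-modeling recipe, not the authors' exact pipeline; the data file, sequence length, batch size and epoch count are placeholders, and only the 0.00008 learning rate and linear scheduler follow the text.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder data file: one de-identified conversation per line, turns concatenated as text.
data = load_dataset("text", data_files={"train": "brand_conversations.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

args = TrainingArguments(output_dir="gpt2-brand-ft",
                         learning_rate=8e-5,           # the 0.00008 quoted above
                         lr_scheduler_type="linear",
                         warmup_steps=0,
                         per_device_train_batch_size=4,
                         num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=data,
                  data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```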
#### 4.3.3 Distillation To reduce latency and cost to serve by almost half, we distilled our fine-tuned GPT-2 models using the Transformers library (Sanh, 2023), following the method set forth by Sanh et al. (2019) and the language modeling training task of Radford et al. (2019). For distillation, we used either the brand \begin{table} \begin{tabular}{l||c c c|c c c|c c} & \multicolumn{2}{c}{**Fine-tuning**} & \multicolumn{2}{c}{**Distillation**} & \multicolumn{2}{c}{**2nd Fine-tuning**} \\ **Model Name** & **Dataset** & **\# Convs** & **\# Steps** & **Dataset** & **\# Convs** & **\# Steps** & **Dataset** & **\# Convs** & **\# Steps** \\ \hline \hline GPT-2 bft bd\({}^{*}\) & brand & 100,059 & 15,000 & brand & 100,059 & 67,014 & & & \\ Cohere pe\({}^{*}\) & & & & & & & & \\ GPT-3 pe\({}^{*}\) & & & & & & & & \\ GPT-2 bft & brand & 100,059 & 15,000 & & & & & \\ GPT-2 bft bf bf & brand & 100,059 & 34,000 & brand & 100,059 & 67,014 & brand & 100,059 & 15,000 \\ GPT-2 gft bdft & general & 236,769 & 34,000 & brand & 100,059 & 67,014 & brand & 100,059 & 28,000 \\ GPT-2 gft gd bft & general & 236,769 & 34,000 & general & 236,769 & 1,264,352 & brand & 100,059 & 28,000 \\ GPT-2 XL gft gd bft & general & 236,769 & 120,000 & general & 236,769 & 1,264,352 & brand & 100,059 & \\ Cohere ft & brand & 50 & & & & & & \\ GPT-3 bft & brand & 50 & 4 epochs & & & & & \\ \end{tabular} \end{table} Table 1: Model adaptation configurations. \({}^{*}\) indicates that this model’s outputs were manually evaluated. _bft = fine-tuned on AR brand data, gft = fine-tuned on the general dataset, bd = distilled using AR brand data, gft = distilled using the general dataset, pe = prompt engineered._ specific or the general dataset. We started with a learning rate of 0.0005 using a linear scheduler and trained for 3 epochs. Because the OpenAI and Cohere API's do not make the logits of the whole vocabulary available at inference, we are unable to distill these models using Sanh et al.'s methodology. ### Metrics and Results #### 4.4.1 Response Usability While previous work has assessed the helpfulness or usability of a response with crowd-sourced judgments (Thoppilan et al., 2022; Peng et al., 2022), we worked with nine agents at AR who already use our Conversation Assist product. For each conversation and suggested response, we asked them whether they would use the suggested response as-is; edit it to change specific details, add to it, or remove parts of it; or ignore the suggestion altogether. The full annotation instructions are given in Appendix E. Table 2 shows annotated Response Usability (RU) scores for three models. Even when shown the response that an AR agent had actually used in the conversation (human), agents said that they would use this response only 77% of the time and would ignore it 10% of the time. This indicates a high level of personal preference among the agents, and sets a noteworthy upper limit on the usability we could expect from model outputs. Agents said that they would use the GPT-3 pe suggestion 69% of the time compared with GPT-2 bft bd and Cohere pe at only 57% and 58%, respectively. As the use rate increases, the edit rate and ignore rates both decrease, indicating that conversations resulting in editable prompts for some models can result in usable prompts for another model. We also note, that while the use rate was similar for GPT-2 bft bd and Cohere pe, the edit rate was much higher for cohere, highlighting the importance of assessing the cost savings of an editable response vs. 
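Equation (4) is simply the exponential of the negative mean token log-likelihood, which is how it would be computed from a model's per-token log-probabilities; the numbers below are made up for illustration.

```python
import math

def perplexity(token_logprobs):
    """Eq. (4): the N-th root of 1/P(w_1..w_N), i.e. exp of the negative
    mean per-token log-probability of the generated response."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Made-up per-token natural-log probabilities for one suggested response.
logprobs = [-0.9, -2.1, -0.4, -1.3, -0.7, -3.0, -0.2]
print(round(perplexity(logprobs), 2))   # 3.42
```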
ignoring the response entirely. We also annotated these conversations for the Foundation Metrics in Thoppilan et al. 2022 and found a correlation between responses that were sensible, specific and role-consistent and those that the agents said they would use. Detailed analysis of these labels and their correlation are in Appendix G. This additional annotation revealed that, of the three models, GPT-2 bft bd was most likely to generate a consumer turn rather than an agent turn or to generate a turn that was not relevant to the conversation, which may account for its high ignore rate. We also note that virtually all responses generated by the three models were labeled'safe' by the annotators. #### 4.4.2 Perplexity Adiwardana et al. (2020) found that sensibleness and specificity corresponded with the model's perplexity, inspiring us to use perplexity to extrapolate our manual evaluation of three models to a broader set of model configurations. After reproducing Adiwardana et al.'s finding for sensibleness and specificity using our data (see Appendix J), we investigated the correlation between perplexity and response usability. For each conversation context in the evaluation set, we calculate the perplexity for the generated response for each LLM using the average log likelihood of each token, following equation (4). \[PP(W)=\sqrt[N]{\frac{1}{P(w_{1},w_{2},...,w_{N})}} \tag{4}\] \begin{table} \begin{tabular}{l|c c c} **Model Name** & **\%Ignore** & **\%Edit** & **\%Use** \\ \hline \hline Human & 10 & 12 & 77 \\ GPT-2 bft bd & 28 & 16 & 57 \\ Cohere pe & 22 & 20 & 58 \\ GPT-3 pe & **17** & 14 & **69** \\ \end{tabular} \end{table} Table 2: The percentage of responses that agents said they would use, edit, or ignore. Five agents annotated each conversation, judgements are counted individually. Figure 3: Prompts used for Cohere and GPT-36 Using all annotated LLMs' suggested responses across all conversations in the evaluation set, we fit a set of linear regression models using the perplexity of the generated agent turn as our independent variable, and the probability of use, edit, and ignore as our dependent variables. Individual linear models trained on the output of a single LLM did not show statistical significance; however, models trained on the output of all LLMs did show significance in the F-statistic (p < 0.05 for P(edit), p < 0.001 for P(use) and P(ignore)). Extrapolating from these linear models allows us to illustrate potential cost savings for more models than we were able to annotate. These linear models predict the RU scores in Table 3. #### 4.4.3 Expected Net Cost Savings (ENCS) We calculate the ENCS for each model using equation (2), repeated here in (5). \[\begin{array}{ll}ENCS=((P(U)*S_{U})+(P(E)*S_{E})+(P(I)*S_{I})-C\\ \end{array}\] \(P(U)\), \(P(E)\), and \(P(I)\) are the frequency with which the LLM's response was accepted, edited, or ignored in the test set. \(S_{U}\), \(S_{E}\), and \(S_{I}\) are calculated assuming that an agent costs $10.00 per hour and averages 30 seconds per message without Conversation Assist. With Conversation Assist, we assume that the agent saves 25 seconds for each accepted response, 20 seconds for each edited response and spends an extra 5 seconds for each ignored response. 
We also assume that each response costs ¢0.002 to generate for a GPT-2 model, ¢0.0011 for a distilled GPT-2 model, ¢1.09 for the base model and ¢6.54 for a fine-tuned model through OpenAI's API and ¢0.25 for the base model and ¢0.50 for a fine-tuned model through Cohere's API.8 Footnote 8: We estimate GPT-2’s cost based on a latency of 19.57 milliseconds per inference for the full-sized model and 11.60 ms for the distilled model, and a cost of $3.53 per hour renting an Nvidia A100 GPU from GCP for 8 hours a day. OpenAI and Cohere’s API costs come from OpenAI 2023b and Cohere 2023b at the time of writing. Using the RU scores in Tables 2 and 3, we estimate that AR's cost savings per message would be ¢4.47 using the GPT-2 bft bd model compared with ¢4.24 using GPT-3 pe, as detailed in Table 4. ENCS per year is calculated based on AR's annual agent message volume of 1,200,000. The factor with the largest impact on AR's cost savings is the usefulness of the predictions, as the best annotated model (GPT-3 pe)'s predictions are used or edited only 5% more often than the fastest (GPT-2 bft bd), while its cost was almost 1,000 times higher (¢1.09 vs ¢0.0011). Despite this, the difference in ENCS between these two models is minimal and only amounts to about $3k per year. In general, the RU and ENCS are higher for the extrapolated results, which are somewhat less reliable, but they lead to one important insight: in this case, the inference cost for a fine-tuned GPT-3 model is too high for the customer to realize savings. \begin{table} \begin{tabular}{l|c c} **Model Name** & **ENCS/message** & **ENCS/year** \\ \hline \hline GPT-2 bft bd & ¢4.47 & \$53,653 \\ Cohere pe & ¢4.58 & \$55,000 \\ GPT-3 pe & ¢4.24 & \$50,920 \\ \hline GPT-2 bft & ¢4.97 & \$59,687 \\ GPT-2 bft bft & ¢4.96 & \$59,527 \\ GPT-2 gft bdft & ¢4.99 & \$59,851 \\ GPT-2 gft bdft & ¢4.98 & \$59,786 \\ GPT-2 & ¢4.81 & \$57,668 \\ GPT-2XL gft & ¢4.90 & \$58,802 \\ Cohere ft & ¢4.62 & \$55,391 \\ GPT-3 bft & -¢1.56 & -\$18,691 \\ \end{tabular} \end{table} Table 4: AR’s estimated cost savings per model using equation 2 and the usage rates in Table 2. For models below the line, we use extrapolated usage rates using perplexity from Table 3. The assumptions used to calculate the ENCS are described in section 4.4.3. See Table 1 for descriptions and naming conventions for these models. \begin{table} \begin{tabular}{l|c|c c c} **Model Name** & **PPL** & **\% Ignore** & **\% Edit** & **\%Use** \\ \hline \hline GPT-2 bft & 4.27 & 20.8 & 16.8 & 62.4 \\ GPT-2 bft bd bft & 4.50 & 21.0 & 16.9 & 62.1 \\ GPT-2 gft bd bft & 4.05 & 20.7 & 16.7 & 62.6 \\ GPT-2 gft dd bft & 4.15 & 20.7 & 16.8 & 62.5 \\ GPT-2 & 7.08 & 22.7 & 17.8 & 59.5 \\ GPT-2 XL gft gd bft & 5.31 & 21.5 & 17.2 & 61.3 \\ Cohere ft & 1.93 & 19.3 & 16.0 & 64.7 \\ GPT-3 bft & 4.14 & 20.7 & 16.8 & 62.5 \\ \end{tabular} \end{table} Table 3: Average perplexity (PPL)7 and projected Response Usability (RU) scores. See Table 1 for descriptions and naming conventions for the models. ## 5 Beyond a single case study To decide which of these models will lead to the greatest ROI for a brand, we must consider the break-even point for each model based on the ENCS (which includes agent labor and model inference costs) as well as R&D cost and message volume. This can be visualized with Figure 4, which 
shows that ROI is reached when the amount by which labor cost is offset (green) intersects with the amount that has been spent on the model (red). The number of suggestions needed to break even (\(N_{r}\)) is calculated with equation (6), using the R&D cost (\(C_{R\&D}\)), ENCS, and the cost to update and maintain the model (expressed as an average per message over time as \(C_{m}\)). (6) \(N_{r}=\frac{C_{R\&D}}{ENCS-C_{m}}\) Given that the difference in ENCS per message across the models explored in this paper is not large, low R&D cost is the main consideration to reach the fastest ROI. For a small brand sending 500,000 agent messages per year and saving about $24,000 per year with any of the models, reducing the upfront R&D cost would be critical. On the other hand, a large enterprise brand that will save $950,000 per year over 20 million messages will break even on any R&D cost fairly quickly. As a model with lower inference cost will offset high R&D cost more quickly and lead to more savings over a longer period of time, inference cost is a much more important factor for a brand with high traffic. In Appendix K, we provide a detailed example of the impacts of these costs. It is also worth noting that when choosing between in-house and third-party models, the difference in R&D and maintenance cost may not be as significant as one might expect. While an in-house model requires up-front investment to train and serve, OpenAI and Cohere's LLMs at the time of writing require a fair amount of effort to prompt engineer for the best performance and these prompts should be customized to some degree for different brands and scenarios. From a maintenance perspective, we similarly find that while an in-house model must be refreshed, prompts must also be redesigned as third-party providers update and release new models. Brands might also wish to consider factors that are not accounted for in this framework. Some brands would prefer to use an in-house model so that they can retain control over their data and protect their customer privacy by limiting access of their data to third-party vendors. An in-house model also provides more control over the model's suggestions, as well as control over when the model is updated or deprecated. Especially as technology develops, models become less expensive to train, and the performance of open-source models improves, these factors may carry even more weight. ## 6 Conclusion In this case study, we demonstrated the utility of LLMs for agent assistance products, exploring 3 model adaptation strategies across 11 model configurations. Based on feedback from real customer service agents, we found that bigger is not always better, as the distilled GPT-2 model resulted in greater cost-savings than GPT-3, despite lower quality responses, because, at the time of writing, its inference cost is so much lower. These results empower near-term decision-making for integrating models like these into production. However, with the rapidly shifting NLP landscape, a framework to assess the cost benefits of new technologies is critical to facilitate decisions about integrating them into products. The flexible framework presented in this paper, ENCS, enables NLP practitioners to invest in innovations that lead to tangible business benefits. We found that for this product, the impact of model quality far outweighs inference cost, pointing to the importance of continuing to push the state of the art, while considering practical expense. 
This framework empowers the NLP community to invest in the most cost-effective technology for their specific needs, even as that technology, its capability, and its pricing evolve. ## Ethics Statement To protect customer and agent privacy, the data used to train and evaluate models was fully anonymized by replacing all customer or agent names, addresses, phone numbers, or other personal identifiers with a random name or string. We also compensated agents for annotations in line with their standard rate as agents at AR. Figure 4: Factors impacting when a brand will break even when using an agent assistance model. While the tools described in this paper have the explicit goal of making agents' jobs easier, they - and specifically the lens of a cost savings analysis - have the potential to be used to motivate reductions in workforce, and we acknowledge the impact that this can have on the agents themselves. We also note that these tools can improve the customer experience by reducing wait times, which can lead to fewer frustrated customers when they do interact with agents. ## Limitations In this study, we collected feedback on the usefulness of model responses from customer service agents at AR. These agents were recommended based on their availability and experience with Conversation Assist; however, we did not receive details about the agents such as their level of training or experience, which may have an impact on their preferences using the suggested responses. Furthermore, while agents in our study received a flat rate per judgment with no bonus or penalties to how they judged the response, some businesses have existing agent metrics (e.g. actual handle time, AHT targets, etc.) that could incentivize the agents to behave differently while performing their jobs. These metrics have the potential to exert pressure on agents in real-life situations to accept responses at a higher rate than in this study. The linear models in section 4.4.2 are based on the judgments of 5 agents on 3 LLM model outputs for 287 conversations. While they have shown a statistically significant relationship between usage rates and perplexity, this is a small pilot analysis. Additional data will be necessary to determine how well this generalizes. Our cost savings framework also makes a number of simplifying assumptions about workforce optimization. We've noted some of these assumptions in section 3.1, and they should be considered when leveraging this framework for different types of products. In addition, while the explicit goal of these models is to make agents' jobs easier, we expect from previous work studying vigilance tasks (Warm et al., 2008) that there can be an upper bound to how much cost could be saved with an excellent LLM, as there would be less benefit from the agent acting as a human in the loop as their vigilance wanes. ## Acknowledgements We want to thank the customer service agents at AR who provided judgements on the usability of responses, as well as Anna Folinsky and Daniel Gilliam who annotated the data for Foundation Metrics. We also appreciate engineering assistance that we received from Larry Cheng, Tyson Chihaya, Sid Naik, and Fazlul Shahriar.
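To close the loop on the cost framework, the sketch below works Eqs. (2), (3) and (6) through with the assumptions stated in Section 4.4.3; the usage rates come from Table 2, the inference costs are the cents-per-response figures quoted in the text, and the R&D and maintenance figures are hypothetical placeholders. Because the published usage rates are rounded, the result only approximately reproduces Table 4.

```python
RATE = 10.00 / 3600                                          # agent cost per second [$]
S_USE, S_EDIT, S_IGNORE = 25 * RATE, 20 * RATE, -5 * RATE    # Eq. (3), dollars per response

def encs(p_use, p_edit, p_ignore, cost_dollars):
    """Eq. (2): expected net cost savings per suggested response, in dollars."""
    return p_use * S_USE + p_edit * S_EDIT + p_ignore * S_IGNORE - cost_dollars

# Usage rates from Table 2; inference costs converted from cents to dollars.
gpt2_distilled = encs(0.57, 0.16, 0.28, 0.0011 / 100)
gpt3_prompted = encs(0.69, 0.14, 0.17, 1.09 / 100)
print(f"GPT-2 bft bd: {100 * gpt2_distilled:.2f} cents/message")   # ~4.5 (Table 4: 4.47)
print(f"GPT-3 pe:     {100 * gpt3_prompted:.2f} cents/message")    # ~4.2 (Table 4: 4.24)

# Eq. (6): suggestions needed to recoup a hypothetical R&D outlay and maintenance cost.
rd_cost, maintenance_per_msg = 50_000.0, 0.001
n_breakeven = rd_cost / (gpt2_distilled - maintenance_per_msg)
print(f"break-even after ~{n_breakeven:,.0f} suggested responses")  # ~1.1 million
```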
2304.11419
Elastic Microphase Separation Produces Robust Bicontinuous Materials
Bicontinuous microstructures are essential to the function of diverse natural and synthetic systems. Their synthesis has been based on two approaches: arrested phase separation or self-assembly of block copolymers. The former is attractive for its chemical simplicity, the latter for its thermodynamic robustness. Here, we introduce Elastic MicroPhase Separation (EMPS) as an alternative approach to make bicontinuous microstructures. Conceptually, EMPS balances the molecular-scale forces that drive demixing with large-scale elasticity to encode a thermodynamic length scale. This process features a continuous phase transition, reversible without hysteresis. Practically, we trigger EMPS by simply super-saturating an elastomeric matrix with a liquid. This results in uniform bicontinuous materials with a well-defined microscopic length-scale tuned by the matrix stiffness. The versatility and robustness of EMPS is further demonstrated by fabricating bicontinuous materials with superior mechanical properties and controlled anisotropy and microstructural gradients.
Carla Fernández-Rico, Sanjay Schreiber, Hamza Oudich, Charlotta Lorenz, Alba Sicher, Tianqi Sai, Stefanie Heyden, Pietro Carrara, Laura De Lorenzis, Robert W. Style, Eric R. Dufresne
2023-04-22T14:32:29Z
http://arxiv.org/abs/2304.11419v1
# Elastic Microphase Separation Produces Robust Bicontinuous Materials ###### Abstract Bicontinuous microstructures are essential to the function of diverse natural and synthetic systems. Their synthesis has been based on two approaches: arrested phase separation or self-assembly of block copolymers. The former is attractive for its chemical simplicity, the latter for its thermodynamic robustness. Here, we introduce Elastic MicroPhase Separation (EMPS) as an alternative approach to make bicontinuous microstructures. Conceptually, EMPS balances the molecular-scale forces that drive demixing with large-scale elasticity to encode a thermodynamic length scale. This process features a continuous phase transition, reversible without hysteresis. Practically, we trigger EMPS by simply super-saturating an elastomeric matrix with a liquid. This results in uniform bicontinuous materials with a well-defined microscopic length-scale tuned by the matrix stiffness. The versatility and robustness of EMPS is further demonstrated by fabricating bicontinuous materials with superior mechanical properties and controlled anisotropy and microstructural gradients. Phase separation, elasticity, spinodal decomposition, microstructure, composite materials
This microstructure features strong structural correlations at a single wavelength, as shown by the ring in the two-dimensional Fourier transform of the image (see inset Fig. 1C). Stacks of fluorescence confocal images shown in Fig.1D and E, reveal that the three-dimensional structure of our material is bicontinuous (see full stack in Movie S1). This highly correlated structure is homogeneous across the centimeter-scale sample, as shown by the Fourier spectra of images acquired at different cross-sections (see Fig. 1F). The resulting phase-separated structures resemble transient bicontinuous networks formed during spinodal decomposition [26; 27; 28]. However, in striking contrast to that classical process, our microstructures are stationary for hours without sign of coarsening. As we show in the following section, the essential ingredient leading to this unusual phase separation process is the elasticity of the polymer matrix. ### Matrix elasticity stabilizes a single length scale To highlight the crucial role of matrix elasticity, we compare phase separation with and without crosslinking of PDMS (see Fig. 2A and B). For a fluid matrix with no cross-linking (see Fig. 2A), phase separation follows the classical spinodal decomposition pathway, with the emergence and rapid coarsening of bicontinuous channels, followed by their break-up into discrete droplets [29]. By contrast, phase separation in cross-linked matrices (see Fig. 2B), results in bicontinuous microstructures that emerge at deeper quenches and do not appear to coarsen over the course of experiment. Fourier analysis of the image sequence shown in Fig. 2B reveals that the contrast of the structure smoothly increases during cooling, while the characteristic length scale of the structure remains fixed. This is shown in Fig. 2C, where the Fourier spectrum at each time-point is characterized by a broad peak centered on a single spatial frequency, \(q_{max}\). As temperature decreases, the height of this peak (_i.e._ contrast) increases smoothly (see top panel Fig. 2D), while the characteristic length scale \(\lambda^{*}=2\pi q_{max}^{-1}\) remains constant at approximately 1.6 \(\mu\)m (see bottom panel Fig. 2D). Intriguingly, elastic microphase separation is fully reversible without hysteresis when the system is heated up. This is shown by the yellow data in Fig. 2D, where the contrast and length scale follow the same path for cooling and heating. We also find that \(\lambda^{*}\) is stable over time at a fixed temperature (see Fig. 2E), and that it remains fixed when the quench rate is varied by a factor of 30 (see Fig. 2F). Together, these observations suggest that the contrast and characteristic length scale of the phase-separated microstructure are equilibrium features of the system. This is reminiscent of microphase separation in block copolymers [15; 19]. However, our microstructures have Figure 1: Elastic MicroPhase Separation (EMPS) produces bicontinuous microstructures with a well-defined spacing. (A) Schematic diagram of the process.
First, an elastomer is incubated in a bath of liquid at elevated temperatures. After swelling equilibrium is reached, cooling induces phase separation. (B-E) Results for an 800kPa PDMS elastomer swollen with heptafluorobutyl methacrylate (HFBMA) at T\({}_{swell}\) = 60\({}^{\circ}\)C (57wt% of HFBMA) and cooled to room temperature. (B) Macroscopic images of the change of macroscopic colour before (top) and after (bottom) the phase separation process. (C) Bright-field and (D) confocal microscopy images of microstructure. The inset in C shows the FFT of the shown bright-field image. Yellow domains in (D) depict PDMS-rich domains (Nile red-dyed) and blue domains depict acrylate-rich domains (BDP-dyed). (E) 3D confocal reconstruction of the bicontinuous structure. (F) Azimuthal average of a 2D FFT analysis of optical microscopy images taken at different cross-sections, along the \(z\) axis (see inset). The peak position at \(\sim\)800 nm does not change across the sample volume (1\(\times\)2\(\times\)0.5cm\({}^{3}\)). much bigger than the sizes of their constituent macromolecules, which is typical of kinetically arrested phase separation [30; 31; 32]. Elastic microphase separation, therefore, combines attractive characteristics of both established routes, and provides a bulk route to produce uniform bicontinuous microstructures. ### Matrix stiffness controls the length scale and morphology of the microstructure To elucidate the role of elasticity, we vary the matrix stiffness. Confocal images of the microstructures formed in matrices with Young's moduli, \(E\), ranging from 800 to 10kPa, are shown in Fig. 3A. As \(E\) is reduced from 800 to 180kPa, the characteristic length scale of the bicontinuous morphology increases from 0.8 to 2\(\mu\)m (see Figs. 3A\({}_{1-3}\) and Fig. 3D). For softer matrices, the final morphologies are, in fact, dense packings of discrete droplets, resembling 'compressed emulsions' (see Figs. 3A\({}_{4-6}\)) [33]. In these droplet structures, the length scale increases from about 3 to 9\(\mu\)m, when \(E\) decreases from 80 to 10kPa (see Fig. 3D). In general, we find that over the full range of stiffnesses, \(\lambda^{*}\) scales smoothly as \(1/\sqrt{E}\), as shown by the circles in Fig. 3D (see distributions in Fig. S2). Three-dimensional confocal reconstructions of both morphologies are displayed in Figs. 3B and C (see full stacks in Movie S2 and S3). Independent of the final morphology and stiffness, we find that phase separation initiates with the smooth emergence of a bicontinuous structure at a single wavelength (see Fig. 3E and Fig. S3). While this structure is stable in size and connectivity in stiff samples, it coarsens and breaks in softer samples. This is shown in Fig. 3E, where the evolution of \(\lambda^{*}\) with cooling is shown for three different elasticities. In stiff samples (E=350kPa, Figure 2: EMPS is continuous and produces microstructures with a thermodynamically defined length scale. (A,B) Bright-field optical microscopy images of samples cooled from 60\({}^{\circ}\)C to 23\({}^{\circ}\)C at a cooling rate of dT/dt = -0.5\({}^{\circ}\)C/min. (A) Mixtures of HBFMA liquid and uncrosslinked PDMS and (B) mixtures of liquid and crosslinked PDMS (E=350kPa). The final quench depth is \(\Delta\)T = -37\({}^{\circ}\)C. (C) Azimuthal average of 2D fast Fourier Transform (FFT) of microscopy images shown in (B). As temperature decreases, a peak in the scattering intensity, \(I(q)\), emerges smoothly at a fixed spatial frequency, q\({}_{max}\). 
The inset shows line profiles of microscopy images at different temperatures (see color bar), indicating the increase of contrast at a fixed length scale. (D) Contrast (max\({}_{q}(I)/\)max\({}_{q,T}(I)\)) and characteristic length scale (\(\lambda^{*}=2\pi q^{-1}_{max}\)) evolution of the bicontinuous microstructure during heating and cooling. (E) Time evolution of \(\lambda^{*}\) for a fixed quench depth, \(\Delta\)T = -15\({}^{\circ}\)C. (F) Quench rate evolution of \(\lambda^{*}\) for a fixed \(\Delta\)T = -37\({}^{\circ}\)C. Fig. 3E), \(\lambda^{*}\) is stable throughout the entire phase separation process. In softer samples (E=40kPa, Fig. 3E), \(\lambda^{*}\) is initially stable, as a network is formed, but then increases as the structure coarsens and forms droplets at deeper quenches (see images in Fig. S3). The initial characteristic length scale of these networks is added in Fig. 3D as yellow squares. For soft samples, this initial length scale is independent of \(E\). In uncrosslinked matrices (E=0kPa, Fig. 3E), \(\lambda^{*}\) rapidly increases as soon as the temperature drops, as expected for classical demixing (see Fig.2A). To compare EMPS to the early stages of classical spinodal decomposition [28], we constructed a minimal analytical model considering the coupling between demixing and elasticity. We evaluated the linear stability of a near-critical mixture, following Cahn and Hilliard [26], with an additional linear-elastic term governing stretching of the matrix. In this model, we find the initial wavelength during the early stage of spinodal demixing is given by \(\lambda^{*}=\lambda_{0}/\sqrt{1+E/E^{*}(T)}\). Here, \(\lambda_{0}\) is the corresponding wavelength for an uncrosslinked matrix, and E\({}^{*}\)(T) is a characteristic stiffness depending on the degree of undercooling. For \(E\gg E^{*}\), we expect the fastest-growing wavelength to scale like \(1/\sqrt{E}\), and for \(E\ll E^{*}\), we expect \(\lambda^{*}\) to be independent of stiffness. This minimal model seems to capture the observed scaling of the initial wavelength \(\lambda^{*}\) with \(E\) (see yellow data in Fig. 3D). However, numerical solutions of this model reveal that linear elasticity is not sufficient to arrest the coarsening of the microstructure over time (see Figs. S11 and S13). This suggests that additional factors such as non-linear elasticity and toughness could be playing a role. ### Elasticity changes the thermodynamics of phase separation While results in the previous section suggest that the microstructure forms by spinodal decomposition, a closer look at the phase behavior of EMPS suggests that it is governed by a distinct thermodynamic process. The key features of our argument are shown in Fig.4 A. There, we quantify the structural contrast evolution of a bicontinuous microstructure during a temperature cycle. Cooling from the binodal curve at \(T_{swell}=60^{\circ}C\), we observe no phase separation until \(T_{micro}=21.1^{\circ}C\) (see blue data Fig.4 A, contrast threshold 0.1), which is \(\sim\)39\({}^{\circ}\)C below the swelling temperature. Our observations rule out phase separation at \(T_{micro}\) as being spinodal decomposition near the critical point, where no gap is expected between the binodal and spinodal curves. Spinodal decomposition, however, is still possible away from the critical point. To test for this, we reheat the sample (see yellow data in Fig.4 A), and find that the microstructure fully dissolves at \(T_{diss}=22^{\circ}C\), very close to \(T_{micro}\). 
This remarkable observation rules out classical off-critical spinodal demixing, where phase-separated domains should persist until the original swelling temperature is reached. Crucially, we find that \(T_{micro}\approx T_{diss}\) is a robust feature across the full range of incubation temperatures (see Fig. 4B) and matrix stiffnesses (see Fig. 4B,C and Fig.S5). Further, \(T_{micro}\) is rate independent for sufficiently slow quench rates (see Fig. S6). These results suggest that \(T_{micro}\) is a proper thermodynamic boundary, dividing a region where mixtures are mechanically stabilized by the elastic polymer matrix (see highlighted gray zone in Figure 3: Matrix stiffness tunes microstructural length scale and morphology. (A) Confocal microscopy images of microphase-separated structures for unswollen PDMS Young's moduli of (A\({}_{1}\)) 800kPa, (A\({}_{2}\)) 350kPa, (A\({}_{3}\)) 180kPa, (A\({}_{4}\)) 80kPa, (A\({}_{5}\)) 40kPa and (A\({}_{6}\)) 10kPa, at 23\({}^{\circ}\)C. (B,C) 3D reconstructions of microstructures with (B) E=350kPa (14\(\times\)14\(\times\)6 \(\mu\)m\({}^{3}\)) and (C) E=40kPa (40\(\times\)40 \(\times\)15 \(\mu\)m\({}^{3}\)). (D) Characteristic length scale, \(\lambda^{*}\), measured by FFT as a function of \(E\). Circles denote results at 23\({}^{\circ}\)C. Yellow symbols indicate the channel width of the bicontinuous structures and blue symbols the droplets' diameter. Squares represent \(\lambda^{*}\) of the bicontinuous structures formed before the droplets. The error bars represent the standard error of the mean. (E) Evolution of \(\lambda^{*}\) during cooling for \(E=350\)kPa (yellow), \(E=40\)kPa (blue) and \(E=0\)kPa (red). Fig.4B-E), from a microphase-separated area (see highlighted cyan zone in Fig.4B-E). Like a spinodal, a continuous compositional change is observed when this boundary is crossed. Like a binodal, phase-separated domains fully dissolve when it is crossed in the other direction. Such continuous hysteresis-free transitions are normally seen only near critical points. Matrix stiffness has a strong impact on the onset of microphase separation. The gap between \(T_{swell}\) and \(T_{micro}\) increases with matrix stiffness (compare gray zones in Figs.4B-C). To interpret this, we quantify the degree of supersaturation at the point of microphase separation (see \(\Delta\)c in Fig.4B) as \(\ln(c_{micro}/c_{swell})\) for different elasticities and swelling temperatures. As shown in Fig. 4D, the supersaturation degree collapses the data acquired at different T\({}_{swell}\) and scales roughly linearly with \(E\) of the unswollen matrix. This trend has a simple mechanical interpretation, suggested by [25]. Assuming ideality, the difference in chemical potential between the mixed and marginally microphase separated states is given by \(\Delta\mu=k_{B}T\ln(c_{micro}/c_{swell})\). This leads to an elevated osmotic pressure of the solute in the polymer-depleted phase, \(\Delta\Pi=n_{L}\Delta\mu\), which is needed to maintain the host network in a deformed state. As shown by the red dashed line in Fig. 4D, the onset of microscopic phase separation is consistent with \(\Delta\Pi\approx 10E\). Thus, the competition between elasticity and solubility places a practical upper-bound on the stiffness of samples that can be used, and hence a lower-bound on the length scales accessible by EMPS.
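As a rough numerical companion to this argument, the sketch below converts the criterion \(\Delta\Pi\approx 10E\) into the degree of supersaturation required at a given matrix stiffness. The molar mass (about 268 g/mol) and density (about 1.4 g/cm\({}^{3}\)) assumed for HFBMA are approximate values supplied here for illustration only; they are not reported in the text.

```python
# Rough illustration of the elastic criterion: supersaturation ~ 10 E / (n_L k_B T).
# The HFBMA molar mass and density below are assumed approximate values,
# not taken from the paper.

k_B = 1.380649e-23        # J/K
N_A = 6.02214076e23       # 1/mol
T = 296.0                 # K, near room temperature
M_hfbma = 0.268           # kg/mol (assumed)
rho_hfbma = 1.4e3         # kg/m^3 (assumed)

n_L = rho_hfbma / M_hfbma * N_A   # number density of liquid molecules, 1/m^3
osmotic_scale = n_L * k_B * T     # Pa, sets the scale of the osmotic pressure

for E_kPa in (10, 80, 180, 350, 800):
    supersaturation = 10 * (E_kPa * 1e3) / osmotic_scale
    print(f"E = {E_kPa:4d} kPa -> required supersaturation degree ~ {supersaturation:.3f}")
```

With these assumed numbers the required supersaturation ranges from well under a percent for the softest matrices to several tens of percent for the stiffest, consistent with the qualitative picture that stiffness sets a practical limit on how far EMPS can be pushed.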
### EMPS makes functional materials with diverse microstructures While our bicontinuous materials do not coarsen over time, the elevated osmotic pressure in the oil-rich microphase leads to its slow transport out of the network over several hours (see Fig. 5A) [25]. However, we can suppress this liquid migration by polymerizing the methacrylate-functionalized HFBMA after phase separation (see Fig.5B\({}_{1}\) and Fig.S8). We achieve this by adding photo-initiator to the incubation bath and illuminating the sample with UV-light after phase separation (see SM). The resulting material is permanent (stable for at least 12 months), and thus enables EMPS to be used to fabricate microstructured functional materials. For instance, EMPS followed by polymerization can significantly increase the toughness and Young's modulus of elastomers. This is shown in Figure 5B\({}_{2}\), where tensile tests of polymerized samples reveal a remarkably elevated toughness of roughly 2MJ/m\({}^{3}\), which is 9\(\times\) greater than pure PDMS and 200\(\times\) greater than polymerized HFBMA (pHFBMA). The composite elastomer has a stiffness \(\sim\)20MPa (11\(\times\) greater than pure PDMS) and withstands strains up to \(\sim\)140% (see Fig. 5B\({}_{2}\) and Fig.S9). This enhanced mechanical performance could arise from a filler- [34] or double-network-effect [35]. Alternatively, the brittle pHFBMA could be plasticized by small quantities of PDMS [36]. Finally, the simplicity and robustness of EMPS enables the fabrication of bicontinuous materials with controlled structural gradients and anisotropy. When swelling and phase separation take place in a polymer matrix with a stiffness gradient, the resulting materials can present smooth and continuous changes in both the length scale (see Fig. 5C) and morphology of the microstructure (see Fig. 5D). Materials with structural gradients are attractive for filtration, implants or tissue engineering applications, where gradual changes in the length scale are crucial for their functionality [37; 38]. Finally, anisotropic bicontinuous structures can be also systematically produced by stretching the material, either before (see Fig. S10) or after phase separation (see Fig. 5E). Such anisotropic structures have recently been proposed to Figure 4: Elastic microphase separation is a continuous hysteresis-free transition. (A) Contrast \((\max_{q}(I)/\max_{q,T}(I))\) evolution of an 800kPa sample from 60\({}^{\circ}\)C (T\({}_{swell}\)) to 15\({}^{\circ}\)C at a quench rate \(dT/dt=-0.5^{\circ}\)C/min. \(T_{micro}\) is the temperature at which microphase separation is observed and \(T_{diss}\) the temperature at which the microstructure disappears. A contrast threshold of 0.1 is used to determine \(T_{micro}\) and \(T_{diss}\). (B) Phase diagram of 800kPa samples and (C) 180kPa samples. The boundary at \(T_{swell}\) is shown by the empty circles and \(T_{micro}\) with the x’s. \(\Delta\)c = c\({}_{swell}\) - c\({}_{micro}\) The gray area shows the mechanically-stabilized part of the phase diagram (mixture \({}^{mec}\)) and the blue area where microphase separation is observed (2 microphases). The mixture \({}^{mec}\) region size increases with \(E\). (D) Supersaturation degree of the liquid at the point of microphase separation. Supersaturation increases approximately as \(10E/n_{L}k_{B}T\). Error bars represent the standard deviation of the mean. have promising orientation-dependent mass transport, energy absorption, and mechanical properties [37; 39; 40]. 
### Conclusions Elastic microphase separation is a powerful approach for fabricating homogeneous bicontinuous materials in bulk. The final length scale, morphology, and thermodynamics of this system are found to be intimately related to the mechanics of the elastic matrix. In fact, we have uncovered the emergence of an elastically-controlled thermodynamic boundary delineating a hysteresis-free continuous phase transition over a wide range of compositions. Finally, we have shown the potential of this approach for making functional materials, by fabricating tough elastomers, and bicontinuous materials with controlled anisotropy and microstructural gradients. Our results have broad implications for soft matter and materials science. For soft matter, EMPS challenges classical models of phase separation, where continuous hysteresis-free transitions are only accessible near critical points. As such, EMPS requires a new theoretical framework, incorporating elasticity to address the stability and length scale of the system. We anticipate that a deeper understanding of the interplay of mechanics and thermodynamics will shed crucial light on emerging problems in biology [41; 42; 43] and mechanical engineering [37; 12]. From a materials perspective, EMPS breaks the current dichotomy between block copolymer self-assembly and arrested phase separation to produce bicontinuous microstructures. EMPS combines the thermodynamic robustness of block-copolymer assembly with the chemical simplicity of arrested phase separation, enabling precision microstructures with inexpensive components. While our preliminary results on anisotropic and graded bicontinuous materials already suggest applications in mechanical materials [37], EMPS promises a host of opportunities for developing other functional materials. Further developments of this system could include the reduction of the microstructural length scale to produce structural color [3], the optimization of the elastomers' toughness for wearable devices applications [44], or the selective removal of one of the phases to exploit the exciting filtration, energy storage and catalytic properties of open bicontinuous materials [45; 46; 5]. Figure 5: EMPS produces versatile microstructures and subsequent polymerization endows durability. (A) Bright-field microscopy images of the structural evolution a 350kPa sample over time after microphase separation. The sample is kept in contact with the liquid bath. Without stabilization, contrast slowly disappears near the surface. (\(B_{1}\)) Polymerization after phase separation stabilizes the microstructure (Fig. S8) and enhances mechanical toughness. (\(B_{2}\)) Stress-strain curve for pure PDMS (800kPa), pure polymer and polymerized bicontinuous microstructure. The inset shows a photograph of a tensile test for the polymerized bicontinuous sample. (C,D) Bright-field images of structural gradients created by inducing EMPS in samples with a stiffness gradient. (E) Bright-field images of anisotropic bicontinuous microstructures. Anisotropy is systematically introduced by uniaxially stretching the sample as shown in E\({}_{1}\). The top-overlays in (E\({}_{2-4}\)) show the bright-field images coloured according to intensity gradient. The insets show the FFT of the respective images. ### Acknowledgements We acknowledge funding from the ETH Zurich Fellowship and the Swiss National Science Foundation NCCR for Bioinspired Materials. We thank Nan Xue, Kathryn Rosowski, David Zwicker and Dennis Kochmann for useful discussions. 
## Methods ### Fabrication of HFBM-PDMS bicontinuous microstructures First, we prepare pure PDMS matrices by mixing PDMS chains (DMS-V31, Gelest) with a crosslinker (HMS-301, Gelest) and a platinum-based catalyst (SIP6831.2, Gelest) (see full recipe in [47]). The stiffness of the matrix depends on the mass ratio between the chains and crosslinker (from 3:1 to 9:1), while keeping the catalyst concentration constant (0.0019% in volume). Once the different parts are thoroughly mixed together, we pour the mixture into a petri dish, degas it under vacuum, and finally cure it at 60\({}^{\circ}\)C for approximately 6 days. After curing, the resulting PDMS elastomer is carefully removed from the petri dish and cut into rectangular pieces (\(\sim\)1cm\(\times\)2cm\(\times\)0.5cm). Next, PDMS pieces are transferred into a bath of heptafluorobutyl methacrylate (HFBMA, Apollo scientific) (\(\sim\)1mL HFBMA/0.5g of PDMS), in a 25mL glass bottle. This bath is then typically incubated at \(T_{swell}=60^{\circ}\)C in a pre-heated oven for 2.5 days. After the elastomer is saturated with the liquid, the glass bottle is brought to room temperature, at which point phase separation occurs spontaneously. The resulting phase-separated bicontinuous samples are then prepared for characterization tests. Note that experiments with un-crosslinked PDMS are performed by preparing mixtures of PDMS chains and HFBMA liquid in glass bottles, and following the same temperature steps described above. ### Structural characterization Bright-field optical and confocal fluorescence microscopy images of phase-separated microstructures are obtained using a Nikon-Ti Eclipse inverted optical microscope, equipped with a Confocal Spinning disk Scanner Unit. We typically use 60\(\times\) and 100\(\times\) oil objectives with NA=1.2 and 1.45, respectively. Optimal imaging is achieved when preparing thin PDMS samples (\(\sim\)200-500\(\mu\)m thick), by either curing the PDMS in between two coverslips (1.5, Menzel Glaser), or by using a razor blade to cut thin slides from the bulk samples. Note that the incubation period for the samples between coverslips is significantly longer (\(\sim\)6 days). For confocal microscopy experiments, free BDP is added to HFBMA and Nile-Red is added to PDMS prior to incubation. To characterize the structural evolution of the samples with temperature we use thin PDMS samples (\(\sim\)200-500\(\mu\)m thick) cured on a 35-mm diameter glass-bottomed dish (MatTek). We add 1mL of HFBMA, and seal the petri dish with Teflon tape. The sample is next incubated in a 60\({}^{\circ}\)C oven for 2.5 days. The sample is then transferred into a pre-heated heating stage at 60\({}^{\circ}\)C (InsTec instruments), which is coupled to an optical microscope. For these experiments we use a 60\(\times\) air objective, to avoid temperature variations in the sample. ### Stiffness-gradient and anisotropic samples To prepare PDMS samples with stiffness gradients, we first pour a stiff PDMS mixture (_e.g._ 3:1 chains:crosslinker) on the right side of a coverslip, and let it cure at 40\({}^{\circ}\)C for 2 hours. We limit it to the right side by only pouring a small amount of mixture, and using a very small tilt during the curing. Next, we cure a softer PDMS mixture (_e.g._ 5:1 chains:crosslinker) on the left side of the coverslip, which slowly gets in contact with the pre-cured stiff side. The gradient is left to cure for two days at 60\({}^{\circ}\)C.
Anisotropic bicontinuous structures can be prepared by stretching a PDMS sample either before the incubation and phase separation process or after the system has phase separated. In both cases, strains are externally imposed by clamping 8cm\(\times\)2cm\(\times\)0.5cm pieces of the elastomer, in a home-made device for stretching the sample that can be coupled to an optical microscope (see Fig. S10). The stretch (\(\epsilon\)) is calculated as \(\Delta\)l/l\({}_{0}\), where l\({}_{0}\) is the original unstretched length. Stretch can be applied before incubation or after phase separation. ### Polymerization of bicontinuous microstructures and mechanical tests Polymerization of the phase separated samples is performed in degassed glass vials containing a piece of PDMS (\(\sim\)0.4g), 500\(\mu\)L of HFBMA and 2wt% of photoinitiator (2-Hydroxy-2-methylpropiophenone, Apollo scientific). Once the incubation has proceeded at 60\({}^{\circ}\)C for 2.5 days, the samples are cooled down to room temperature to induce EMPS. Next, we remove the excess monomer in the container with a degassed syringe and expose the sample to UV-light for 2 hours. We use a 365nm UV-lamp (12 W, with the sample 0.5cm from the lamp). To measure the mechanical properties of the polymerized materials, we prepare 5\(\times\)2\(\times\)0.5cm\({}^{3}\) dog-bone PDMS samples, and proceed with the swelling, phase separation and polymerization procedure described above. Once dog-bones are polymerized, we perform uniaxial tests using a tensile testing machine (Stable Micro Systems), where we record the engineering strain-stress curves until failure. All mechanical tests were performed in air, at room temperature, and at elongation speeds of 0.05mm per second. The sample toughness (J/m\({}^{3}\)) is calculated as the area under the strain-stress curves. ### Thermo-mechanical model based on Cahn-Hilliard theory Assuming spinodal decomposition near a critical point, we introduce a minimal model with two basic ingredients, a Landau-Ginzburg free energy [48] and an energy term related to the mechanical properties of the matrix. We define \(\phi(x)\) as an order parameter describing how far we are from the critical point (\(\phi=0\)). The matrix has a stiffness \(E_{sw}(\phi)=E_{0}(1-m\phi)\), where \(E_{0}\) is the Young's modulus at the critical point, and \(m\) captures the compositional variation of the stiffness. For simplicity, we assume \(m\) to be independent of \(\phi\) and \(E_{0}\). Then, the free energy density including elasticity is represented as: \[f(\phi,\epsilon)=-\frac{1}{2}\alpha(T)\phi^{2}+\beta\phi^{4}+\frac{1}{2} \kappa\phi^{\prime 2}+\frac{1}{2}E_{sw}(\phi)\epsilon^{2}-E_{0}\epsilon_{0}\epsilon. \tag{1}\] The first two terms on the right side represent the free energy of mixing of the matrix and the liquid, while the third represents the interfacial energy between the two materials. \(\alpha(T)\), \(\beta\) and \(\kappa\) are the usual coefficients for a Landau-Ginzburg type free energy, and we take these to be independent of the network elasticity. The last two terms represent the mechanical strain energy of the mixture, which includes the stored elastic strain energy and a contribution related to the swelling pressure that makes the matrix equilibrate at a non-zero strain, \(\epsilon_{0}\). We assume the phase separation process maintains local mechanical equilibrium. This equilibrium condition is given by \(\partial f/\partial\epsilon=0\), so \(E_{sw}(\phi)\epsilon=E_{0}\epsilon_{0}\).
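As a brief intermediate step that is not spelled out in the text: substituting this mechanical-equilibrium strain, \(\epsilon=E_{0}\epsilon_{0}/E_{sw}(\phi)\), back into the last two terms of (1) gives an effective elastic free energy that depends on composition alone, \[f_{\rm el}(\phi)=\frac{1}{2}E_{sw}(\phi)\left(\frac{E_{0}\epsilon_{0}}{E_{sw}(\phi)}\right)^{2}-E_{0}\epsilon_{0}\,\frac{E_{0}\epsilon_{0}}{E_{sw}(\phi)}=-\frac{E_{0}^{2}\epsilon_{0}^{2}}{2E_{sw}(\phi)},\qquad\frac{\partial f_{\rm el}}{\partial\phi}=\frac{E_{0}^{2}\epsilon_{0}^{2}E_{sw}^{\prime}(\phi)}{2E_{sw}^{2}(\phi)},\] which is exactly the elastic contribution that appears in the exchange chemical potential (2) below.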
Thermodynamic equilibrium is much slower to achieve, as it involves slow transport of fluid through the sample. This transport occurs along gradients in the exchange chemical potential [49]: \[\mu =\left.\frac{\partial f}{\partial\phi}\right|_{\epsilon,\phi^{ \prime}}-\frac{d}{dx}\left(\left.\frac{\partial f}{\partial\phi^{\prime}} \right|_{\epsilon,\phi}\right) \tag{2}\] \[=-\alpha(T)\phi+4\beta\phi^{3}+\frac{E_{0}^{2}\epsilon_{0}^{2}E_ {sw}^{\prime}(\phi)}{2E_{sw}^{2}(\phi)}-\kappa\phi^{\prime\prime}.\] The flux of monomer is given by \(J=-Md\mu/dx\), where \(M\) is the mobility, which we assume to be constant. Conservation of mass is described by the transport equation: \[\frac{\partial\phi}{\partial t}=-\frac{dJ}{dx}=M\frac{d^{2}\mu}{dx^{2}}. \tag{3}\] To find the wavelength that emerges at the onset of phase separation, we perform a linear-stability analysis of this equation [26]. We let \(\phi=0+\delta\phi\), etc. Then, at leading order, we find the linearized transport equation: \[\frac{1}{M}\frac{\partial\delta\phi}{\partial t} =\delta\phi^{\prime\prime}\left[-\alpha(T)-\epsilon_{0}^{2}E_{0} m^{2}\right]-\kappa\delta\phi^{\prime\prime\prime\prime} \tag{4}\] \[\equiv h_{0}\delta\phi^{\prime\prime}-\kappa\delta\phi^{\prime \prime\prime\prime}.\] \(h_{0}\) is defined as the term in the square brackets for convenience. We seek solutions of the form \(\delta\phi=Ae^{-ikx+\omega t}\), where \(\omega\) is the growth rate of a mode with wavelength \(2\pi/k\). Inserting this into the transport equation, we find \[\frac{\omega}{M}=-k^{2}h_{0}-k^{4}\kappa. \tag{5}\] When \(\omega\) is positive for some range of wavelengths, the system is unstable, and spinodal decomposition will occur. This occurs when \(h_{0}<0\) (_n.b._\(\kappa\) is always positive). Under these conditions, we see that \(\omega\) will take a maximum value when \(d\omega/dk=0\). The corresponding value of \(k\) gives us the wavelength which will dominate the early stages of spinodal decomposition: \[\lambda^{*}=2\pi\sqrt{\frac{2\kappa}{-h_{0}}}=2\pi\sqrt{2\kappa/\left[\alpha( T)+\epsilon_{0}^{2}E_{0}m^{2}\right]}. \tag{6}\] Assuming \(m\) and \(\epsilon_{0}\) to be independent of experimental conditions, and \(E_{0}\) to be proportional to unswollen Young's modulus of the matrix, \(E\), we can write the expression above as \[\lambda^{*}\approx 2\pi\sqrt{\frac{2\kappa}{\alpha(T)+aE}}=\frac{\lambda_{0}}{ \sqrt{1+E/E^{*}(T)}}, \tag{7}\] where \(a\) is a constant, \(\lambda_{0}=2\pi\sqrt{2\kappa/\alpha(T)}\), and \(E^{*}(T)=\alpha(T)/a\).
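A quick numerical sanity check of the dispersion relation (5) and the fastest-growing wavelength (6) is sketched below; all parameter values are arbitrary illustrative choices, not values fitted to the experiments.

```python
# Numerical check of the dispersion relation (5) and equation (6).
# All parameter values are arbitrary illustrative choices.

import numpy as np

M, alpha, kappa = 1.0, 1.0, 0.01   # mobility, quench-depth coefficient, gradient coefficient
eps0, E0, m = 0.5, 2.0, 1.0        # swelling strain, modulus scale, stiffness-composition coupling

h0 = -alpha - eps0**2 * E0 * m**2  # square-bracket term in (4); instability requires h0 < 0
k = np.linspace(1e-3, 20.0, 20_000)
omega = M * (-(k**2) * h0 - (k**4) * kappa)

lam_numeric = 2 * np.pi / k[np.argmax(omega)]
lam_analytic = 2 * np.pi * np.sqrt(2 * kappa / (-h0))       # equation (6)
lam_uncrosslinked = 2 * np.pi * np.sqrt(2 * kappa / alpha)  # lambda_0, the E0 = 0 limit

print(f"lambda* numeric   = {lam_numeric:.4f}")
print(f"lambda* analytic  = {lam_analytic:.4f}")
print(f"lambda_0 (E0 = 0) = {lam_uncrosslinked:.4f}")
```

The numerically located maximum of \(\omega(k)\) coincides with (6), and switching off the elastic term (\(E_{0}=0\)) recovers the larger, purely Cahn-Hilliard wavelength \(\lambda_{0}\), mirroring the decrease of \(\lambda^{*}\) with stiffness seen in Fig. 3D.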
2307.10257
Game theory analysis when playing the wrong game
In classical game theory, optimal strategies are determined for games with complete information; this requires knowledge of the opponent's goals. We analyze games when a player is mistaken about their opponent's goals. For definiteness, we study the (common) bimatrix formulation where both players' payoffs are matrices. While the payoff matrix weights are arbitrary, we focus on strict ordinal payoff matrices, which can be enumerated. In this case, a reasonable error would be for one player to switch two ordinal values in their opponent's payoff matrix. The mathematical formulation of this problem is stated, and all 78 strict ordinal 2-by-2 bimatrix games are investigated. This type of incomplete information game has not -- to our knowledge -- been studied before.
Dan Zwillinger, Paul San Clemente
2023-07-17T13:37:30Z
http://arxiv.org/abs/2307.10257v1
# Game theory analysis when playing the wrong game ###### Abstract In classical game theory, optimal strategies are determined for games with complete information; this requires knowledge of the opponent's goals. We analyze games when a player is mistaken about their opponent's goals. For definiteness, we study the (common) bimatrix formulation where both players' payoffs are matrices. While the payoff matrix weights are arbitrary, we focus on strict ordinal payoff matrices, which can be enumerated. In this case, a reasonable error would be for one player to switch two ordinal values in their opponent's payoff matrix. The mathematical formulation of this problem is stated, and all 78 strict ordinal 2-by-2 bimatrix games are investigated. This type of incomplete information game has not - to our knowledge - been studied before. game theory, optimal strategy, wrong game ## I Introduction When playing a game it is important to know both your goals and your opponent's goals. If there is confusion about either of these, then a correct analysis could result in the wrong strategy. Brams [1] gives an example from the 1979-1981 Iranian hostage situation. He analyzed the "game" that US President Carter was "playing" and indicated how he (might have) arrived at his strategy. Then Brams analyzed the different "game" that Khomeini was "playing." Specifically, Brams modeled the Iranian hostage "game" using the payoff matrices \[A=\begin{bmatrix}4&2\\ 3&1\end{bmatrix}\qquad B_{\text{Carter}}=\begin{bmatrix}3&4\\ 2&1\end{bmatrix}\qquad B_{\text{Khomeini}}=\begin{bmatrix}2&4\\ 1&3\end{bmatrix}\] where \(A\) is the USA's payoff matrix, \(B_{\text{Carter}}\) is what Carter thought Khomeini's payoff matrix was, and \(B_{\text{Khomeini}}\) is what Khomeini's actual payoff matrix was. Since the games were different, President Carter's strategy (derived from analysis of \(A\) and \(B_{\text{Carter}}\)) did not resolve the game as he thought it would. In game theory, information can be "complete", which means that all players know everything about the game (for example, Go, chess, and checkers), or it can be incomplete (for example, poker). There are many types of incomplete information games. This paper addresses one type of incomplete information game - the "cost" of playing a wrong game that is close to the correct game. We believe this analysis is new. We derive four problem statements representing different interpretations of "playing the wrong game". These cases represent different understandings of what the other player thinks you are thinking. We then systematically evaluate these cases for a class of interesting payoff matrices. ### _Assumptions and Terminology_ This paper considers two-player bimatrix games. That is, player 1 has \(n\) different strategies available for use, uses strategy \(\mathbf{x}\in\Delta\), has payoff matrix \(A\) of size \(n\times m\), and receives payoff \(P_{1}\). Here \(\Delta\) is the simplex representing all possible (mixed and pure) strategies\({}^{1}\): \(\Delta=\{\mathbf{z}\mid 0\leq z_{i}\leq 1,\ \|\mathbf{z}\|_{1}=1\}\). Similarly, player 2 has the parameters \(m\), \(\mathbf{y}\in\Delta\), \(B\), and \(P_{2}\). The player payoffs are \(P_{1}=\mathbf{x}^{\mathsf{T}}A\mathbf{y}\) and \(P_{2}=\mathbf{x}^{\mathsf{T}}B\mathbf{y}\). Footnote 1: As usual, \(\big\|[c_{1}\quad\ldots\quad c_{k}]\big\|_{1}=\sum_{i=1}^{k}|c_{i}|\). We assume that the values in \(A\) and \(B\) are non-negative. This means that the problem is _not_ zero sum; that is, \(A\) is not equal to \(-B\).
The simultaneous optimal strategies \(\{\mathbf{x}^{*},\mathbf{y}^{*}\}\) are determined by each player maximizing their payoff \[\begin{split}\mathbf{x}^{*}&=\arg\max_{\mathbf{x}\in\Delta}\mathbf{x}^{\mathsf{T}}A\mathbf{y}^{*}\\ \mathbf{y}^{*}&=\arg\max_{\mathbf{y}\in\Delta}\mathbf{x}^{*\mathsf{T}}B\mathbf{y}\end{split} \tag{1}\] where a superscript "T" represents vector transpose. The solution of the general bimatrix game in (1) can be found using the algorithm by Lemke and Howson [1]. ## II Problem Statement This paper makes the following assumptions about the game being played: 1. Player 1 uses payoff matrix \(A\); both players know this. 2. Player 1 believes that player 2 is using payoff matrix \(B\). 3. Matrix \(B\) belongs to a family \(\mathcal{F}(B)\) of related matrices. 4. Player 2 is actually using payoff matrix \(R\), which is in the family \(\mathcal{F}(B)\). 5. The matrix \(R\) may, or may not, be matrix \(B\). 6. Both players are playing optimally, based on the information available to them. That is, player 1 has incomplete information; player 2's actual payoff matrix is \(R\), which player 1 does not know. To precisely define the problem, we need to specify exactly what the players believe to be true. For player 1 we have two assumptions: * "E_": player 1 believes that player 2 is using the payoff matrix \(B\). * "F_": player 1 believes that player 2 is using a payoff matrix in \(\mathcal{F}(B)\). where we use "E" to mean "exact" and "F" to mean "family". For player 2 we also have two assumptions: * "_E": player 2 believes that "player 1 believes that player 2 is using the payoff matrix \(B\)." * "_F": player 2 believes that "player 1 believes that player 2 is using a payoff matrix in \(\mathcal{F}(B)\)." Combining these assumptions, there are four cases to analyze \(\{EE,EF,FE,FF\}\). Each case has different solutions, with the payoffs to player 1 being \(\left\{P_{1}^{EE},P_{1}^{EF},P_{1}^{FE},P_{1}^{FF}\right\}\). A last case that could be studied is the "symmetric case", when each of the two players has equal uncertainty about the other player's payoff matrix. This game is described in section III-E. Note that if there were complete information, and player 1 knew that player 2's payoff matrix was \(R\), then the optimal strategies \(\{\mathbf{x}_{R},\mathbf{y}_{R}\}\) would be found from (1) \[\begin{split}\mathbf{x}_{R}&=\arg\max_{\mathbf{x}\in\Delta}\mathbf{x}^{\text{T}}A\mathbf{y}_{R}\\ \mathbf{y}_{R}&=\arg\max_{\mathbf{y}\in\Delta}\mathbf{x}_{R}^{\text{T}}R\mathbf{y}\end{split} \tag{2}\] Hence, player 1's payoff would be \[P_{1}^{\text{complete}}=\mathbf{x}_{R}^{\text{T}}A\mathbf{y}_{R} \tag{3}\] The purpose of this paper is to understand the difference between \(P_{1}^{\text{complete}}\) and \(\left\{P_{1}^{EE},P_{1}^{EF},P_{1}^{FE},P_{1}^{FF}\right\}\). ### _Computational Specification_ In this paper we evaluate solutions for all strict ordinal 2-by-2 bimatrix games. An _ordinal_ payoff matrix contains relative integer preferences. That is, a strategy that returns a "\(k\)" is preferred to a strategy that returns a "\(j\)" if \(k>j\). A _strict ordinal_ payoff matrix of size \(n\)-by-\(m\) contains the entries \(1,2,\ldots,mn\) and each value is used exactly once; there are no ties among the preferences. Hence, all our payoff matrices contain each of the values \(\{1,2,3,4\}\) exactly once. The Carter and Khomeini payoff matrices shown in the introduction are strict ordinal 2-by-2 payoff matrices. There are 78 unique strict ordinal 2-by-2 bimatrix games; see Figure 1.
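Here, two games count as the same if one can be obtained from the other by re-ordering either player's strategies or by switching the players (as explained next). The count of 78 can be reproduced by brute force; the following is a minimal sketch, with all function and variable names our own.

```python
# Brute-force check that there are 78 strict ordinal 2-by-2 bimatrix games,
# counting games as identical when related by re-ordering strategies or
# switching the players.

from itertools import permutations
import numpy as np

def canonical(A, B):
    """Lexicographically smallest representative of the game (A, B)."""
    reps = []
    for X, Y in ((A, B), (B.T, A.T)):        # optionally switch the players
        for rows in ((0, 1), (1, 0)):        # re-order player 1's strategies
            for cols in ((0, 1), (1, 0)):    # re-order player 2's strategies
                Xp, Yp = X[np.ix_(rows, cols)], Y[np.ix_(rows, cols)]
                reps.append((tuple(Xp.ravel()), tuple(Yp.ravel())))
    return min(reps)

mats = [np.array(p).reshape(2, 2) for p in permutations((1, 2, 3, 4))]
games = {canonical(A, B) for A in mats for B in mats}
print(len(games))   # prints 78
```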
Uniqueness means that 2 matrix pairs cannot be made the same by re-ordering strategies or switching players. This value is derived in Rapoport and Guyer [1]. For a payoff matrix \(B\), we define the family \(\mathcal{F}(B)\) to contain 4 matrices: \(B\) and the 3 matrices derivable from \(B\) by switching a pair of adjacent preferences. (These are called "swaps" by [1].) That is, given \(B\) the matrix \(B_{12}\) is obtained by switching the values "1" and "2", the matrix \(B_{23}\) is obtained by switching the values "2" and "3", and the matrix \(B_{34}\) is obtained by switching the values "3" and "4". The motivation for this type of incomplete information is that we may generally understand someone else's preferences, but we may make a single mistake. For example, suppose you articulated your best friend's top 4 dessert choices. Are you confident that you have not switched their 2nd and 3rd preferences? Here is a numerical example: the payoff matrix \(B=\begin{bmatrix}1&3\\ 4&2\end{bmatrix}\) is in the same family as \[B_{12}=\begin{bmatrix}\mathbf{2}&3\\ 4&\mathbf{1}\end{bmatrix}\quad B_{23}=\begin{bmatrix}1&\mathbf{2}\\ 4&\mathbf{3}\end{bmatrix}\quad B_{34}=\begin{bmatrix}1&\mathbf{4}\\ \mathbf{3}&2\end{bmatrix}\] ## III Problem Statement - Mathematically This section contains mathematical descriptions of the different games of interest. ### _Game EE_ In this case * player 1 believes that player 2 is using the payoff matrix \(B\). * player 2 believes that "player 1 believes that player 2 is using the payoff matrix \(B\)." From assumption "E_" player 1 solves the system (from (1)) for the strategies \(\{\mathbf{x}_{EE},\mathbf{y}_{*}\}\): \[\begin{split}\mathbf{x}_{EE}&=\arg\max_{\mathbf{x}\in\Delta}\mathbf{x}^{\text{T}}A\mathbf{y}_{*}\\ \mathbf{y}_{*}&=\arg\max_{\mathbf{y}\in\Delta}\mathbf{x}_{EE}^{\text{T}}B\mathbf{y}\end{split} \tag{4}\] From assumption "_E" player 2 knows that player 1 will solve (4), and is using the strategy \(\mathbf{x}_{EE}\). Fig. 1: All 78 strict ordinal 2-by-2 bimatrix games; the first 4 elements are \(A\) (column-wise), the next 4 elements are \(B\) (column-wise). However, since player 2 is using payoff matrix \(R\), player 2 determines their strategy from \[\mathbf{y}_{EE}=\arg\max_{\mathbf{y}\in\Delta}\ \mathbf{x}_{EE}^{\mathsf{T}}R\mathbf{y} \tag{5}\] This can be readily solved by determining the indices \(I\) of the maximum values of the vector \(\mathbf{x}_{EE}^{\mathsf{T}}R\); all other indices in \(\mathbf{y}_{EE}\) have the numerical value zero (that is, \(y_{i}=0\) for \(i\not\in I\)). Any values can be used for \(y_{i}\) when \(i\in I\), consistent with \(\mathbf{y}\in\Delta\); the payoff to player 2 will be the same. ### _Game FE_ In this case * player 1 believes that player 2 is using a payoff matrix in \(\mathcal{F}(B)\). * player 2 believes that "player 1 believes that player 2 is using the payoff matrix \(B\)." That is, player 2 is confused about what player 1 knows. By "_E" player 2 thinks that player 1 will perform the same analysis that player 1 performed in (4). Then as before, player 2 calculates the quantity in (5), re-written here in new variables and explicitly showing the dependence on \(R\): \[\mathbf{y}_{FE}(R)=\arg\max_{\mathbf{y}\in\Delta}\ \mathbf{x}_{EE}^{\mathsf{T}}R\mathbf{y} \tag{6}\] However, by "F_", player 1 knows not to use \(\mathbf{x}_{EE}\) but needs to incorporate \(\mathcal{F}\) thinking.
That is, player 1 needs to solve \[\mathbf{x}_{FE}=\arg\max_{\mathbf{x}}\left(\min_{R\in\mathcal{F}(B)}\mathbf{x}^{\mathsf{T}}A\mathbf{y}_{FE}(R)\right) \tag{7}\] to ensure a maximal return, regardless of which \(R\in\mathcal{F}(B)\) player 2 chooses. This can be solved as follows. First, define \(\mathbf{a}_{j}=A\mathbf{y}_{FE}(R_{j})\) for each \(R_{j}\in\mathcal{F}(B)\). Then2 we note that (7) can be written as Footnote 2: The authors thank Peter Kingston for this observation. \[\begin{split}\mathbf{x}_{FE}&=\arg\max_{\mathbf{x}}\left(\min_{R_{j}\in\mathcal{F}}\mathbf{x}^{\mathsf{T}}\mathbf{a}_{j}\right)\\ &=\arg\max_{\mathbf{x}}\left(\max_{z}\left\{z\mid z\leq\mathbf{x}^{\mathsf{T}}\mathbf{a}_{j}\ \forall j\right\}\right)\end{split} \tag{8}\] This, in turn, can be written as a linear programming problem for the values of \(\mathbf{x}\) and \(z\): \[\begin{split}\max_{\left\{\mathbf{x},z\right\}}& z\\ & z\leq\mathbf{a}_{j}^{\mathsf{T}}\mathbf{x}\qquad\text{for all }j\\ \left\|\mathbf{x}\right\|_{1}&=1\\ & 0\leq x_{i}\leq 1\\ & 0\leq z\end{split} \tag{9}\] The \(\mathbf{x}\) part of the solution of (9) is now \(\mathbf{x}_{FE}\). ### _Game FF_ In this case * player 1 believes that player 2 is using a payoff matrix in \(\mathcal{F}(B)\). * player 2 believes that "player 1 believes that player 2 is using a payoff matrix in \(\mathcal{F}(B)\)." Here, player 1 does not know player 2's payoff matrix, but knows the family (\(\mathcal{F}\)) that contains it. Hence, player 1 needs to solve the following to find \(\mathbf{x}_{FF}\): \[\begin{split}\mathbf{x}_{FF}&=\arg\max_{\mathbf{x}}\left(\min_{R\in\mathcal{F}(B)}\mathbf{x}^{\mathsf{T}}A\mathbf{y}_{*}(R)\right)\\ \mathbf{y}_{*}(R)&=\arg\max_{\mathbf{y}}\ \mathbf{x}_{FF}^{\mathsf{T}}R\mathbf{y}\end{split} \tag{10}\] After player 1 obtains the optimal \(\mathbf{x}_{FF}\), player 2 will solve the same problem as in (6), rewritten in FF variables: \[\mathbf{y}_{FF}=\arg\max_{\mathbf{y}\in\Delta}\ \mathbf{x}_{FF}^{\mathsf{T}}R\mathbf{y} \tag{11}\] It is possible to determine the optimal strategy (\(\mathbf{x}_{FF}\)) for player 1, as defined by (10), using advanced game theory techniques. For example, we could represent the problem in sequence form and then solve a complementarity problem, see [Nisan, Theorem 3.14]. Rather than elaborate on that solution technique, let's refer to a solution of (10) as \(\mathbf{x}_{FF}=M(A,\mathcal{F})\). In section III-F, we show how to determine \(M\) for the payoff matrices that are of interest to us. ### _Game EF_ In this case * player 1 believes that player 2 is using exactly the payoff matrix \(B\). * player 2 believes that "player 1 believes that player 2 is using a payoff matrix in \(\mathcal{F}(B)\)." From player 1's point of view, Game EF is no different from Game EE. Hence, the solution (\(\mathbf{x}_{EF}\)) is the same as the solution in (4); \(\mathbf{x}_{EF}=\mathbf{x}_{EE}\). Writing those equations in EF variables: \[\begin{split}\mathbf{x}_{EF}&=\arg\max_{\mathbf{x}\in\Delta}\mathbf{x}^{\mathsf{T}}A\mathbf{y}_{*}\\ \mathbf{y}_{*}&=\arg\max_{\mathbf{y}\in\Delta}\mathbf{x}_{EF}^{\mathsf{T}}B\mathbf{y}\end{split} \tag{12}\] From player 2's point of view, Game EF is no different from Game FF. Hence, the solution (\(\mathbf{y}_{EF}\)) is the same as the solution in (10) and (11).
Writing those equations in EF variables: \[\begin{split}\mathbf{x}_{EF}&=M(A,\mathcal{F})\\ \mathbf{y}_{EF}&=\arg\max_{\mathbf{y}\in\Delta}\ \mathbf{x}_{EF}^{\mathsf{T}}R\mathbf{y}\end{split} \tag{13}\] ### _The symmetric game_ In the symmetric game each player has the same knowledge; each knows that the other player is using a payoff matrix within a known family of payoffs. Specifically, player 1 knows 1. player 1 uses a known payoff matrix \(R_{A}\in\mathcal{F}(A)\) 2. player 2 uses an unknown payoff matrix in \(\mathcal{F}(B)\) while player 2 knows (symmetrically) 3. player 2 uses a known payoff matrix \(R_{B}\in\mathcal{F}(B)\) 4. player 1 uses an unknown payoff matrix in \(\mathcal{F}(A)\) Assume that players 1 and 2 play the strategies \(\mathbf{x}_{S}\) and \(\mathbf{y}_{S}\). Then, player 1 knows that player 2 is going to play one of the strategies \[\mathbf{y}_{S}^{i}(\mathbf{x}_{S})=\arg\max_{\mathbf{y}}\ \mathbf{x}_{S}^{\mathrm{T}}R_{B}^{i}\mathbf{y}\qquad R_{B}^{i}\in\mathcal{F}(B) \tag{14}\] for \(i=1,...,4\). Player 1 wants to maximize his payoff, so his strategy is given by \[\mathbf{x}_{S}=\arg\max_{\mathbf{x}}\left(\min_{i}\ \mathbf{x}^{\mathrm{T}}R_{A}\mathbf{y}_{S}^{i}(\mathbf{x})\right) \tag{15}\] That is, player 1's strategy only depends on the matrices \(R_{A}\) and \(B\) and is given by equations (14) and (15). By symmetry, player 2's strategy will be \[\begin{split}\mathbf{x}_{S}^{j}(\mathbf{y})&=\arg\max_{\mathbf{x}}\ \mathbf{x}^{\mathrm{T}}R_{A}^{j}\mathbf{y}\qquad R_{A}^{j}\in\mathcal{F}(A)\\ \mathbf{y}_{S}&=\arg\max_{\mathbf{y}}\left(\min_{j}\ (\mathbf{x}_{S}^{j}(\mathbf{y}))^{\mathrm{T}}R_{B}\mathbf{y}\right)\end{split} \tag{16}\] Once again, this game can be solved by using the extensive game form and complementarity. This variation is not considered further in this paper. ### _The function \(M(A,\mathcal{F})\)_ The examples in this paper have payoff matrices of size 2-by-2. For problems of this size, we can readily evaluate the \(M\) function defined in section III-C. The following example shows how it can be done. Consider the payoff matrices \(A\) and \(B\): \[A=\begin{bmatrix}4&1\\ 2&3\end{bmatrix}\qquad B=\begin{bmatrix}1&3\\ 4&2\end{bmatrix} \tag{17}\] The family that contains \(B\) is \(\mathcal{F}(B)=\{B,B_{12},B_{23},B_{34}\}\) with \[B_{12}=\begin{bmatrix}2&3\\ 4&1\end{bmatrix}\quad B_{23}=\begin{bmatrix}1&2\\ 4&3\end{bmatrix}\quad B_{34}=\begin{bmatrix}1&4\\ 3&2\end{bmatrix}\] Since the payoff matrices are two-dimensional, the strategies (\(\mathbf{x}\) and \(\mathbf{y}\)) are also two-dimensional. Therefore we can, without loss of generality, write each of the vectors \(\mathbf{x}\) and \(\mathbf{y}\) in terms of one free parameter \[\mathbf{x}=\begin{bmatrix}a&1-a\end{bmatrix}^{\mathrm{T}}\qquad\mathbf{y}=\begin{bmatrix}b&1-b\end{bmatrix}^{\mathrm{T}}\] where \(0\leq a\leq 1\) and \(0\leq b\leq 1\). Then, by explicit computation, we can determine the payoffs to each player. Using \(P_{N}^{C}\) to denote the payoff to player \(N\) who has payoff matrix \(C\): \[\begin{split}P_{1}^{A}&=\mathbf{x}^{\mathrm{T}}A\mathbf{y}=(4b-2)a+(3-b)\\ P_{2}^{B}&=\mathbf{x}^{\mathrm{T}}B\mathbf{y}=(2-4a)b+(2+a)\\ P_{2}^{B_{12}}&=\mathbf{x}^{\mathrm{T}}B_{12}\mathbf{y}=(3-4a)b+(1+2a)\\ P_{2}^{B_{23}}&=\mathbf{x}^{\mathrm{T}}B_{23}\mathbf{y}=(1-2a)b+(3-a)\\ P_{2}^{B_{34}}&=\mathbf{x}^{\mathrm{T}}B_{34}\mathbf{y}=(1-4a)b+(2+2a)\end{split}\] Consider the different payoffs to player 2; each has the form \(()_{1}\,b+()_{2}\), where the parenthesized terms do not contain \(b\).
Hence, for player 2 to maximize their payoff, we have: \[\text{if }()_{1}>0\text{ then }b=1\] \[\text{if }()_{1}<0\text{ then }b=0\] and the value of \(b\) is undefined otherwise. Using this logic, we find the following choices for player 2: \[\text{for }B\text{ player 2 plays}\begin{cases}b=1&\text{if }a<\nicefrac{{1}}{{2}}\\ b=0&\text{if }a>\nicefrac{{1}}{{2}}\end{cases}\] \[\text{for }B_{12}\text{ player 2 plays}\begin{cases}b=1&\text{if }a<\nicefrac{{3}}{{4}}\\ b=0&\text{if }a>\nicefrac{{3}}{{4}}\end{cases}\] \[\text{for }B_{23}\text{ player 2 plays}\begin{cases}b=1&\text{if }a<\nicefrac{{1}}{{2}}\\ b=0&\text{if }a>\nicefrac{{1}}{{2}}\end{cases}\] \[\text{for }B_{34}\text{ player 2 plays}\begin{cases}b=1&\text{if }a<\nicefrac{{1}}{{4}}\\ b=0&\text{if }a>\nicefrac{{1}}{{4}}\end{cases}\] Using these values, we can determine the payoff to player 1 (that is, \(P_{1}^{A}\)) using the different payoff matrices in \(\mathcal{F}(B)\); see Figure 2. Fig. 2: The payoffs to player 1 based on different player 2 payoff matrices; using (17). In this case, the maximum value of the payoff to player 1, minimized over all matrices in \(\mathcal{F}(B)\), is given by \(a=\nicefrac{{1}}{{4}}\) for which \(P_{1}(a)=2.5\). With 2-by-2 payoff matrices, the payoffs will always be bilinear in \(a\) and \(b\); that is, the payoff has the form \(c_{0}+c_{a}a+c_{b}b+c_{ab}ab\) for constants \(\{c_{0},c_{a},c_{b},c_{ab}\}\). Hence, the value of \(b\) (determined by player 2 to maximize their payoff) will always be zero or one, on different sides of an \(a\)_breakpoint_. Then, for each player 2 payoff matrix, player 1's payoff is composed of two different linear functions of \(a\), the change between the two occurring when \(b\) changes between zero and one, which corresponds to an \(a\) breakpoint. Hence, Figure 3 is representative of many cases (except, sometimes, the linear functions will have slopes with different signs). Alternatively, if player 1 knew that all of the matrices in the family \(\mathcal{F}(B)\) were equally likely (i.e., each has a probability one-fourth), then the maximizing solution for player 1 would be to determine the best payoff using the _average_ value of the payoffs, see Figure 3. In this case, player 1 would select \(a=0.5^{-}\) with an average payoff of \(P_{1}(a)=2.75\). This improves the payoff to player 1 by 10%. The notion of average payoff will not be pursued further in this paper. Another example, which shows that the optimal value of \(a\) does not need to occur at the breakpoint for a single payoff matrix, is given by the matrices \[A=\begin{bmatrix}4&1\\ 2&3\end{bmatrix}\qquad B=\begin{bmatrix}1&2\\ 3&4\end{bmatrix} \tag{18}\] In this case, the results corresponding to Figures 2 and 3 are shown in Figures 4 and 5. ## IV Numerical Results for strict ordinal games The numerical results in this paper were obtained using Matlab by the following process: 1. Consider every bimatrix pair in the class of strict ordinal 2-by-2 payoffs 2. For each, determine the solutions \(\{\mathbf{x},\mathbf{y}\}\) for the four cases \(\{EE,EF,FE,FF\}\) 3. For each solution, determine the payoff to player 1 and compare it to the complete information payoff in (3).
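The worked example of section III-F can also be checked numerically. The following Python sketch is ours (the paper's results were produced in Matlab): it scans the free parameter \(a\), lets player 2 best-respond with each matrix in \(\mathcal{F}(B)\) from (17), and maximizes player 1's worst-case payoff over the family, recovering \(a=1/4\) and a payoff of \(2.5\).

```python
import numpy as np

A   = np.array([[4, 1], [2, 3]], dtype=float)
B   = np.array([[1, 3], [4, 2]], dtype=float)
B12 = np.array([[2, 3], [4, 1]], dtype=float)
B23 = np.array([[1, 2], [4, 3]], dtype=float)
B34 = np.array([[1, 4], [3, 2]], dtype=float)
family = [B, B12, B23, B34]

def p1_payoff(a, R):
    """Player 1's payoff x^T A y when player 2 best-responds to x = [a, 1-a]
    using payoff matrix R (for 2-by-2 games the best response is b = 1 or b = 0)."""
    x = np.array([a, 1.0 - a])
    y = max((np.array([1.0, 0.0]), np.array([0.0, 1.0])),
            key=lambda y: float(x @ R @ y))
    return float(x @ A @ y)

grid = np.linspace(0.0, 1.0, 2001)                       # candidate values of a
worst = [min(p1_payoff(a, R) for R in family) for a in grid]
best = int(np.argmax(worst))
print(grid[best], worst[best])                           # ~0.25 and ~2.5
```

The grid search simply stands in for the closed-form breakpoint analysis; for the matrices in (17) both give the same answer.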
A summary of the numerical results is in the following table \begin{tabular}{|c|c|c|} \hline Type & Number of games with a change in payoff & Average loss \\ \hline EE & 44 & 1.8676 \\ \hline EF & 56 & 1.7045 \\ \hline FE & 44 & 1.3971 \\ \hline FF & 44 & 0.9632 \\ \hline \end{tabular} These values should be compared to the average payoff to player 1 in the complete information game in (1), which is 3.481. For example, in the "EE" case, player 1 has (on average) lost 54% of the value that could have been obtained. Below are graphs showing the numerical results; both the absolute and relative change in payoff to Player 1 under the four variations considered. The results are shown only for those games for which the payoff to player 1 changes. Fig. 4: The payoffs to player 1 based on different player 2 payoff matrices; using (18). Fig. 5: The payoffs to player 1, averaging the different player 2 payoff matrices; using (18). Fig. 3: The payoffs to player 1, averaging the different player 2 payoff matrices; using (17). ## V Observations and Conclusion We note the following: 1. Under all the variations considered, in the average case, player 1's payoff is reduced compared to the perfect information case. 2. In every case, adding additional uncertainty to either player 1 or player 2 (by transitioning from "E" to "F") results in a worse payoff for player 1 (on average over all 78 games). However, in the "FE" case, there are specific games for which player 1 obtains a small improvement. 3. For 22 games the payoffs to player 1 do not change under any of the game variations considered. Similarly, in 44 games, the payoffs to player 1 are worse in every game variation. For 12 games the payoff to player 1 is worse, but only in the "EF" case. 4. In the games "EE" and "EF" the maximum loss to player 1 can be as large as 3 (which is the maximal loss, going from 4 to 1). In the games "FE" and "FF" the maximum loss to player 1 is only 2. 5. There are several "famous" two-person games; here is how they relate to our game analysis: 1. no change in payoff to player 1 under any of the four variations: "chicken", "hero", "no conflict", and "stag hunt" 2. reduced payoff to player 1 under every one of the four variations: "assurance", "battle of the sexes", "compromise", "coordination game", "deadlock", "peace", and "prisoner's dilemma" 3. reduced payoff only in the "EF" variation: "harmony" In all cases, from the perspective of player 1's average payoff, the games "EE" and "EF" are worse than the games "FE" and "FF". Future work that would extend the analysis in this paper: 1. A complete analysis of the symmetric game could be performed. 2. An evaluation could be made of how the payoff to player 2 changes under the variations considered. This would allow questions such as the following to be answered: "How valuable is it to Player 2 to change what Player 1 is thinking about Player 2?" 3. As we've noticed, the payoff to Player 1 does not change under many games; these are games for which it is not in Player 1's interest to learn more about Player 2's payoff. Understanding the features of these games would be useful, especially if the information could be extrapolated to larger non-ordinal games.
2310.15938
ABKD: Graph Neural Network Compression with Attention-Based Knowledge Distillation
Graph Neural Networks (GNNs) have proven to be quite versatile for a variety of applications, including recommendation systems, fake news detection, drug discovery, and even computer vision. Due to the expanding size of graph-structured data, GNN models have also increased in complexity, leading to substantial latency issues. This is primarily attributed to the irregular structure of graph data and its access pattern into memory. The natural solution to reduce latency is to compress large GNNs into small GNNs. One way to do this is via knowledge distillation (KD). However, most KD approaches for GNNs only consider the outputs of the last layers and do not consider the outputs of the intermediate layers of the GNNs; these layers may contain important inductive biases indicated by the graph structure. To address this shortcoming, we propose a novel KD approach to GNN compression that we call Attention-Based Knowledge Distillation (ABKD). ABKD is a KD approach that uses attention to identify important intermediate teacher-student layer pairs and focuses on aligning their outputs. ABKD enables higher compression of GNNs with a smaller accuracy dropoff compared to existing KD approaches. On average, we achieve a 1.79% increase in accuracy with a 32.3x compression ratio on OGBN-Mag, a large graph dataset, compared to state-of-the-art approaches.
Anshul Ahluwalia, Rohit Das, Payman Behnam, Alind Khare, Pan Li, Alexey Tumanov
2023-10-24T15:34:30Z
http://arxiv.org/abs/2310.15938v1
# ABKD: Graph Neural Network Compression with Attention-Based Knowledge Distillation ###### Abstract Graph Neural Networks (GNNs) have proven to be quite versatile for a variety of applications, including recommendation systems, fake news detection, drug discovery, and even computer vision. Due to the expanding size of graph-structured data, GNN models have also increased in complexity, leading to substantial latency issues. This is primarily attributed to the irregular structure of graph data and its access pattern into memory. The natural solution to reduce latency is to compress large GNNs into small GNNs. One way to do this is via knowledge distillation (KD). However, most KD approaches for GNNs only consider the outputs of the last layers and do not consider the outputs of the intermediate layers of the GNNs; these layers may contain important inductive biases indicated by the graph structure. To address this shortcoming, we propose a novel KD approach to GNN compression that we call Attention-Based Knowledge Distillation (ABKD). ABKD is a KD approach that uses attention to identify important intermediate teacher-student layer pairs and focuses on aligning their outputs. ABKD enables higher compression of GNNs with a smaller accuracy dropoff compared to existing KD approaches. On average, we achieve a \(1.79\%\) increase in accuracy with a \(32.3\times\) compression ratio on OGBN-Mag, a large graph dataset, compared to state-of-the-art approaches. ## 1 Introduction Graph Neural Networks (GNNs) generalize Convolutional Neural Networks (CNNs) to non-Euclidean data. GNNs are widely used in a variety of fields, such as web-search recommendation systems (Ying et al., 2018), fake news detection for social networks (Han et al., 2020), modeling proteins for drug discovery (Zitnik and Leskovec, 2017), and computer vision tasks (Chen et al., 2022). Due to the expanding size of social networks and other graph-structured data, graph datasets have been steadily increasing in size (Wang et al., 2020). As datasets have expanded in size, GNN models have also increased in complexity, leading to substantial latency issues (Zhou et al., 2021; Que et al., 2023; Tan et al., 2023), as shown in figure 1. This is primarily attributed to the irregular structure of graph data and their access pattern in memory (Liu et al., 2022). Due to this limitation, large GNNs need to be compressed into smaller GNNs for latency-sensitive applications such as real-time recommendation (Liu et al., 2022), visual question answering (Senior et al., 2023), image search (Formal et al., 2020), and real-time spam detection (Li et al., 2019). ### Knowledge Distillation Knowledge Distillation (KD) is a common compression technique that uses a teacher model to supervise the training of a smaller student model (Hinton et al., 2015). While the original KD method can be applied to GNNs, it does not take into account any information about node connectivity. GraphAKD (He et al., 2022), LSP (Yang et al., 2021), and G-CRD (Joshi et al., 2022) are GNN-specific knowledge distillation methods that focus on aligning final layer node embeddings by considering node connectivity. However, these methods all consider only the node embeddings at the final layer and do not consider intermediate representations. By just aligning the final node embeddings, the model cannot learn the logic behind leveraging the connectivity of the graph or the inductive biases (Baxter, 2000) contained in the adjacency matrix. 
Therefore, the student model is effectively just learning a mapping from node attributes to more refined node embeddings. This can lead to suboptimal test generalization when the model encounters previously unseen data. **Objective**: Our goal is to achieve better generalization on out-of-distribution data, measured by accuracy @ various compression rates, by considering intermediate node embeddings and taking into account more of the inductive biases that GNNs contain. ### Attention-Based Knowledge Distillation To improve accuracy over SOTA, we must consider some of the inductive biases that other methods overlook - the connectivity of the input graph. We observe that for the vast majority of GNN architectures, the \(k^{th}\) layer of a GNN computes node embeddings by aggregating information from each node's \(k\)-hop neighborhood. Therefore, each layer contains its own inductive bias. As other KD methods consider only the node embeddings at the last layer of the teacher and student networks, they do not leverage all the inductive biases present in the network. However, one obvious challenge that is present in aligning intermediate node embeddings is that teacher and student networks will likely have a different number of hidden layers. As a result, there is no 1-1 correspondence between teacher and student layers and no way to easily figure out which teacher node embeddings should be aligned with which student node embeddings. To tackle these challenges, we propose Attention-Based Knowledge Distillation (ABKD). We use a trainable attention mechanism to learn which teacher-student pairs are the most important to align. We also utilize trainable projections into a common ABKD embedding space for teacher and student hidden layers. By aligning across intermediate layers, the student learns how to use the adjacency matrix to construct node embeddings instead of just learning a mapping from the node attributes to the final layer node embeddings (Figure 2). ### Our Contributions Our contributions are summarized as the following: 1. We design Attention-Based Knowledge Distillation (ABKD), a novel knowledge distillation approach for GNNs that incorporates the intermediate feature maps from every layer in both the teacher and student networks. This approach can be utilized to train teacher and student networks of any architectural configuration. Figure 1: Inference latency of GNNs with varying model sizes on the Flickr Zeng et al. (2020) dataset on a standard GCN model architecture with increasing embedding dimension. All tests were run on a Tesla V100 GPU. 2. We create an automatic feature linking mechanism using attention to identify which teacher-student layer pairs are the most important, which we then use to closely align their feature maps. 3. Our approach broadly improves the test accuracy of student networks over a large range of compression ratios. 4. We comprehensively test our approach on several datasets using different model architectures including GCNs (Kipf and Welling, 2016), RGCNs (Schlichtkrull et al., 2017) and GraphSAGE (Hamilton et al., 2017). We also test on several large datasets that are carefully curated to evaluate out-of-distribution generalization such as OGBN-Mag (Wang et al., 2020) and OGBN-Arxiv (Wang et al., 2020). ## 2 Related Work **Knowledge Distillation** KD for GNNs is a relatively niche field that has expanded over the last three years with the work of LSP (Yang et al., 2021). 
In this work, the authors attempt to align node embeddings between the student and teacher networks by maximizing the similarity between embeddings that share edges. As only node embeddings between edges are aligned, this KD method only preserves local topology. Joshi et al. (2022) extend LSP and propose two different KD algorithms: Global Structure Preserving Distillation (GSP) and Global Contrastive Representation Distillation (G-CRD). GSP extends LSP by considering all pairwise similarities among node features, not just pairwise similarities between nodes connected by edges. The authors also propose G-CRD which aims to implicitly preserve global topology by aligning the student and teacher node feature vectors via contrastive learning (van den Oord et al., 2018). Another work introduces graph adversarial knowledge distillation (GraphAKD), which trains the student model as a generator network and a discriminator network to distinguish between the predictions of the student and teacher models (He et al., 2022). Another work, GeometricKD forces the number of teacher and student hidden layers to be the same to study the impact of a student network operating on a smaller graph than the teacher (Yang et al., 2023). With the exception of GeometricKD, these works all consider the node embeddings at the final layer of the teacher and student network and aim to align those embeddings with one another in various ways. GeometricKD constrains the student and teacher networks to have the same number of layers, and it aligns the node embedding of teacher layer \(i\) with student layer \(i\), thus forming a 1-1 correspondence between the layers, which makes it inflexible to all student and teacher configurations. There have been several KD approaches that have been applied to CNNs that the GNN community has tried to adapt to GNNs with poor results, namely Fitnets (Romero et al., 2015) and attention transfer (Zagoruyko and Komodakis, 2016). These methods both compute a distance metric such as mean-squared error between the last layer node embeddings of the student and teacher network and do not take into account the adjacency matrix. Using attention to find similarities across student and teacher layers is a concept explored in CNNs Ji et al. (2021). However, this work's ideas cannot be applied to GNNs because the operations it uses to compare student and teacher features do not apply to GNNs. GNNs need special consideration in this regard over CNNs due to the non-spatial and unstructured form of graph data. **Compression:** Other common techniques for GNN compression include quantization and pruning. Quantization techniques differ for GNNs when compared to other deep neural networks because of unique sources of error, such as inaccurate gradient estimates, that arise due to the unstructured nature of GNNs (Tailor et al., 2021). To account for inaccurate gradient estimates, DegreeQuant, protects some number of nodes in every training iteration from lower precision training and utilizes full precision training for those nodes (Tailor et al., 2021). Other approaches consider the structure of the graph in their calculations to implicitly avoid GNN-specific sources of error (Feng et al., 2020; Bahri et al., 2021). Pruning techniques, which involve compressing networks by selectively deleting learned parameters, have also been applied to GNNs. 
These techniques involve applying strategies that have worked for other deep networks and using them to identify the most important channels for output activation (Chen et al., 2021)(Zhou et al., 2021). One approach also works to dynamically prune during execution (Chen et al., 2021). ## 3 Proposed Approach ### Intuition and Mathematical Foundations In this section, we first discuss the intuition behind ABKD and introduce some of the mathematical definitions needed to explain it thoroughly. #### 3.1.1 SoftKD Intuition In SoftKD (Hinton et al., 2015), we compute two different losses. The first, \(H(s_{p},y)\), is a cross-entropy loss between the output student probability distribution and the ground truth labels. The other, \(H(s_{p},t_{p})\), is a cross-entropy loss between the output student probability distribution and the output teacher probability distribution. The total loss is defined as: \[L_{KD}=H(s_{p},y)+\alpha H(s_{p},t_{p}) \tag{1}\] Here, \(\alpha\) is a hyper-parameter controlling how much the KD loss affects the total loss. The goal is to align the output student probability distribution with the output teacher probability distribution. The higher \(H(s_{p},t_{p})\) is, the less aligned the student and teacher output probability distributions are. #### 3.1.2 ABKD Intuition Similarly, with ABKD, we want to incorporate this intuition of alignment. However, we want to go further than just aligning the final output - we want to align the outputs at the intermediate layers, as the intermediate layers contain inductive biases that are not present in just the final layers. As one of our goals with ABKD is to work with any combination of teacher and student architectural configurations, this presents one significant challenge: Teacher and student networks will likely have a different number of hidden layers, which means there is no 1-1 correspondence between teacher and student layers. ABKD solves this problem by identifying which teacher-student layer pairs are the most important to align via an attention mechanism. This mechanism works with any arbitrary number of teacher layers and student layers, which makes this approach amenable to any arbitrary teacher-student configuration. ABKD also uses a reprojection technique to account for the student and teacher networks having different hidden dimensions. The output of each hidden layer for both the teacher and student networks is projected into a standardized embedding dimension, which ensures that we can work with student and teacher networks of any embedding dimension. As each layer represents its own semantic information, an important challenge that we faced was to ensure that each layer's feature map was not smoothed out by a single projection matrix. To this end, we use separate trainable linear layers for each hidden layer in both the teacher and student networks, in order to ensure that we don't lose out on any valuable semantic information in the hidden layers. These trainable linear layers help us construct the two key components of ABKD, which are the attention map and the dissimilarity map. At a high level, the attention map tells us how important each teacher-student layer pair is, while the dissimilarity map tells us how distant the feature maps of each teacher-student layer pair are. The teacher-student layer pairs with higher attention scores are deemed as more important, and ABKD focuses on reducing their dissimilarity scores during training. 
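For reference, the SoftKD objective of equation 1, which ABKD extends, amounts to only a few lines of code. The sketch below is ours, not the authors' implementation; it follows the usual convention of treating the teacher distribution as the target in the second term and omits the temperature scaling of Hinton et al. (2015):

```python
import torch.nn.functional as F

def soft_kd_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """L_KD = H(s_p, y) + alpha * H(s_p, t_p), cf. equation 1."""
    hard = F.cross_entropy(student_logits, labels)        # H(s_p, y)
    log_s = F.log_softmax(student_logits, dim=-1)
    t_p = F.softmax(teacher_logits, dim=-1).detach()      # teacher is frozen
    soft = -(t_p * log_s).sum(dim=-1).mean()              # H(s_p, t_p)
    return hard + alpha * soft
```

ABKD replaces the single output-level term \(H(s_{p},t_{p})\) with a layer-pair-weighted alignment term, as described next.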
\begin{table} \begin{tabular}{|c|c|} \hline Method & Number of Layers Considered \\ \hline GraphAKD & \(1\) \\ G-CRD & \(1\) \\ Fitnets & \(1\) \\ LSP & \(1\) \\ ABKD & **All** \\ \hline \end{tabular} \end{table} Table 1: Comparison of Attention-Based Knowledge Distillation with other Knowledge Distillation approaches. #### 3.1.3 Mathematical Foundation Without loss of generality, we will consider distilling a general Graph Convolution Network (GCN) (Kipf & Welling, 2016), in which the output of the \(l^{th}\) layer is \[H^{l}=\sigma(\hat{A}H^{l-1}W^{l}) \tag{2}\] Here, \(\sigma\) is an activation function and \(\hat{A}\) is the normalized adjacency matrix, \(\hat{A}=D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\), where \(D\) is the diagonal degree matrix. \(H^{l-1}\in R^{n\times d}\) represents the output of the last hidden layer and \(W^{l}\in R^{d\times d}\) represents the trainable weights of the current layer. Consider two different tensors \(T\in R^{T_{l}\times n\times d_{t}}\) and \(S\in R^{S_{l}\times n\times d_{s}}\). \(T_{l}\) and \(S_{l}\) represent the number of layers in the teacher and student networks respectively. \(n\) represents the number of nodes in the graph that is being trained on and \(d_{t}\) and \(d_{s}\) represent the dimensionality of the teacher and student networks, respectively. \(T\) and \(S\) represent the calculated layer maps before activation is applied at every layer for the teacher and student networks, respectively. ### ABKD #### 3.2.1 Attention Scores The first step of ABKD is to generate \(A\in R^{T_{l}\times S_{l}}\). \(A_{ij}\) will represent an "importance" score for the layer pair consisting of teacher layer \(i\) and student layer \(j\). We take the average of the feature maps along the node dimension to compute a mean node feature for every layer in both the teacher and student networks. Call these tensors \(T_{a}\in R^{T_{l}\times d_{t}}\) and \(S_{a}\in R^{S_{l}\times d_{s}}\). Then, we pass each layer in \(T_{a}\) through its own linear layer to create \(T_{p}\in R^{T_{l}\times d_{a}}\), where \(d_{a}\) is the embedding dimension of ABKD. Similarly, we create \(S_{p}\in R^{S_{l}\times d_{a}}\). We can finally generate \(A\) in the following manner: \[A=softmax(\frac{T_{p}S_{p}^{T}}{\sqrt{d_{a}}}) \tag{3}\] Figure 2: ABKD generates an attention map using a trainable attention mechanism and a dissimilarity map using a trainable subspace projection. The loss matrix is an element-wise multiplication of the attention matrix and the dissimilarity matrix. #### 3.2.2 Dissimilarity Scores The next step is to compute a pairwise dissimilarity score for each teacher-student layer pair. Again, we project the features into \(d_{a}\). For calculating the attention scores, we averaged over the node dimension before projecting, as our goal was to identify important layers. When calculating the pairwise dissimilarity, we want to incorporate the per-node embeddings. So, we use a separate set of projection matrices. We use \(P_{t}\in R^{d_{t}\times d_{a}}\) and \(P_{s}\in R^{d_{s}\times d_{a}}\) to represent the projections. However, distance metrics are less semantically valuable if \(d_{a}\) is high. To alleviate this problem, we define a trainable matrix \(P\in R^{d_{a}\times d_{a}}\) to project all vectors into the subspace defined by the column space of \(P\).
Since the cardinality of the subspace defined by the column space of \(P\) will be smaller than or equal to the cardinality of \(R^{d_{a}}\), distance metrics within the subspace will be more valuable on average compared to distance metrics in \(R^{d_{a}}\). The final step is to average over the embedding dimension and then produce \(D\in R^{T_{1}\times S_{l}}\), which gives dissimilarity scores for each teacher-student layer pair. For calculating the dissimilarity, we experiment with Euclidean and cosine distance, but Euclidean distance generally tends to perform better. The dissimilarity score for a layer pair \((i,j)\) can be represented as: \[D_{ij}=||(T_{i}P_{t}-S_{j}P_{s})P\frac{\mathds{1}_{d_{a}}}{d_{a}}||_{2}^{2} \tag{4}\] where \(\mathds{1}_{d_{a}}\) is a vector of \(1\)s in \(R^{d_{a}}\). #### 3.2.3 Final Loss Calculation To produce the final loss matrix, we element-wise multiply \(A\) and \(D\) and then take the row-wise mean to produce a single number that represents the ABKD loss. \[L_{abkd}=(\mathds{1}_{T_{l}})^{T}(A\odot D)(\frac{\mathds{1}_{S_{l}}}{S_{l}}) \tag{5}\] The final loss is calculated as \[L=H(s_{p},y)+\beta L_{abkd} \tag{6}\] There is one important theorem to consider that proves \(L_{abkd}\) distills valuable knowledge from the teacher network to the student network. **Theorem 1:** Consider a teacher layer \(i\) and a student layer \(j\) such that \(j<i\). Consider the weight \(W_{j}^{s}\) which is the associated weight for the \(j^{th}\) layer of the student network. The weight update for \(W_{j}^{s}\) will be affected by \(W_{i}^{t}\), which are the weights for the \(i^{th}\) layer of the teacher network. **Proof:** To calculate the weight update, we have to first formulate a loss for the layer pair \((i,j)\). By adapting equation 5 to a specific element in the loss matrix, we get: \[L_{ij} \sim(\frac{\mathds{1}_{n}}{n})^{T}(T_{i}^{p}(W_{j}^{ps})^{T})(W_{ j}^{s})^{T}((H_{j-1}^{s})^{T}\hat{A}^{T}))\frac{\mathds{1}_{n}}{n}*D_{ij}\] \[T_{i}^{p} =\hat{A}H_{i-1}^{t}W_{i}^{t}W_{i}^{pt}\] where \(W_{i}^{pt}\) and \(W_{j}^{ps}\) represent the projection matrices used to calculate the attention score. Assuming that our weight updates are performed by gradient descent, to calculate the weight update for \(W_{j}^{s}\) due to \(L_{ij}\), we have to first calculate \(\frac{\partial L_{ij}}{\partial W_{j}^{s}}\in R^{d_{a}\times d_{s}}\): \[\frac{\partial L_{ij}}{\partial W_{j}^{s}}=\frac{\partial A_{ij}}{W_{j}^{s}}D_{ ij}+\frac{\partial D_{ij}}{W_{j}^{s}}A_{ij} \tag{7}\] We focus on the first term and obtain the result: \[\frac{\partial A_{ij}}{\partial W_{j}^{s}}\sim((H_{j-1}^{s})^{T}\hat{A}^{T}( \frac{\mathds{1}_{n}}{n}))((\frac{\mathds{1}_{n}}{n})^{T}T_{i}^{p}(W_{j}^{ps})^ {T}) \tag{8}\] It is apparent that as \(\frac{\partial A_{ij}}{\partial W_{j}^{s}}\) contains the term \(T_{i}^{p}=\hat{A}H_{i-1}^{t}W_{i}^{t}W_{i}^{pt}\), the weight update will reflect terms from the \(i^{th}\) teacher layer. This theorem goes to show that even though student layer \(j\) is responsible for aggregating information for every node from its \(j\)-hop neighborhood, its weights collect knowledge from teacher layer \(i\) - knowledge that would be unavailable to student layer \(j\) using any of the other GNN KD approaches. 
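Putting equations 3-6 together, the ABKD objective can be sketched compactly. The PyTorch module below is our own illustration and not the authors' released code: tensor shapes follow section 3.2, the softmax of equation 3 is taken over the student-layer dimension (one of several reasonable normalizations), and the total loss of equation 6 adds this term to the usual cross-entropy with weight \(\beta\).

```python
import math
import torch
import torch.nn as nn

class ABKDLoss(nn.Module):
    """Sketch of L_abkd (equations 3-5); all names and shapes are ours."""
    def __init__(self, T_l, S_l, d_t, d_s, d_a):
        super().__init__()
        # one linear layer per hidden layer, used for the attention map (equation 3)
        self.att_t = nn.ModuleList([nn.Linear(d_t, d_a) for _ in range(T_l)])
        self.att_s = nn.ModuleList([nn.Linear(d_s, d_a) for _ in range(S_l)])
        # projections P_t, P_s and the subspace matrix P (equation 4)
        self.P_t = nn.Linear(d_t, d_a, bias=False)
        self.P_s = nn.Linear(d_s, d_a, bias=False)
        self.P = nn.Linear(d_a, d_a, bias=False)
        self.d_a = d_a

    def forward(self, T_feats, S_feats):
        # T_feats: T_l tensors of shape (n, d_t); S_feats: S_l tensors of shape (n, d_s)
        T_p = torch.stack([f(h.mean(dim=0)) for f, h in zip(self.att_t, T_feats)])
        S_p = torch.stack([f(h.mean(dim=0)) for f, h in zip(self.att_s, S_feats)])
        A = torch.softmax(T_p @ S_p.T / math.sqrt(self.d_a), dim=-1)      # equation 3

        ones = torch.full((self.d_a,), 1.0 / self.d_a, device=T_p.device)
        rows = []
        for Ti in T_feats:                                                # equation 4
            row = [(self.P(self.P_t(Ti) - self.P_s(Sj)) @ ones).pow(2).sum()
                   for Sj in S_feats]
            rows.append(torch.stack(row))
        D = torch.stack(rows)

        return (A * D).sum(dim=0).mean()                                  # equation 5

# total loss of equation 6 (sketch):
#   loss = torch.nn.functional.cross_entropy(student_logits, y) + beta * abkd(T_feats, S_feats)
```

The double loop over layer pairs is cheap in practice, since \(T_{l}\) and \(S_{l}\) are small; teacher feature maps would normally be detached so that only the student and the distillation modules receive gradients.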
An important follow-up observation is that the loss for a teacher-student layer pair will be higher if the pair is deemed as more important and if their projected feature maps have a high dissimilarity score (\(D_{ij}>0.5\)), which indicates that they aren't closely aligned. To verify this observation, consider an arbitrary teacher layer \(i\) and an arbitrary student layer \(j\). Consider \(A_{ij}\). If the pair \((i,j)\) is deemed important, \(A_{ij}\) will be high. Now consider \(D_{ij}\). If the pair \((i,j)\) is not closely aligned, then \(D_{ij}\) will be high. As \(L_{ij}=A_{ij}*D_{ij}\), if both \(A_{ij}\) and \(D_{ij}\) are high, then \(L_{ij}\) will also be high, hence proving that the loss is high for important but misaligned layer pairs. Our results also empirically prove that ABKD can train student models that generalize better than other KD approaches. **ABKD aligns intermediate embeddings**: To empirically prove that ABKD actually aligns intermediate embeddings based on the attention matrix, we visualize the attention and dissimilarity maps before and after training in figure 4. We train on OGBN-Mag and use a deeper teacher network of 5 layers and a hidden dimension of 512. The student network has 3 layers and a hidden dimension of 32. Our results show that dissimilarity scores are low where the attention scores are high and vice versa. This is in line with the intuition that we constructed in 3.2.3. **Deep GNNs**: We also test ABKD on deep GNN architectures. While most GNN architectures are shallow due to the problem of over-smoothing, there are some approaches that alleviate this issue, allowing for deep GNN architectures. One of these approaches is GCNII (Chen et al., 2020). We test on Cora (McCallum et al., 2000), Citeseer (Sen et al., 2008), Pubmed (Namata et al., 2012) and NELL (Carlson et al., 2010). Table 3 shows that ABKD is able to distill these deep GCNIIs into shallower GCNIIs with higher accuracy compared to other distillation methods in high compression settings. While currently most GNN use cases require shallow GNNs, this could be useful for future applications that require deeper GNNs. Figure 4: Attention and Dissimilarity maps before and after training with ABKD. Cooler colors refer to lower scores and warmer colors correspond to higher scores. \begin{table} \begin{tabular}{c|c c c c} \hline \hline **Dataset** & **Cora** & **Citeseer** & **Pubmed** & **NELL** \\ **Teacher** & GCNII (64L-64H) & GCNII (64L-64H) & GCNII (64L-64H) & GCNII (64L-64H) \\ **Student** & GCNII (4L-4H) & GCNII (4L-4H) & GCNII (4L-4H) & GCNII (4L-4H) \\ \hline Teacher & \(88.40\) & \(77.33\) & \(89.78\) & \(95.55\) \\ Student & \(74.22\pm 1.32\) & \(68.14\pm 1.45\) & \(88.66\pm 0.87\) & \(85.20\pm 0.75\) \\ LSP & \(75.18\pm 1.45\) & \(70.43\pm 1.32\) & \(89.01\pm 1.73\) & \(85.20\pm 1.32\) \\ GSP & \(78.32\pm 1.21\) & \(69.43\pm 0.87\) & \(89.13\pm 1.05\) & \(86.13\pm 1.47\) \\ G-CRD & \(83.35\pm 1.32\) & \(71.42\pm 1.21\) & \(89.73\pm 1.05\) & \(88.61\pm 1.01\) \\ ABKD & \(\mathbf{84.27\pm 1.48}\) & \(\mathbf{71.96\pm 0.87}\) & \(\mathbf{89.87\pm 0.45}\) & \(\mathbf{91.83\pm 0.74}\) \\ \# Parameters & \(5835\) & \(14910\) & \(2083\) & \(22686\) \\ Ratio & \(60.7\times\) & \(33.5\times\) & \(141.3\times\) & \(27.7\times\) \\ \hline \hline \end{tabular} \end{table} Table 3: Average accuracies over several trials for a variety of datasets, using GCNII. The results are based on the average of five trials, with each distillation method applied to the same set of student weights.
**Improved Weight Initialization for Highly Compressed Networks**: We find that for smaller datasets, information from the teacher network is mainly distilled into one layer of the student network, as shown in Figure 5. Our hypothesis for this occurrence is that smaller datasets are not very complex and one layer is sufficient for learning most of the patterns. Through experimentation, we prove that when we initialize a one-layer network from the weights of this particular layer, we get improved accuracy compared to random initialization, as shown in Table 4. ### Ablation Studies **Aligning Intermediate Layers is Important**: To prove that the aligning of intermediate layers is necessary for superior performance, we experiment with a variant of ABKD, which we call modified ABKD, where we set \(A\in R^{T_{1}\times S_{1}}\) to all zeros, but we set the bottom right value to 1. This indicates that we are only interested in the dissimilarity between the last layer node embeddings of the teacher and student models. The results in table 5, prove that we gain accuracy by considering the outputs of intermediate layers for both teacher and student models. In this experiment, we start from the same set of initialized weights for both the ABKD and modified ABKD approaches. **Improvements due to Subspace Projection**: In our approach, we introduced the concept of subspace projection as an alleviation to high dimensional embedding spaces. While it is not needed for ABKD to work, it does improve our results as the learned subspace projection matrix tends to be of lower rank than the embedding dimension. This indicates that we can project our feature maps into subspaces smaller than \(R^{d_{a}}\), which increases the semantic value of the dissimilarity scores. **One Linear Layer Per Hidden Layer Is Necessary**: In our approach, we mention that each hidden teacher and student layer is assigned its own linear layer for projection into \(d_{a}\). This is because each \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Initialized & Random Init & No Training \\ \hline Cora & \(80.29\) & \(73.58\) & \(65.36\) \\ Citeseer & \(70.12\) & \(68.12\) & \(54.20\) \\ Pubmed & \(88.76\) & \(86.03\) & \(72.90\) \\ \hline \hline \end{tabular} \end{table} Table 4: Results for weight initialization experiment. Experimental details can be found in the appendix. Figure 5: Attention maps for Cora, Citeseer, and Pubmed. Each color in the heatmap represents the importance score associated with that teacher-student layer pair. Warmer colors mean higher important scores. It seems apparent that most of the knowledge from the teacher layers is distilled into one student layer. \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Original-Mag & OGBN-Arxiv \\ \hline Student & \(44.46\pm 0.54\) & \(71.27\pm 0.48\) \\ ABKD & \(\mathbf{47.46\pm 0.45}\) & \(\mathbf{73.18\pm 0.56}\) \\ Modified ABKD & \(46.56\pm 0.60\) & \(71.89\pm 0.87\) \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study comparing modified ABKD that only considers aligning the last layer node embeddings with ABKD that considers intermediate layer node embeddings. The teacher and student models are the same as the ones in Table 2. layer represents its own \(k\)-hop neighborhood, and using just one linear layer would prove inadequate in capturing the full spectrum of essential semantic information contained within each layer. 
We run an experiment in which we use only one linear layer for the teacher and student projections and demonstrate that there is a significant accuracy dropoff compared to using individual linear layers for the projection. ## 5 Conclusion Graph Neural Networks (GNNs) have increased in complexity in recent years, owing to the rapidly increasing size of graph datasets. This increase in complexity poses a problem for latency-sensitive applications such as real-time recommendation and real-time spam detection, as GNN model sizes do not scale well with latency. To address this predicament, we propose an innovative solution known as Attention-Based Knowledge Distillation (ABKD). ABKD employs an attention-based feature linking mechanism to identify important intermediate teacher-student layer pairs and focuses on aligning the node embeddings of those pairs. This knowledge distillation approach broadly outperforms SOTA GNN-specific KD approaches over a wide variety of compression settings. It also works with both deep and shallow networks as proven by our experiments and performs well with several different types of GNN architectures. On average, we achieve a \(1.79\%\) increase in accuracy with a \(32.3\times\) compression ratio on OGBN-Mag, a large graph dataset, compared to state-of-the-art approaches. \begin{table} \begin{tabular}{r c c} \hline \hline Dataset & Subspace Projection & No Projection \\ \hline Cora & \(84.27\pm 1.32\) & \(83.78\pm 1.25\) \\ Citeseer & \(71.96\pm 0.83\) & \(71.46\pm 1.21\) \\ Pubmed & \(89.87\pm 0.87\) & \(89.03\pm 1.04\) \\ NELL & \(91.83\pm 0.85\) & \(90.98\pm 0.75\) \\ OGBN-Mag & \(47.21\pm 0.32\) & \(46.83\pm 0.45\) \\ OGBN-Arxiv & \(73.25\pm 0.56\) & \(72.87\pm 0.34\) \\ \hline \hline \end{tabular} \begin{tabular}{r c} \hline Dataset & Multiple Linear Layers & One Linear Layer \\ \hline Cora & \(84.27\pm 1.48\) & \(75.52\pm 1.32\) \\ Citeseer & \(71.96\pm 0.87\) & \(68.86\pm 1.27\) \\ Pubmed & \(89.87\pm 0.45\) & \(88.75\pm 0.64\) \\ NELL & \(91.83\pm 0.74\) & \(85.38\pm 0.68\) \\ OGBN-Mag & \(47.21\pm 0.32\) & \(44.67\pm 0.54\) \\ OGBN-Arxiv & \(73.25\pm 0.56\) & \(71.23\pm 0.47\) \\ \hline \hline \end{tabular} \end{table} Table 6: Subspace Projection Ablation Results. The teacher and student networks are the same as the ones described in tables 2 and 3. \begin{table} \begin{tabular}{r c c} \hline \hline Dataset & Multiple Linear Layers & One Linear Layer \\ \hline Cora & \(84.27\pm 1.48\) & \(75.52\pm 1.32\) \\ Citeseer & \(71.96\pm 0.87\) & \(68.86\pm 1.27\) \\ Pubmed & \(89.87\pm 0.45\) & \(88.75\pm 0.64\) \\ NELL & \(91.83\pm 0.74\) & \(85.38\pm 0.68\) \\ OGBN-Mag & \(47.21\pm 0.32\) & \(44.67\pm 0.54\) \\ OGBN-Arxiv & \(73.25\pm 0.56\) & \(71.23\pm 0.47\) \\ \hline \hline \end{tabular} \end{table} Table 7: Linear Layer Ablation Results. The teacher and student networks are the same as the ones described in tables 2 and 3.
2308.00819
Search for an invisible scalar in $t \bar{t}$ final states at the LHC
We use the current $t\bar t$ experimental analysis to look for Dark Matter (DM) particles hidden in the final state. We present a phenomenological study where we successfully perform the reconstruction of a $t\bar{t}$ system in the presence of a scalar mediator $Y_0$, that couples to both Standard Model (SM) and to DM particles. We use a \texttt{MadGraph5\_aMC@NLO} simplified DM model, where signal samples of $pp \rightarrow t\bar{t}Y_0$ are generated at the Large Hadron Collider (LHC) with both Charge-Parity (CP) -even and CP-odd couplings of $Y_0$ to the top quarks. Different mass scales for the $Y_0$ mediator are considered, from the low mass region ($\sim$ 0~GeV) to masses close to the Higgs boson mass (125~GeV). The dileptonic final states of the $t\bar{t}$ system were used in our analysis. The reconstruction of the $t\bar{t}$ system is done with a kinematic fit, without reconstructing the mediator. All relevant SM backgrounds for the dileptonic $t\bar{t}$ search at the LHC are considered. Furthermore, CP angular observables were used to probe the CP-nature of the coupling between the mediator and top-quarks, which allowed to set confidence level (CL) limits for those Yukawa couplings as a function of the mediator mass.
Duarte Azevedo, Rodrigo Capucha, Pedro Chaves, João Bravo Martins, António Onofre, Rui Santos
2023-08-01T20:14:02Z
http://arxiv.org/abs/2308.00819v1
# Search for an invisible scalar in \(t\bar{t}\) final states at the LHC ###### Abstract We use the current \(t\bar{t}\) experimental analysis to look for Dark Matter (DM) particles hidden in the final state. We present a phenomenological study where we successfully perform the reconstruction of a \(t\bar{t}\) system in the presence of a scalar mediator \(Y_{0}\), that couples to both Standard Model (SM) and to DM particles. We use a MadGraph5_aMC@NLO simplified DM model, where signal samples of \(pp\to t\bar{t}Y_{0}\) are generated at the Large Hadron Collider (LHC) with both Charge-Parity (CP) -even and CP-odd couplings of \(Y_{0}\) to the top quarks. Different mass scales for the \(Y_{0}\) mediator are considered, from the low mass region (\(\sim 0\) GeV) to masses close to the Higgs boson mass (125 GeV). The dileptonic final states of the \(t\bar{t}\) system were used in our analysis. The reconstruction of the \(t\bar{t}\) system is done with a kinematic fit, without reconstructing the mediator. All relevant SM backgrounds for the dileptonic \(t\bar{t}\) search at the LHC are considered. Furthermore, CP angular observables were used to probe the CP-nature of the coupling between the mediator and top-quarks, which allowed to set confidence level (CL) limits for those Yukawa couplings as a function of the mediator mass. Introduction Evidence for the existence of dark matter (DM) provided by astrophysical observations, be it from gravitational lensing effects [1, 2, 3, 4], galactic rotational velocity curves [5] or from other measurements like the collision of clusters of galaxies (Bullet Cluster) [6], is by now overwhelming. However, the nature of this non-luminous matter remains unknown in spite of decades of intensive searches in a vast array of experiments. From the observations, we can hypothesise that if DM is actually composed of particles, they only interact very weakly with the Standard Model (SM). Since SM neutrinos have been mostly ruled out as possible DM candidates, due to cosmological constraints [7], this suggests the possible existence of a "dark sector" composed of particles beyond the SM. Attempts at direct and indirect detection of the myriad of particles proposed from extending the SM with such a sector (such as weakly interacting massive particles and axions, among many others) have, so far, provided no definitive results favouring any particular hypotheses [8, 9]. DM searches at colliders have focused on mono-events, such as mono-jet, mono-Higgs among others [10, 11, 12]. There are also bounds on the portal couplings from the invisible branching ratio of the 125 GeV Higgs [13, 14]. Searches for DM particles produced alongside a \(t\bar{t}\) pair and single top quark events have also been performed in the past [15, 16, 17, 18] mostly using variables such as missing energy as a discriminator, with no attempt to reconstruct the kinematics of the top quarks. However, and to best of our knowledge, this is the first time a study is designed to look for DM particles hidden in the current on-going searches and analysis. Let us consider as an example the case of \(t\bar{t}\) production in the di-leptonic channel. The question we want to answer is: if a very light DM particle would be produced alongside the \(t\bar{t}\) pair would we see any difference in any distributions? If no differences are found, could the analysis already performed be used to limit the couplings of DM particles for very light invisible particles? 
One should note that from the point of view of high energy collider physics we are really exploring the case of a zero DM mass particle which means that we are spanning DM masses from the scenarios of ultra-light particles to a few GeV in mass. That is, for LHC energies, masses below about 1 GeV are for all practical purposes equal to zero. Still we will also show how the \(t\bar{t}\) analysis performs in the case of a 125 GeV scalar. In order to implement our idea we consider a new mediator that couples to both DM and to the SM particles. We study the possibility of using previous mainstream experimental analysis of \(pp\to t\bar{t}\) in the di-leptonic channel to gauge the impact of a spin-0 DM mediator (\(J^{CP}=0^{\pm}\)) in the associated production process \(t\bar{t}Y_{0}\). The analysis is performed within the description of simplified models of DM production at the LHC, where the DM mediator (\(Y_{0}\)) couples to the top-quarks proportionally to the top mass. The results are presented as a function of the modifier of this new Yukawa coupling. This is a convenient approach, as the LHC can explore a large spectrum of DM mediator masses and coupling strengths, allowing to access the CP-nature of these mediators, in case they exist even as mixed CP states. This paper is organised as follows: the simplified DM model, the relevant parameters and the angular observables we used, are presented in Section 2. The event generation and simulation are described in Section 3 and, in Section 4, the event selection and kinematic reconstruction are discussed. Our results are presented in Section 5 and the main conclusions are described in Section 6. The DM Lagrangian In our study, we used the simplified DM model DMsimp[19] where, besides the scalar \(Y_{0}\) boson, we also have a dark sector that couples only to \(Y_{0}\). In our paper, we will remain agnostic to the latter, focusing solely on the interaction between the \(Y_{0}\) DM mediator to the SM content. In particular, we will assume Yukawa couplings proportional to the mass of the respective SM particle and hence dedicate ourselves exclusively to top quarks. The Lagrangian density can thus be simplified and written as follows \[\mathcal{L}_{SM}^{Y_{0}}=\frac{y_{33}^{t}}{\sqrt{2}}\bar{t}(g_{u _{33}}^{S}+ig_{u_{33}}^{P}\gamma^{5})tY_{0}\quad, \tag{2.1}\] where the \(g_{u_{33}}^{S/P}\) are the CP-even/-odd couplings of the DM mediator (\(Y_{0}\)) to top quarks, respectively. They are normalized to the SM Yukawa couplings, \(y_{ii}^{f}=\sqrt{2}m_{f}/v\). The scalar hypothesis (CP=+1) is given by setting \(g_{u_{33}}^{S}=1\) and \(g_{u_{33}}^{P}=0\) and for the pseudo-scalar scenario (CP=-1) we set \(g_{u_{33}}^{S}=0\) and \(g_{u_{33}}^{P}=1\). When both \(g_{u_{33}}^{S/P}\neq 0\), the interaction has both CP-even and -odd components and is thus CP-violating. Our goal is to explore how angular distributions of SM particles may help probing the dark sector, by looking into the expected changes of these observables in the presence of this DM mediator. Several CP-observables have been proposed in the literature to probe the CP-nature of the coupling of top quarks to the Higgs boson at the LHC or future colliders, using mainly the \(t\bar{t}H\) channel [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]. 
The vast majority of these variables are only sensitive to the square terms \((g_{u_{33}}^{S})^{2}\) and \((g_{u_{33}}^{P})^{2}\) that appear in the cross section of the interaction described by equation 2.1. After looking in detail at several possible observables, we concluded that the most effective ones are the azimuthal angle difference \(\Delta\phi_{\ell-\ell^{+}}\) of the charged leptons that come from the decay of top quarks, and the \(b_{4}\) variable evaluated in the laboratory frame (LAB), \(b_{4}=(p_{t}^{z}.p_{\bar{t}}^{z})/(|\vec{p}_{t}|.|\vec{p}_{\bar{t}}|)\), where the \(z\)-direction corresponds to the beam direction, and \(\vec{p}_{t(\bar{t})}\) and \(p_{t(\bar{t})}^{z}\) correspond to the total and z-component of the top (anti-top) quark momentum measured in the LAB frame, respectively. It is worth noting that the \(b_{4}\) variable depends on the \(t\) and \(\bar{t}\) polar angles, \(\theta_{t}\) and \(\theta_{\bar{t}}\) respectively, with respect to the \(z\)-direction, and can be expressed as \(b_{4}=\cos\theta_{t}\times\cos\theta_{\bar{t}}\). In order to evaluate this variable, the kinematic reconstruction of the \(t\bar{t}\) system needs to be accomplished. This will be discussed in the next sections. ## 3 Event generation and simulation LHC-like signal and background events were generated with MadGraph5_aMC@NLO[51], with a centre-of-mass energy of 13 TeV. The DM simplified model, DMsimp[19]1, was used to generate \(pp\to t\bar{t}Y_{0}\) signal events, at Leading Order (LO). The pure scalar and pseudo-scalar signals were generated by setting the respective couplings as mentioned in the previous section (following the Lagrangian density in equation 2.1). The mass of the DM mediator was set to \(m_{Y_{0}}=0,1,10\) and 125 GeV, and the top quark mass to \(m_{t}=172.5\ GeV\). Only the dileptonic final state of the \(t\bar{t}\) system was considered (\(t\bar{t}\to bW^{+}\bar{b}W^{-}\to b\ell^{+}\nu_{\bar{t}}\bar{b}\ell^{-}\bar{ \nu}_{\ell}\)), with the DM mediator forced not to decay, although if we allow the DM mediator to decay to mostly DM particles, the analysis and subsequent results do not change. Backgrounds from SM \(t\bar{t}\) (plus up to 3 jets), \(t\bar{t}V\) (plus up to 1 jet), \(ttH\), single top quark production (\(t\)-, \(s\)- and \(Wt\)-channels), \(W/Z\) (plus up to 4 jets), \(W(Z)b\bar{b}\) (plus up to 2 jets) and \(WW,ZZ,WZ\) diboson processes were also generated using MadGraph5_aMC@NLO at LO. Following event generation and hadronization by PYTHIA[52], all signal and background events went through a fast simulation of a typical LHC detector performed by Delphes[53], using the default ATLAS detector cards. Further details on the event generation and detector simulation can be found in [44]. The analysis of signal and background events is performed within the MadAnalysis5[54] framework. ## 4 Event selection and kinematic reconstruction Events are selected by requiring the jets and leptons reconstructed by Delphes to have their pseudo-rapidity2\(\eta<2.5\) and transverse momenta \(p_{T}>20\) GeV. Only events with two jets and two isolated leptons of opposite charge are accepted. To avoid contamination from the \(Z\) + jets background, we require the invariant mass of the two lepton system to fulfil \(|m_{\ell^{+}\ell^{-}}-m_{Z}|>10\) GeV. Further details on the event selection criteria can be found in [45]. Footnote 2: The pseudo-rapidity is defined by \(\eta=-ln[tan(\theta/2)]\), where \(\theta\) is the particle polar angle. 
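As an illustration of the selection just described, the cuts can be written in a few lines of analysis code. The sketch below is ours and only mirrors the requirements stated in the text; the object definitions, lepton isolation and flavour treatment in the actual Delphes/MadAnalysis5 analysis may differ:

```python
import math

M_Z = 91.19  # Z boson mass in GeV

def invariant_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors p = (E, px, py, pz) in GeV."""
    e, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def select_event(leptons, jets):
    """Dileptonic selection sketch: each object is a dict with 'pt' (GeV),
    'eta', 'charge' and a four-vector 'p4'."""
    leps = [l for l in leptons if l["pt"] > 20.0 and abs(l["eta"]) < 2.5]
    js = [j for j in jets if j["pt"] > 20.0 and abs(j["eta"]) < 2.5]
    if len(js) != 2 or len(leps) != 2:              # two jets, two leptons
        return False
    if leps[0]["charge"] * leps[1]["charge"] >= 0:  # opposite-charge pair
        return False
    m_ll = invariant_mass(leps[0]["p4"], leps[1]["p4"])
    return abs(m_ll - M_Z) > 10.0                   # Z + jets veto
```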
In the reconstruction of the \(t\bar{t}\) system, we assume the reconstructed leptons are the ones from the W decays, originated in the top quark decays. We then need to assign the reconstructed jets to the correct b-quarks from the associated top quarks. In order to avoid mismatches, once two jets are present in the final state, we use multivariate statistical methods from the Toolkit for Multivariate Data Analysis, TMVA[55], to find the right pairing of jets and leptons.. To that effect we use several distributions (which already include hadronization and shower effects), where _right_ and _wrong_ combinations are compared. A wrong combination happens whenever a jet not originating from a top decay is assigned to its corresponding b-quark. Right Figure 1: Normalized TMVA input variable distributions for correct combinations (labeled as _signal_, in blue) and wrong combinations (labeled as _background_, in red), for a DM \(J^{P}=0^{+}\) mediator. The \(\Delta R\) between the \(\ell^{+}\) and the \(b\)-jet from the \(t\) decay, is shown on the left plot. The corresponding \(\Delta\Phi\) distribution can be seen on the right. (labelled as Signal, in blue) and wrong (labelled as Background, in red) combinations are represented in Figures 1 and 2, for the \(\Delta R\)3, \(\Delta\Phi\) and \(\Delta\theta\) between the \(\ell^{+}\) lepton and the jet from the hadronization of the \(b\)-quark (originated in the \(t\) decay and labelled as \(b_{t}\)). Similar distributions were also obtained for the \(\ell^{-}\) lepton, and used to optimize finding the right combination. Clear differences between the wrong and right combinations are visible in all distributions shown. The invariant mass difference between the pairs \((b_{t},l^{+})\) and \((\bar{b}_{t},l^{-})\) is also shown in Figure 2, for the right and wrong combinations. Several multivariate statistical methods were then trained by TMVA, using right and wrong combinations distributions for training and testing. All individual distributions were combined into a single discriminant classifier for each one of the methods. In Figure 3 (left), we show the Receiver Operating Characteristic (ROC) curve for the \(Y_{0}\) scalar (\(J^{P}=0^{+}\)) and pseudo-scalar (\(J^{P}=0^{-}\)) mediators in the top and bottom plots, respectively. From the ROC curves we can see that the best method is a Boosted Decision Tree with Gradient boost (BDTG). The corresponding classifier outputs are shown in Figure 3 (right), for scalar and pseudo-scalar DM mediators in the top and bottom plots, respectively. From the comparison between both cases, we can see that the scalar mediator is more challenging, with a slightly worse ROC curve. For this reason, from this point on, all results shown originate directly from the scalar mediator analysis, since this case represents the most conservative scenario. Footnote 3: \(\Delta R\equiv\sqrt{\Delta\Phi^{2}+\Delta\eta^{2}}\), where \(\Delta\Phi\left(\Delta\eta\right)\) correspond to the difference in the azimuthal angle (pseudo-rapidity) of two objects. To reconstruct the 3-momentum of the undetected neutrinos from the top quark decays, we Figure 2: Normalized TMVA input variable distributions for correct combinations (labeled as _signal_, in blue) and wrong combinations (labeled as _background_, in red), for a DM \(J^{P}=0^{+}\) mediator. The \(\Delta\theta\) between the \(\ell^{+}\) and the \(b\)-jet from the \(t\) decay is shown on the left and the lepton plus \(b\)-jet mass difference is shown on the right. 
impose the following energy-momentum conservation conditions to events, \[(p_{\nu}+p_{\ell^{+}})^{2}=m_{W}^{2}, \tag{4.1}\] \[(p_{\bar{\nu}}+p_{\ell^{-}})^{2}=m_{W}^{2},\] \[(p_{W^{+}}+p_{b})^{2}=m_{t}^{2},\] \[(p_{W^{-}}+p_{\bar{b}})^{2}=m_{t}^{2},\] \[p_{\nu}^{x}+p_{b}^{x}=\not{E}^{x},\] \[p_{\nu}^{y}+p_{b}^{y}=\not{E}^{y},\] where \(p_{\varsigma}\) (\(p_{\varsigma}^{\kappa}\)) represents the four-momentum of particle \(\varsigma\) (its projection along the \(\kappa\)-axis). In the first four equations mass constraints are imposed, where neutrinos and charged leptons are assumed to reconstruct the masses of the \(W\) bosons they originated from which, when combined with the right jet, should reconstruct the correspondent top quark masses. We also consider (last two equations) the total missing transverse energy is wholly accounted for by the neutrinos. In this approximation, we are assuming the DM mediator contribution to the missing transverse energy to be negligible (as well as its \(z\)-axis component) when compared to the neutrinos contribution. In order to find the best solution for each event, the top quark and \(W\) boson mass values, used by the fit, were sampled 500 times from 2-dimensional probability Figure 3: Background rejection versus signal acceptance (ROC curve) for different multi-variate methods are compared, for the \(J^{P}=0^{+}\) mediator (top left). The distribution of the best classifier (BDTG) is also shown (top right). The equivalent plots for the case of the \(J^{P}=0^{-}\) mediator, are shown in the bottom plots. distribution functions (_p.d.f_.s) obtained from parton level (with shower effects) \(t\bar{t}Y_{0}\) signal events (see [45], for more details). The neutrinos four-momenta are obtained from solving the set of equations 4.1, above. If a solution is found, new mass values are tried around the found value (up to 6 times), to check if neighbour mass points can indeed provide a better fit overall. Due to the quadratic form of the mass equations, multiple solutions may exist for a single event. We chose the solution that provides the highest value of the likelihood (\(L\)) constructed using parton level (with shower effects) distributions, in particular the _p.d.f_.s for the transverse momenta of the neutrinos, top quarks and \(t\bar{t}\) system, \(P(p_{T_{\nu}})\), \(P(p_{T_{\bar{\nu}}})\), \(P(p_{T_{t}})\), \(P(p_{T_{\bar{t}}})\) and \(P(p_{T_{t\bar{t}}})\), respectively. This likelihood is defined according to \[L\propto\frac{1}{p_{T_{\nu}}p_{T_{p}}}P(p_{T_{\nu}})P(p_{T_{\bar{\nu}}})P(p_{T_ {t}})P(p_{T_{t}}), \tag{4.2}\] Figure 4: Two-Dimensional distributions of \(t\bar{t}Y_{0^{+}}\) events: generator-level transverse momentum versus reconstructed transverse momentum for several particles (neutrino, top left; \(t\), top right; \(t\overline{t}\) system, bottom left; W\({}^{+}\) boson, bottom right), for the massless DM mediator. where the normalization factor \(1/p_{T_{\tau}}p_{T_{\bar{b}}}\) is applied to account for the energy losses due to radiation emission and effects from detector resolutions which will increase the reconstructed neutrino 4-momentum. This factor compensates for values of neutrino \(p_{T}\) which are too extreme by giving less weight to those solutions. If no solution is found, the event is discarded. 
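The following is a rough sketch of the solution-ranking step only: given candidate neutrino solutions of the mass constraints in equation (4.1), each candidate is weighted by a likelihood built from the transverse-momentum _p.d.f._s listed in the text (both neutrinos, both top quarks and the \(t\bar{t}\) system), and the candidate with the highest likelihood is kept. The _p.d.f._s here are flat placeholders (the real ones come from parton-level signal events), the solving of the quadratic system itself is not shown, and reading the normalization factor as \(1/(p_{T_{\nu}}p_{T_{\bar{\nu}}})\) follows the surrounding description of equation (4.2).

```python
import math

def transverse_momentum(p):
    """pT from a 3-momentum (px, py, pz)."""
    return math.hypot(p[0], p[1])

def likelihood(solution, pdfs):
    """Weight one candidate neutrino solution of the mass-constraint system.

    solution: dict with 3-momenta of 'nu', 'nubar', 't', 'tbar' and of the 'ttbar' system.
    pdfs:     dict of 1D p.d.f.s (callables of pT) for the same objects, taken in the real
              analysis from parton-level (with shower effects) signal events.
    The 1/(pT_nu * pT_nubar) factor down-weights solutions with extreme neutrino pT.
    """
    pt = {k: transverse_momentum(v) for k, v in solution.items()}
    norm = 1.0 / (pt["nu"] * pt["nubar"])
    prod = 1.0
    for key in ("nu", "nubar", "t", "tbar", "ttbar"):
        prod *= pdfs[key](pt[key])
    return norm * prod

def pick_best(solutions, pdfs):
    """Keep the candidate with the highest likelihood; an event with no solution is discarded."""
    return max(solutions, key=lambda s: likelihood(s, pdfs)) if solutions else None

# Toy usage with flat placeholder p.d.f.s and one made-up solution
flat = lambda x: 1.0
pdfs = {k: flat for k in ("nu", "nubar", "t", "tbar", "ttbar")}
sol = {"nu": (30.0, 5.0, 40.0), "nubar": (-20.0, 10.0, -15.0),
       "t": (80.0, 20.0, 150.0), "tbar": (-70.0, -35.0, -60.0),
       "ttbar": (10.0, -15.0, 90.0)}
best = pick_best([sol], pdfs)
print(likelihood(best, pdfs))
```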
For a scalar (pseudo-scalar) DM mediator mass of 0 GeV, we found that 75% (72%) of all events were correctly reconstructed, a number that matches quite well typical SM \(t\bar{t}\) analyses where such kinematic reconstruction is attempted. In Figure 4, we show the 2-dimensional \(p_{T}\) distributions of the neutrino (top left), top quark (top right), \(t\bar{t}\) system (bottom left) and \(W^{+}\) boson (bottom right), after kinematic reconstruction. We can see that the parton level (with shower effects) and the reconstructed kinematics are highly correlated for all particles or systems of particles. This implies that, after experimental effects are taken into account, it is still possible to reconstruct the \(t\bar{t}\) system without even trying to reconstruct the invisible DM mediator. This point is quite important since it opens up the possibility of studying the changes in angular distributions of \(t\bar{t}\) systems in the presence of a new invisible particle. ## 5 Results and Discussion In Figure 5, we show the \(b_{4}\) (left) and \(\Delta\phi_{\ell^{+}\ell^{-}}\) (right) distributions after event selection and kinematic reconstruction, for a reference luminosity of 100 fb\({}^{-1}\). All SM backgrounds: \(t\bar{t}\) (\(t\bar{t}c\bar{c}\) and \(t\bar{t}\)+light jets), \(t\bar{t}b\bar{b}\), \(t\bar{t}V\), \(t\bar{t}H\), single top quark production (\(t\)-, \(s\)- and \(Wt\)-channels), \(W/Z\)+jets, and diboson (\(WW,ZZ,WZ\)) events are represented. The \(t\bar{t}Y_{0}\) scalar and pseudo-scalar signals, with \(m_{Y_{0}}=0\) GeV, are shown as well, scaled by factors of 2 and 500 respectively, for convenience. As expected, the main SM background contribution is the \(t\bar{t}\) due to its similarity with the signal final state topology. All other backgrounds are essentially residual to the overall SM background contribution. Differences in the shapes of the background distributions can also be noticed when compared with the signals. For instance, in the \(b_{4}\) distribution and for the scalar signal (in brown), events tend to populate positive values more than negative values. This behaviour is inverted for the pseudo-scalar case (in orange). For the \(\Delta\phi_{\ell^{+}\ell^{-}}\) distribution (which is symmetric around zero), the scalar and pseudo-scalar signals populate differently its extreme regions. For completeness, we show in Figure 6 the missing transverse energy (\(E_{T}\)) distribution, which shows a quite similar behaviour to the SM background one, for both the CP-even and CP-odd signals. This means that missing \(E_{T}\) is not a good discriminating variable for this process which is a very important point to make in an analysis for a process with DM in the final state. The \(b_{4}\) and \(\Delta\phi_{\ell^{+}\ell^{-}}\) distributions were then used to set confidence level limits (CLs) on the Figure 5: The \(b_{4}\) (left) and \(\Delta\phi_{\ell^{+}\ell^{-}}\) (right) distributions for scalar and pseudo-scalar signals (dashed curves) together with the SM processes (full lines) with dileptonic final states, are represented after event selection and kinematic reconstruction (exp), for a reference luminosity of 100 fb\({}^{-1}\). Scaling factors are applied to the scalar and pseudo-scalar signals for convenience. 
Figure 6: Missing transverse energy (\(E_{T}\)) distributions for scalar and pseudo-scalar signals (dashed curves) together with the SM processes (full lines) with dileptonic final states, are represented after event selection and kinematic reconstruction (exp), for a reference luminosity of 100 fb\({}^{-1}\). Scaling factors are applied to the scalar and pseudo-scalar signals for convenience. exclusion of the SM with a new CP-mixed massless DM mediator particle, \(Y_{0}\), assuming the SM hypothesis as the null hypothesis (Scenario 1). The result is shown in Figure 7, for an integrated luminosity corresponding roughly to the RUN 2 luminosity plus the contribution from the first year of RUN 3, i.e., \(L\sim 200\) fb\({}^{-1}\). The CL limits are shown as contour plots in the \((g_{u_{33}}^{S},g_{u_{33}}^{P})\) 2D plane. It is clear that the CLs are identical for both observables, in this scenario. The CLs are also evaluated, for Scenario 1, for the full luminosity expected at the end of the High-Luminosisty phase of the LHC (HL-LHC), for \(L=3000\) fb\({}^{-1}\), using the \(\Delta\phi_{\ell^{+}\ell^{-}}\) distribution. The resulting 68% and 95% exclusion limits, for both luminosity values, are in Table 1. For \(L=3000\) fb\({}^{-1}\), we observe a substantial improvement by factors of 2 to 3, on the exclusion limits. Quite similar results where obtained when using a simple counting experiment. This leads us to conclude that the observable choice has little to no impact on the exclusion limits in this scenario and the DM mediator production cross section is, in itself, the dominant factor. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Exclusion Limits & \multicolumn{2}{|c|}{\(L\) = 200 fb\({}^{-1}\)} & \multicolumn{2}{|c|}{\(L\) = 3000 fb\({}^{-1}\)} \\ from \(\Delta\phi_{l^{+}l^{-}}\) & (68\% CL) & (95\% CL) & (68\% CL) & (95\% CL) \\ \hline \multirow{2}{*}{\(m_{Y_{0}}\) = 0 GeV} & \(g_{u_{33}}^{S}\in\) & [-0.067, +0.067] & [-0.125, +0.125] & [-0.022, +0.022] & [-0.052, +0.052] \\ & \(g_{u_{33}}^{P}\in\) & [-0.91, +0.91] & [-1.71, +1.71] & [-0.44, +0.44] & [-0.85, +0.85] \\ \hline \end{tabular} \end{table} Table 1: Exclusion limits for the \(t\bar{t}Y_{0}\) CP-couplings for fixed luminosities of 200 fb\({}^{-1}\) and 3000 fb\({}^{-1}\) of the SM plus \(Y_{0}\), assuming the SM as the null hypothesis. The limits are shown at confidence levels of 68% and 95%, for the \(\Delta\phi_{l^{+}l^{-}}\) variable. Figure 7: CLs for the exclusion of the SM with a massless DM mediator, \(Y_{0}\), with mixed scalar and pseudo-scalar couplings with the top quarks, against the SM as null hypothesis, for the \(\Delta\phi\) between the charged leptons, \(\Delta\phi_{\ell^{+}\ell^{-}}\) (left), and \(b_{4}\) (right) observables. Limits are shown for a luminosity of \(L=200\) fb\({}^{-1}\). For completeness, an alternative scenario was considered (Scenario 2), where we assumed as null hypothesis the SM plus a pure CP-even DM mediator of mass 0 GeV. The main goal of this scenario is to quantify how well could the mixed state be excluded from a pure CP-even mediator, in case of a discovery. This scenario was explored by using the \(\Delta\phi_{\ell^{+}\ell^{-}}\) distribution as well as the simple counting experiment used above for Scenario 1. The results are shown in Figure 8. Here, however, the difference between both distributions is quite clear, i.e., the 68% CLs are much worse in the latter case. 
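For reference, a minimal sketch of the one-bin ("simple counting experiment") cross-check mentioned above is given below. It evaluates the standard CLs ratio for a Poisson counting bin with expected signal \(s\) and background \(b\), with the "observed" count set to the background expectation, as for an expected limit. The yields are placeholders, and the full analysis of course uses the binned \(b_{4}\) and \(\Delta\phi_{\ell^{+}\ell^{-}}\) distributions rather than a single bin.

```python
import math

def poisson_cdf(n_obs, mu):
    """P(n <= n_obs) for a Poisson distribution with mean mu, evaluated in log space."""
    return sum(math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1)) for k in range(n_obs + 1))

def cls(n_obs, s, b):
    """CLs = CL_{s+b} / CL_b for a single counting bin with expected signal s and background b."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

# Placeholder yields: expected background b and signal s at some (gS, gP) point;
# the "observed" count is the background expectation (background-only pseudo-data).
b, s = 100.0, 25.0
n_obs = int(round(b))
value = cls(n_obs, s, b)
print(f"CLs = {value:.3f} -> {'excluded' if value < 0.05 else 'not excluded'} at 95% CL")
```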
This indicates that, in Scenario 2, in contrast with Scenario 1, the chosen observable will have an important impact on the exclusion limits. This also means that angular observables can indeed help on studying the CP-nature of DM mediators upon discovery. Lastly, to extend our results to a massive \(Y_{0}\) produced together with pairs of top quarks, additional signal events were generated with mediator masses (\(m_{Y_{0}}\)) set to 1, 10 and 125 GeV. The selection criteria and reconstruction procedure described in Section 4 were the same. The resulting \(\Delta\phi_{\ell^{+}\ell^{-}}\) distributions were then used to set CLs, in both Scenarios 1 and 2, for a luminosity of 200 fb\({}^{-1}\) and 3000 fb\({}^{-1}\) as before. The exclusion limits are depicted for all masses in Figure 9, for \(L=3000\) fb\({}^{-1}\). The respective 68% and 95% exclusion limits for Scenario 1 are shown in Table 2, for both \(L=200\) fb\({}^{-1}\) and \(L=3000\) fb\({}^{-1}\). As expected, exclusion limits worsen as masses increase in both Scenarios, since the \(t\bar{t}Y_{0}\) production cross section decreases for heavier \(Y_{0}\) masses, and improve with increasing luminosity values. Also, notice that the observable choice having a very small impact on the exclusion limits in Scenario 1 is true only for smaller DM mediator masses, where the \(Y_{0}\) production cross section is the dominant factor contributing to the exclusion limits. Figure 8: CLs for the exclusion of the SM with a massless DM mediator, \(Y_{0}\), with mixed scalar and pseudo-scalar couplings with the top quarks, against the SM plus a pure scalar DM mediator, for the \(\Delta\phi\) between the charged leptons (left). For completeness, the results for a simple event counting experiment (one Bin) is also shown (right). Limits are represented for a luminosity of \(L=200\) fb\({}^{-1}\). Figure 9: CLs for the exclusion of the SM with a massive DM mediator, \(Y_{0}\) (\(m_{\tilde{\nu}_{0}}=1\), 10 and 125 GeV in the top, middle, and bottom rows, respectively), with mixed scalar and pseudo-scalar couplings, against the SM as null hypothesis (left), for the \(\Delta\phi\) between the charged leptons. For completeness, the corresponding CL for the exclusion of the SM plus a mixed DM mediator against the SM plus a pure scalar DM mediator is also shown for \(\Delta\phi\) (right). Limits are shown for a luminosity corresponding to the full HL-LHC luminosity (\(L=3000\) fb\({}^{-1}\)). ## 6 Conclusions In this work we have explored the idea of using on-going searches and measurements, such as the analysis that leads to the measurement of the \(t\bar{t}\) production cross section at the LHC, to look for hidden DM particles in the final states. To that end, we present a new approach to fully reconstruct the kinematics of the \(t\bar{t}\) system present in \(t\bar{t}Y_{0}\) events produced at the LHC. Our study was performed within the context of simplified models of DM production at colliders. In this kinematic reconstruction, the missing transverse energy was fully attributed to the undetected neutrinos and no attempt to reconstruct the invisible DM mediator was tried. The approximations used in our work appear to be valid in a wider range of the DM mediator mass (starting at \(m_{Y_{0}}=0\) GeV), according to the resulting correlations between the generated and reconstructed kinematics. An example of these correlations is shown in Figure 4 for the \(m_{Y_{0}}=0\) GeV case. 
Moreover, we have checked that the pairing of the \(b\)-jets and charged leptons originated from the same parent top quark decay was very well achieved using several angular distributions and dedicated multivariate statistical methods. We have analyzed a significant number of angular observables, from which two of them were selected to illustrate our findings, the \(\Delta\phi_{\ell^{+}\ell^{-}}\) and \(b_{4}\) distributions. These observables were shown to be sensitive not only to DM mediators with different mass scales, but also with different CP-natures, in what concerns their couplings to heavy SM particles. These distributions were then used to set exclusion limits assuming the SM as the null hypothesis, for a luminosity of 200 fb\({}^{-1}\) and 3000 fb\({}^{-1}\), corresponding to the full luminosity of the High-Luminosity Phase of the LHC (HL-LHC). We also considered a benchmark scenario that takes into account, as null hypothesis, the SM plus a pure scalar DM mediator, in order to check how sensitive the analysis is to a possible CP-mixed nature of the DM mediator. We observe that, in the former case, the 95% CL limits using the \(\Delta\phi_{\ell^{+}\ell^{-}}\) angular distribution were \(g^{S}_{u_{33}}\in[-0.125,0.125]\) and \(g^{P}_{u_{33}}\in[-1.71,1.71]\) for a luminosity of 200 fb\({}^{-1}\), and \(g^{S}_{u_{33}}\in[-0.052,0.052]\) and \(g^{P}_{u_{33}}\in[-0.85,0.85]\) for a luminosity of 3000 fb\({}^{-1}\). We have also checked that a simple counting experiment can provide similar exclusion limits. In the second case, we observed that the use of angular distributions can improve the exclusion limits for the pseudo-scalar coupling by at least a factor of two, if we want to understand the CP-nature of the DM mediator couplings to SM particles. Finally, we \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Exclusion Limits & \multicolumn{2}{c|}{\(L=200\) fb\({}^{-1}\)} & \multicolumn{2}{c|}{\(L=3000\) fb\({}^{-1}\)} \\ from \(\Delta\phi_{l^{+}l^{-}}\) & (68\% CL) & (95\% CL) & (68\% CL) & (95\% CL) \\ \hline \(m_{Y_{0}}\) = 1 GeV & \(g^{S}_{u_{33}}\in\) & [-0.073, +0.073] & [-0.142, +0.142] & [-0.038, +0.038] & [-0.068, +0.068] \\ & \(g^{P}_{u_{33}}\in\) & [-0.89, +0.89] & [-1.65, +1.65] & [-0.43, +0.43] & [-0.83, +0.83] \\ \hline \(m_{Y_{0}}\) = 10 GeV & \(g^{S}_{u_{33}}\in\) & [-0.198, +0.198] & [-0.368, +0.372] & [-0.098, +0.098] & [-0.188, +0.188] \\ & \(g^{P}_{u_{33}}\in\) & [-0.87, +0.87] & [-1.65, +1.65] & [-0.44, +0.44] & [-0.83, +0.83] \\ \hline \(m_{Y_{0}}\) = 125 GeV & \(g^{S}_{u_{33}}\in\) & [-0.328,+0.322] & [-0.608, +0.612] & [-0.162, +0.162] & [-0.308, +0.308] \\ & \(g^{P}_{u_{33}}\in\) & [-1.48, +1.49] & [-2.77, +2.78] & [-0.75, +0.75] & [-1.41, +1.41] \\ \hline \end{tabular} \end{table} Table 2: Exclusion limits for the \(t\bar{t}Y_{0}\) CP-couplings for fixed luminosities of 200 fb\({}^{-1}\) and 3000 fb\({}^{-1}\) of the SM plus \(Y_{0}\), assuming the SM as null hypothesis, for \(Y_{0}\) masses of 1 GeV (top), 10 GeV (middle) and 125 GeV (bottom). The limits are shown at confidence levels of 68% and 95%, for the \(\Delta\phi_{l^{+}l^{-}}\) variable. extended our study to the case of a massive DM mediator with \(m_{Y_{0}}\) set to 1, 10 and 125 GeV. We observed that the exclusion limits set from the \(\Delta\phi_{\ell^{+}\ell^{-}}\) distributions got worse for increasing \(Y_{0}\) masses, since the \(t\bar{t}Y_{0}\) cross section decreases in that case. #### Acknowledgments R.C. and R.S. 
are partially supported by the Portuguese Foundation for Science and Technology (FCT) under Contracts no. UIDB/00618/2020, UIDP/00618/2020, CERN/FIS-PAR/0025/2021, CERN/FIS-PAR/0010/2021 and CERN/FIS-PAR/0021/2021. R.C. is additionally supported by FCT Grant No. 2020.08221.BD. A.O. is partially supported by FCT, under the Contract CERN/FIS-PAR/0037/2021. D.A. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257.
2304.00514
The impact of individual information exchange strategies on the distribution of social wealth
Wealth distribution is a complex and critical aspect of any society. Information exchange is considered to have played a role in shaping wealth distribution patterns, but the specific dynamic mechanism is still unclear. In this research, we used simulation-based methods to investigate the impact of different modes of information exchange on wealth distribution. We compared different combinations of information exchange strategies and moving strategies, analyzed their impact on wealth distribution using classic wealth distribution indicators such as the Gini coefficient. Our findings suggest that information exchange strategies have significant impact on wealth distribution and that promoting more equitable access to information and resources is crucial in building a just and equitable society for all.
Yang Shao, Hirokazu Atsumori, Tadayuki Matsumura, Kanako Esaki, Shunsuke Minusa, Hiroyuki Mizuno
2023-04-02T11:18:52Z
http://arxiv.org/abs/2304.00514v1
# The impact of individual information exchange strategies on the distribution of social wealth ###### Abstract Wealth distribution is a complex and critical aspect of any society. Information exchange is considered to have played a role in shaping wealth distribution patterns, but the specific dynamic mechanism is still unclear. In this research, we used simulation-based methods to investigate the impact of different modes of information exchange on wealth distribution. We compared different combinations of information exchange strategies and moving strategies, analyzed their impact on wealth distribution using classic wealth distribution indicators such as the Gini coefficient. Our findings suggest that information exchange strategies have significant impact on wealth distribution and that promoting more equitable access to information and resources is crucial in building a just and equitable society for all. ## 1 Introduction Wealth distribution is a crucial aspect of any society, and understanding how it forms is a complex and multifaceted issue. The importance of studying social wealth distribution lies not only in the fact that wealth is a crucial social resource, but also in the observation that statistical patterns similar to those found in wealth distribution can be identified in many other fields, such as the citation counts of research papers and the level of attention received by celebrities (Vermeulen, 2018). To study the distribution of wealth is essentially to study the distribution of a group of similar social successes represented by the distribution of wealth. One important factor to be considered in wealth distribution is the role of information exchange (Coelho et al., 2005; Hu et al., 2007). Information exchange plays a role in shaping wealth distribution patterns by allowing individuals and groups to acquire knowledge and resources that are necessary for building wealth. In recent years, advances in technology have made information exchange more accessible and efficient than ever before. The internet and social media platforms, in particular, have transformed the way people communicate and share information. These developments have created new opportunities for people to access information and resources and have made it easier for individuals and groups to collaborate and work together towards common goals. However, not all individuals and groups have equal access to information and resources (Nishi et al., 2015). In many societies, there are significant disparities in access to education, technology, and other resources that are necessary for participating in information exchange. These disparities can create barriers that prevent some individuals and groups from fully participating in the economy and building wealth. Moreover, information exchange can also have unintended consequences that can exacerbate existing inequalities. For example, individuals and groups that are already wealthy may have more resources to invest in acquiring and sharing information, which can further consolidate their wealth and power. Despite these challenges, information exchange remains a crucial factor in shaping wealth distribution patterns. In this report, we will explore the ways in which information exchange affects wealth distribution, and the various ways in which society can work to promote more equitable access to information and resources. 
By understanding the role of information exchange in wealth distribution, we can work towards building a more just and equitable society for all. In this research, we use simulation based method to focus on the investigation of the impact of various modes of information exchange on wealth distribution. ## 2 Related works Research methods for wealth distribution and information exchange mainly include theoretical analysis, empirical research, and numerical simulation. Theoretical analysis and empirical research mainly focus on the construction of theoretical models and analysis of empirical data, while numerical simulation can more intuitively demonstrate the impact of different information exchange strategies on wealth distribution. Therefore, in this research, we use numerical simulation to construct different information exchange strategies and analyze their impact on wealth distribution. In these numerical simulations, we use some classic models, such as the famous Barabasi-Albert Model (Albert and Barabasi, 2002) and Small-World Network Model (Watts and Strogatz, 1998), and make some improvements and extensions based on actual conditions. In each model, we adopt different information exchange strategies and compare their performance in wealth distribution. In addition, in this research, we also use some classic wealth distribution indicators, such as the Gini coefficient (Dorfman, 1979) and Lorenz curve (Gastwirth, 1971), to measure the distribution of wealth under different information exchange strategies. At the same time, we also consider some possible influencing factors, such as the initial distribution of node wealth and network structure, to more comprehensively evaluate the impact of different information exchange strategies on wealth distribution. Basic Principles of Wealth Distribution Wealth distribution is an important field of study in economics and sociology. Economists mainly focus on the disparity of wealth and inequality in the distribution of wealth, while sociologists are more concerned with issues of distribution fairness and social justice. Early research mainly focused on individual economic behavior and the role of market mechanisms, such as Adam Smith's theory of the "invisible hand" and David Ricardo's labor theory of value. However, in recent years, people have increasingly recognized the important role of social networks and information exchange in wealth distribution (Nishi et al., 2015). Social Networks and Wealth Distribution Social network theory holds that social networks have a significant impact on economic activity and the formation of wealth distribution. In these networks, behaviors such as information exchange, resource sharing, and cooperative transactions between individuals can promote the flow and distribution of wealth. For example, research by Albert-Laszlo Barabasi and Reka Albert (Barabasi and Albert, 1999) shows that social network structure plays an important role in the stability and equilibrium of wealth distribution. At the same time, numerous empirical studies have shown that behaviors such as information exchange and resource sharing between nodes in social networks are crucial for the flow and distribution of wealth. Information Exchange and Wealth Distribution Information exchange is a crucial factor in wealth distribution (Peress, 2004). Information asymmetry often leads to inequality in resource allocation and wealth distribution. 
For example, in financial markets, those with more information can typically earn higher returns, exacerbating wealth inequality. However, transparency and fairness in information exchange can also promote balanced and stable wealth distribution. In recent years, with the continuous development and application of information technology, various new information exchange strategies and platforms have emerged, providing new opportunities to promote the flow and distribution of wealth. Simulation Based Research With the development of modern economics, more and more researchers begin to pay attention to the use of simulation methods to study economic phenomena (Herz and Merz, 1998; Moiseev and Akhmadeev, 2017). The simulation method is a computer program-based simulation technique that helps researchers conduct experiments and analyzes in a virtual economic environment to explore possible economic changes and policy impacts. Compared with traditional statistical and empirical research methods, simulation methods have stronger theoretical and experimental control capabilities. They can provide researchers with a more realistic picture of the economic environment, allowing them to better simulate the effects of economic changes and policy implementation. In addition, simulation methods can help researchers better understand the nature of economic phenomena and provide more empirical evidence for decision-making. Simulation methods are widely used in economics. For example, they can be used to study the impact of tax policies on different groups (Altig et al., 2001) or to explore the stability of market regulatory mechanisms (Teufel et al., 2013). Pluchino et al. (2018) used a simulation-based method to show that in Western culture, the dominant elite paradigm, which is characterized by high competition, overlooks the influence of external factors in personal success stories. Success often depends not only on personal qualities, such as talent and intelligence, but also on random factors. The research suggests that it is not reasonable to allocate too much honor or resources to lucky people, and recommends policy measures to improve elite management, diversity of thought, and innovation. However, their research assumes that everyone is a completely independent individual and does not take into account the influence of social networks on opportunity creation (reflected in luck). Their research also did not take into account that in different social cultures, changes in social networks will bring different luck distributions to individuals in different positions. Referring to the aforementioned influence of social network and information exchange on wealth distribution (Nishi et al., 2015; Barabasi and Albert, 1999; Peress, 2004), we add the influence of social network and information exchange strategy to the simulation, so as to observe and analyze the influence of different information exchange strategies of individuals and different information exchange cultures of groups on the wealth distribution. ## 3 Simulation Methodology ### Simulation Setups Our simulations use the TvL model (Pluchino et al., 2018), an agent model based on a small number of simple assumptions that aims to describe the career evolution of a group of people under the influence of lucky or unlucky random events. 
We consider \(N\) individuals (denoted by blue dots in Fig.1), each of whom has a talent value of \(T_{i}\) (intelligence, skill, ability, etc.), which obeys a normal distribution around a given mean \(m_{T}\), with a standard deviation of \(\sigma_{T}\), randomly distributed in a square world. The world has a periodic boundary condition (i.e., has a ring topology) surrounded by a certain number of "motion" events (denoted by green and red circles in Fig.1), some of which are lucky and others unlucky (neutrals are not considered in the model events because they have no significant impact on the individual's life). In Fig.1, we represent these events as colored circles: lucky events are green and represent a relative percentage of the total number of events, \(pL\), and unlucky events are red and represent a percentage of the total number of events \(100-pL\). The total number of events \(N_{E}\) is uniformly distributed, but the distribution tends to be completely uniform only when \(N_{E}\) is infinity. In our simulation, \(N_{E}\) is proportional to \(N/2\). Thus, at the start of each simulation, different regions of the world randomly distribute more lucky or unlucky events, while other regions are more neutral. Moving circles further randomly within the square lattice (i.e., the world) does not change the fundamental feature of the model, which is that different individuals, regardless of their talents, face different numbers of lucky or unlucky events during their lifetime. For a single simulation run, consider a 50-year working life period (from 20 to 70 years) with time steps of half a year for a total of 100 steps. At the start of the simulation, all agents have the same amount of capital of the order \(C_{i}(0)=C(0)(i=1,...,N)\), denoting their starting wealth level. The purpose of this selection is to provide no initial advantage to anyone. While the talent of an agent is time independent, the capital of an agent varies over time. During the temporal evolution of the model, i.e., during the lifetime of the agent under consideration, all event circles are randomly moved around the world, and in doing so may intersect with some agent's positions. In each time step, each event circle is moved by a distance of length \(v\) in a random direction. The radius of event circles is \(r\). When the event circle and the individual change from disjoint to intersecting, we say that the event happened to this agent. After the intersection, the event circle will not disappear. According to this, at a given time step (i.e., every half year), there are 3 different possible actions for a given agent \(A_{i}\): 1. There is no event circle intercepting the location of agent \(A_{i}\): this means that no important events have occurred in the past half a year. Agent \(A_{i}\) does nothing. 2. The lucky event intercepts the position of agent \(A_{i}\): this means that a lucky event has occurred in the past six months (note that the generation of innovative ideas is considered to be a lucky event that occurs in the brain of the agent). Therefore, agent \(A_{i}\) will increase its capital by an order of magnitude with probability proportional to its talent \(T_{i}\). Only when \(rand[0,1]<T_{i}\), i.e., when the agent is smart enough to benefit from its luck, its capital order will become \(C_{i}(t)=C_{i}(t-1)+dC\). Here, \(dC\) is the average impact of each event on the magnitude of wealth. 3. 
Unfortunate event intercepts the position of agent \(A_{i}\): this means that an unfortunate event has occurred within the past half year; thus, agent \(A_{i}\) will reduce its capital by an order of magnitude, i.e., \(C_{i}(t)=C_{i}(t-1)-dC\). The above rules (including changing capital by orders of magnitude in the event of misfortune or luck, the probability of change being proportional to the talent of the agent, etc.) are simple and widely agreed because they are based on commonsense evidence that wealth in every person's life is usually characterized by very rapid growth or decline. In addition, these rules give highly talented people a significant advantage, because they can better exploit the opportunities that luck brings (including the ability to use the good ideas born in their brains). On the other hand, a car accident or sudden illness, Figure 1: An example of initial setup for simulations (\(N=1000\) individuals (agents), with different degrees of talent (intelligence, skills, etc.), are randomly located in their positions within a square world of \(200*200\) patches with periodic boundary conditions. During each simulation, which covers several dozens of years, they are exposed to a certain number \(N_{E}\) of lucky (green circles) and unlucky (red circles) events, which move across the world following random trajectories (random walks). In this example, \(N_{E}=500\).) for example, is always an unfortunate event, and individual talents play no role in avoiding such events. Thus, we can more effectively generalize the concept of "talent" as "any personal quality that increases the chances of being seized." In other words, the term "talent" broadly refers to intelligence, skill, cleverness, tenacity, determination, hard work, risk-taking, etc. Research by Pluchino et al. (2018) showed that having the advantage of great talent is a necessary but not sufficient condition for attaining very high wealth. However, the matter of building a social network and obtaining information from it can neither be included in individual talent nor simply abstracted into luck. Because of changes in social networks themselves and obtaining information in social networks, both of these things depend not only on the information exchange strategies of the other party (or more generally, on the information exchange strategies of the group culture). This is a process of mutual influence, and this process cannot be decoupled into two independent variables of talent and luck. So, in order to study the influence of social network and information exchange on wealth distribution, we added the following settings. 1. The action of agent exchanging information in social network is added. 2. The action that the agent moves in the space according to the obtained information is added. 3. The action of the agent to update its social network based on the information obtained is added. Therefore, compared with the original research (Pluchino et al., 2018), our research adds a social network through which agents can share their own information and get information from other agents, so that they can move to richer places (places with more lucky events). Each agent has the following 6 attributes: geographic location, talent value, wealth magnitude, social network links, social network update strategy, and geographic location movement strategy. Among them, the latter 3 items are newly added in our research. 
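To make the event rules above concrete, here is a minimal sketch of one time step of the baseline (TvL-style) capital dynamics. It implements only the lucky/unlucky rules for the capital magnitude: the moving event circles are replaced by flat per-agent event probabilities chosen arbitrarily here, and the social network and movement strategies added in this research are left out. All parameter names are illustrative.

```python
import random

def step(agents, p_lucky_event=0.04, p_unlucky_event=0.04, dC=0.5):
    """One half-year time step of the baseline capital dynamics.

    agents: list of dicts with keys 'talent' (T_i in [0, 1]) and 'capital' (C_i, an order of magnitude).
    In the full model the event probabilities come from intersections with moving event circles;
    here they are flat per-agent probabilities, for illustration only.
    """
    for a in agents:
        r = random.random()
        if r < p_lucky_event:
            # Lucky event: capital grows only if the agent is talented enough to exploit it.
            if random.random() < a["talent"]:
                a["capital"] += dC
        elif r < p_lucky_event + p_unlucky_event:
            # Unlucky event: capital always shrinks, talent plays no role.
            a["capital"] -= dC

# Toy run: N = 1000 agents, talent ~ N(0.6, 0.1), equal starting capital C(0) = 5, 100 half-year steps.
random.seed(0)
agents = [{"talent": min(max(random.gauss(0.6, 0.1), 0.0), 1.0), "capital": 5.0} for _ in range(1000)]
for _ in range(100):
    step(agents)
print(sorted(a["capital"] for a in agents)[-5:])  # five largest final capital magnitudes
```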
### Experiment Designs As a preliminary research, we limit the update rules of social networks to the following 4 types: 1. Random: There is a random social relationship between agents 2. Location: There is a social relationship between agents whose distance between geographic locations is within a radius \(R\). 3. Wealth: There is a social relationship between agents whose distance between wealth magnitudes is within \(nC\) times \(dC\). 4. Talent: There is a social relationship between agents whose distance between talent value is within \(nT\) times standard deviation of talent value \(T\). We limit mobile strategies to the following 3 types: 1. Random: Random movement 2. Highest: Follow the agent with the highest order of wealth in the social network 3. Average: Follow the weighted average position of the agent position in the social network according to the order of wealth We use mean wealth, wealth variance, and Gini coefficient 3 indicators to assess the macro-level results of the simulations. ## 4 Results and Analysis We set population \(N=1000\), number of events \(N_{E}=500\), percent ratio of lucky events \(pL=50\), mean of talents \(m_{T}=0.6\), deviation of talents \(\sigma_{T}=0.1\), radius of events \(r=1\), moving speed of events \(v=1\), initial magnitude of wealth \(C(0)=5\), altitude of wealth magnitude changing \(dC=0.5\), radius of location neighborhood \(R=5\), radius of wealth neighborhood \(nC=3\), radius of talent neighborhood \(nT=1\) in our simulations. Fig.2 to Fig.4 show the simulation results with Random social network and Random mobile strategy. Fig.5 to Fig.7 show the simulation results with Random social network and Highest mobile strategy. Fig.8 to Fig.10 show the simulation results Figure 4: The final distribution of wealth among the population (log-lin scale). Despite the normal distribution of talent, the distribution of wealth shows a strong double-exponential effect. (Random-Random) Figure 3: In the left figure, talent is plotted as a function of wealth magnitude: the wealthiest people are not the most talented. In the right figure, wealth magnitude is plotted as a function of talent: the wealthiest agents have talents only around average, while the most talented only have assets around their starting assets. (Random-Random) Figure 2: The frequency distributions of numbers of lucky (left) and unlucky (right) events are reported separately on a log-linear scale. It can be seen that both distributions are well fitted by an exponential distribution with similar negative exponents. (Random-Random) with Random social network and Average mobile strategy. Fig.11 to Fig.13 show the simulation results with Location social network and Random mobile strategy. Fig.14 to Fig.16 show the simulation results with Location social network and Highest mobile strategy. Fig.17 to Fig.19 show the simulation results with Location social network and Average mobile strategy. Fig.20 to Fig.22 show the simulation results with Wealth social network and Random mobile strategy. Fig.23 to Fig.25 show the simulation results with Wealth social network and Highest mobile strategy. Fig.26 to Fig.28 show the simulation results with Wealth social network and Average mobile strategy. Fig.29 to Fig.31 show the simulation results with Talent social network and Random mobile strategy. Fig.32 to Fig.34 show the simulation results with Talent social network and Highest mobile strategy. Fig.35 to Fig.37 show the simulation results with Talent social network and Average mobile strategy. 
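Since the Gini coefficient and the Lorenz curve are the main distributional indicators reported below, a short sketch of how they can be computed from the final wealth values of a run is given here. The exact formula variant used in the simulations is not specified in the text, so this is the standard discrete Gini coefficient on sorted values.

```python
def gini(values):
    """Gini coefficient of a list of non-negative wealth values.

    0 means perfect equality; values approaching 1 mean extreme concentration.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum_i i * x_(i) / (n * sum_i x_(i)) - (n + 1) / n, with i = 1..n over sorted values
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def lorenz_curve(values):
    """Cumulative wealth shares of the sorted population (points of the Lorenz curve)."""
    xs = sorted(values)
    total = sum(xs)
    shares, running = [], 0.0
    for x in xs:
        running += x
        shares.append(running / total)
    return shares

print(gini([1.0] * 10))                  # perfectly equal population -> 0.0
print(round(gini([0, 0, 0, 0, 10]), 3))  # one agent holds everything -> 0.8 for n = 5 (tends to 1 as n grows)
```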
Table 1 shows the mean of wealth magnitude, standard deviation of wealth magnitude and Gini coefficient under different combinations of social network and mobile strategies. The experimental data from the strategy combination revealed the impact of different strategies on the mean and standard deviation of wealth magnitude as well as the Gini coefficient. Looking at the mean of wealth magnitude, the wealth social network with random or average mobile strategy showed the highest mean wealth at 4.82, whereas the location social network and highest mobile strategy had a lower average wealth at 4.70. In terms of standard deviation, the random social network with highest mobile strategy had the highest standard deviation at 1.33, while the location social network with highest mobile strategy had the lowest at only 0.83. The random social network and random or highest mobile strategy also had the highest Gini coefficient at 0.13, while the location social network with highest mobile strategy had the lowest at 0.09. These data demonstrate that the random and wealth social network perform better in achieving high mean wealth, but tend to with high standard deviation and Gini coefficient. On the other hand, the location social network performs better in standard deviation and Gini coefficient, but limits the increase in mean wealth. The result of talent social network is somewhere in between. Also, highest mobile strategy tends to suppress the mean and standard deviation of wealth and the Gini coefficient at the same time, while average mobile strategy tends to increase the mean of wealth magnitude while increasing the standard deviation and Gini coefficient except for random social network scenarios. terms of standard deviation and Gini coefficient, but limits the increase in mean wealth. In contrast, the wealth social network performed better in achieving high mean wealth, but tent to have higher standard deviation and Gini coefficient. The talent social network failed somewhere in between. The choice of mobile strategy also had a significant impact on wealth distribution, with the highest mobile strategy suppressing both the mean and standard deviation of wealth and the Gini coefficient, while the average mobile strategy tent to increase the mean of wealth magnitude but also increase the standard deviation and Gini coefficient.
2310.13391
Learning Successor Features with Distributed Hebbian Temporal Memory
This paper presents a novel approach to address the challenge of online temporal memory learning for decision-making under uncertainty in non-stationary, partially observable environments. The proposed algorithm, Distributed Hebbian Temporal Memory (DHTM), is based on factor graph formalism and a multicomponent neuron model. DHTM aims to capture sequential data relationships and make cumulative predictions about future observations, forming Successor Features (SF). Inspired by neurophysiological models of the neocortex, the algorithm utilizes distributed representations, sparse transition matrices, and local Hebbian-like learning rules to overcome the instability and slow learning process of traditional temporal memory algorithms like RNN and HMM. Experimental results demonstrate that DHTM outperforms LSTM and a biologically inspired HMM-like algorithm, CSCG, in the case of non-stationary datasets. Our findings suggest that DHTM is a promising approach for addressing the challenges of online sequence learning and planning in dynamic environments.
Evgenii Dzhivelikian, Petr Kuderov, Aleksandr I. Panov
2023-10-20T10:03:14Z
http://arxiv.org/abs/2310.13391v3
# Learning Successor Representations with Distributed Hebbian Temporal Memory ###### Abstract This paper presents a novel approach to address the challenge of online hidden representation learning for decision-making under uncertainty in non-stationary, partially observable environments. The proposed algorithm, Distributed Hebbian Temporal Memory (DHTM), is based on factor graph formalism and a multi-component neuron model. DHTM aims to capture sequential data relationships and make cumulative predictions about future observations, forming Successor Representation (SR). Inspired by neurophysiological models of the neocortex, the algorithm utilizes distributed representations, sparse transition matrices, and local Hebbian-like learning rules to overcome the instability and slow learning process of traditional temporal memory algorithms like RNN and HMM. Experimental results demonstrate that DHTM outperforms classical LSTM and performs comparably to more advanced RNN-like algorithms, speeding up Temporal Difference learning for SR in changing environments. Additionally, we compare the SRs produced by DHTM to another biologically inspired HMM-like algorithm, CSCG. Our findings suggest that DHTM is a promising approach for addressing the challenges of online hidden representation learning in dynamic environments. ## 1 Introduction Modeling sequential data is one of the most important tasks in Artificial Intelligence as it has many applications, including decision-making and world models, natural language processing, conversational AI, time-series analysis, and video and music generation [29; 9; 8; 23; 31]. One of the classical approaches to modeling sequential data is forming a representation that stores and condenses the most relevant information about a sequence and finding a general transformation rule of this information through the dimension of time [27; 18; 28]. We refer to the class of algorithms that use this approach as Temporal Memory algorithms, as they essentially model the cognitive ability of complex living organisms to remember the experience and make future predictions based on this memory [21; 10; 11; 35]. This paper addresses the problem of hidden representation learning for decision-making under uncertainty, which can be formalized as agent Reinforcement Learning for a Partially Observable Markov Decision Process (POMDP) [38]. Inferring the hidden state in a partially observable environment is, in effect, a sequence modeling problem as it requires processing a sequence of observations to get enough information about hidden states. One of the most efficient representations of the hidden states for POMDP is the Successor Representation (SR) that disentangles hidden states and goals given by the reward function [6; 1]. Temporal Memory algorithms can be leveraged to make cumulative predictions about future observations for a given state to form its SR. The most prominent TM algorithms, like RNN and HMM, use backpropagation to capture data relationships, which is known for its instability due to recurrent non-linear derivatives. They also require having complete sequences of data at hand during training. Although the gradient vanishing problem can be partially circumvented in a way RWKV [36] or LRU [32] models do, the problem of online learning is still a viable topic. In contrast to HMM, RNN models and their descendants also lack a probabilistic theory foundation, which is beneficial for modeling sequences captured from stochastic environments [39; 41]. 
There is little research on TM models that can be used in fully online adaptable systems interacting with partially observable stochastic environments with access only to one sequence data point at a time, a prevalent case in Reinforcement Learning [22]. We propose a Distributed Hebbian Temporal Memory (DHTM) algorithm based on the factor graph formalism and multicomponent neuron model. The resulting graphical structure of our model is similar to one of the Factorial-HMM [15] but with a factor graph forming online during training. We also show that depending on the graphical structure, our TM can be viewed as an HMM version of either RNN or LRU regarding information propagation in time. An important feature of our model is that transition matrices for each factor are stored as different components (segments) of artificial neurons, which makes computations very efficient in the case of sparse transition matrices. Our TM forms sequence representations fully online and employs only local Hebbian-like learning rules [20; 4; 26], circumventing gradient drawbacks and making the learning process much faster than gradient methods. Some key ideas for our TM algorithm are inspired by neurophysiological models of the neocortex neural circuits and pyramidal neurons [12; 19; 34]. For example, emission matrices for random variables are fixed to resemble the columnar structure of the neocortex layers, which significantly lessens the number of trainable parameters, speeding up learning and leading to sparse transition matrices. Another example is using multicomponent neurons with dendritic segments as independent detectors of neuron pattern activity. Each dendritic segment can be viewed as a row of an HMM state transition matrix. Thus, we don't explicitly store large transition matrices, only their non-zero parts. The DHTM model notoriously fits Successor Representations in the Reinforcement Learning setup to speed up TD learning. The proposed TM is tested as a world model [16; 17] for an agent architecture, making decisions in a simple Pinball-like environment. Our algorithm outperforms a classic RNN (LSTM) and a more advanced RNN-like algorithm--RWKV, in the Successor Representation formation task. Our contribution in this work is the following: * We propose a distributed memory model DHTM based on the factor graph formalism and multicompartment neural model. * Our model stores sparse transition matrices in neural segments, which significantly lessens the number of trainable parameters and speeds up learning. * The DHTM learns fully online employing only local Hebbian-like rules. * The DHTM model fits Successor Representations in the RL setup to speed up TD learning. * We show that the proposed model can be viewed as an HMM version of RNN. * Tested as a world model for an RL agent architecture in a Pinball environment, DHTM outperforms LSTM and RWKV in the Successor Representation formation task. ## 2 Background This section provides basic information about some concepts necessary to follow the paper. ### Reinforcement Learning In this paper, we consider decision-making in a partially observable environment, which is usually formalized as Partially Observable Decision Process [38]. 
A POMDP is defined as a tuple \(\mathcal{M}=(S,A,P,R,O,D,\gamma)\), where \(S\)--state space, \(A\)--action space, \(P(s,a,s^{i})=Pr(s^{i}\,|\,s,a)\)--transition function, \(R(s)\)-reward function, O--observation space, \(D(a,s^{i},o)=Pr(o\mid a,s^{i})\)--sensor model and \(\gamma\in[1,0)\)--discount factor, given a transition \(s,a\to s^{i}\), where \(s\in S\), \(a\in A\), \(o\in O\). If \(S,A,O\) are finite, \(P,D\) can be viewed as real valued matrices, otherwise, they are conditional density functions. Here we consider deterministic rewards, which depend only on the current state, i.e. \(R(s):S\rightarrow\mathbb{R}\). The task of RL is to find a policy \(\pi(a\mid s):S\times A\rightarrow[0,1]\), which maximizes expected return \(G=\mathbb{E}[\sum_{t=0}^{T}\gamma^{l}R_{t}]\), where \(T\) is an episode length. For value based methods, it is convenient to define optimal policy via Q-function: \(Q^{\pi}(s_{t},a_{t})=\mathbb{E}[\sum_{l\geq t}\gamma^{l}R(s_{l+1})\mid s_{t},a_ {t},\pi]\). For an optimal value function \(Q^{*}\) an optimal policy can be defined as \(\pi(a\mid s)=\operatorname*{softmax}_{a}Q^{*}(s,a)\). ### Hidden Markov Model Partially observable Markov process can be approximated by a Hidden Markov model (HMM) with hidden state space \(H\) and observation space \(O\). \(O\) is the same as in \(\mathcal{M}\), but \(H\) generally is not equal \(S\). Variables \(H_{t}\) represent an unobservable (hidden) approximated state of the environment which evolves over time, and observable variables \(O_{t}\) represent observations that depend on the same time step state \(H_{t}\), and \(h_{t},o_{t}\) are corresponding values of this random variables. For the sake of simplicity, we suppose that actions are fully observable and information about them is included into \(H_{t}\) variables. For the process of length \(T\) with state values \(h_{1:T}=(h_{1},\ldots,h_{T})\) and \(o_{1:T}=(o_{1},\ldots,o_{T})\), the Markov property yields the following factorization of the generative model: \[p(o_{1:T},h_{1:T})=p(h_{1})\prod_{t=2}^{T}p(h_{t}|h_{t-1})\prod_{t=1}^{T}p(o_{ t}|h_{t}). \tag{1}\] In case of discrete hidden state, a time-independent stochastic transition matrix can be learned with Baum-Welch algorithm [2], a variant of Expectation Maximization algorithm. To compute the statistics for the expectation step, it employs the forward-backward algorithm, which is a special case of sum-product algorithm [24]. ### Successor Representation Successor Representations are such representations of hidden states from which we can linearly infer the state value equation 2 given the reward function [6]. Here, we assume observation and state spaces are discrete. \[V(h_{t}=i) =\mathrm{E}[\sum_{l=0}^{T}\gamma^{l}R_{t+l}\mid h_{t}=i]=\sum_{l= 0}^{T}\gamma^{l}\mathrm{E}[R_{t+l}\mid h_{t}=i]\] \[=\sum_{l=0}^{T}\gamma^{l}\sum_{j}p(o_{t+l}=j\mid h_{t}=i)R_{j}= \sum_{j}\sum_{l=0}^{T}\gamma^{l}p(o_{t+l}=j\mid h_{t}=i)R_{j}\] \[=\sum_{j}M_{ij}R_{j}, \tag{2}\] \[M_{ij} =\sum_{l=0}^{T}\gamma^{l}p(o_{t+l}=j\mid h_{t}=i), \tag{3}\] where \(\gamma\) is a discount factor and vector \(\mathrm{SR}(h=i)=\{M_{ij}\}_{j}\) is a Successor Representation of a state \(i\). \(R_{j}\) is a reward for observing the state \(j\). As shown in equation 3, SR can be computed by a TM that is able to predict future observations. TM algorithms effectively predict observations only for a finite time horizon \(T\). 
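As a small numerical illustration of equations (2)-(3), the sketch below computes the SR matrix and the implied values for a toy, fully known chain. The hidden-state transition matrix \(P\), emission matrix \(D\) and reward vector are made up, and the infinite-horizon closed form \(M=(I-\gamma P)^{-1}D\) is used in place of the finite-horizon sum.

```python
import numpy as np

# Toy chain: 3 hidden states, 3 observations (all numbers are made up).
P = np.array([[0.8, 0.2, 0.0],    # p(h' | h): hidden-state transition matrix
              [0.0, 0.7, 0.3],
              [0.3, 0.0, 0.7]])
D = np.eye(3)                      # p(o | h): here each hidden state emits its own observation
R = np.array([0.0, 0.0, 1.0])      # reward attached to each observation
gamma = 0.9

# Closed-form SR over observations: M = sum_l gamma^l P^l D = (I - gamma P)^{-1} D
M = np.linalg.inv(np.eye(3) - gamma * P) @ D

# State values follow linearly from the SR and the reward vector: V = M R (equation 2)
V = M @ R
print(np.round(M, 3))
print(np.round(V, 3))
```

When the transition model is not known in advance and must be learned online, this closed form is unavailable, which motivates the TD-style update for the SR introduced next.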
Therefore, in order to learn SR, we will employ a technique similar to TD learning in standard RL: \[\delta_{ij}=\sum_{l=0}^{T}\gamma^{l}p(o_{t+l}=j\mid h_{t}=i))+ \gamma^{T+1}\sum_{k}M_{kj}p(h_{t+T+1}=k\mid h_{t}=i)-M_{ij}, \tag{4}\] \[M_{ij}\gets M_{ij}+\alpha\delta_{ij}, \tag{5}\] where \(\alpha\in(0,1)\) is a learning rate, \(\delta_{ij}\)--TD error for SR. In partially observable environments, however, exact state values are not known, therefore we operate with state distributions or so-called belief states [38], which are inferred from observations. In that case, state value and SR are functions of hidden state variable distribution: \[V[p(h_{t}\mid o_{t})] =\mathrm{E}_{h_{t}\sim p(h_{t}\mid o_{t})}[V(h_{t})], \tag{6}\] \[M[p(h_{t}\mid o_{t}),o=j] =\sum_{i}p(h_{t}=i\mid o_{t})M_{ij}, \tag{7}\] which also affects update rule equation 5. ### Sparse Distributed Representations In our work, we design our model to operate with sparse distributed representations (SDRs) to reflect the spatiotemporal property of cortical network activity [37]. In the discrete time case, SDR is a sparse binary vector in a high-dimensional space. To encode observed dense binary patterns to SDRs, we use a biologically plausible k-WTA (k-winners take all) neural network algorithm called spatial pooler with a Hebbian-like unsupervised learning method (see details in Appendix A.1). ## 3 Methods This section describes our TM model and its usage for SR formation. We also outline the agent architecture that we use in our RL tasks. We use the same agent architecture and encoder-decoder pipeline for every TM we compare. ### Distributed Hebbian Temporal Memory We base our TM algorithm on the sum-product belief propagation algorithm in a factor graph shown in Figure 1A. Analogously to Factorial-HMM [14] we divide the hidden space \(H\) into subspaces \(H^{k}\) in our model of the environment. There are three sets of random variables (RV) in the model: \(H^{i}_{t-1}\)--hidden random variables representing hidden states from the previous time step (context), \(H^{k}_{t}\)--hidden random variables for the current time step, and \(O^{k}_{t}\)--observable random variables. All variables have a categorical distribution. RV state values are denoted as corresponding lowercase letters, i.e., \(h^{i}_{t-1}\), \(h^{k}_{t}\), \(o^{k}_{t}\). For each \(H^{k}_{t}\), we use a separate graphical model, considering them as independent to make our TM algorithm computationally efficient. However, hidden variables of the same time step are statistically interdependent in practice. We introduce their interdependence through a segment computation trick that goes beyond the standard sum-product algorithm. The model also has three types of factors: \(M^{i}_{t-1}\)--messages from previous time steps, \(F^{k}_{c}\)--context factor Figure 1: A. A factor graph for a single hidden variable and its context. Each hidden variable is considered to be independent of the others. \(H^{k}_{t}\) and \(O^{k}_{t}\) are hidden and observable random variables for a time step \(t\). \(F^{k}_{c}\) and \(F^{k}_{e}\) are context and emission factors for the corresponding variables. Unary factors \(M^{i}_{t-1}\) called messages represent information about previous time steps. B. Segment computations: estimate of segment activation probability and log-factor value \(f_{t}\) determine effective cell excitation by segment \(l\). C. 
We also assume that messages \(M_{t-1}^{i}\) carry posterior information from the time step \(t-1\); therefore, we do not depict observable variables for previous time steps. The main routine of the algorithm is to estimate the distributions of the current hidden state variables given by equation 8, the computational flow of which is schematically depicted in Figure 1-B: \[p(h_{t}^{k})\propto\sum_{\{h_{t-1}^{i}:i\in\omega_{k}\}}\prod_{i\in\omega_{k}}M_{t-1}^{i}(h_{t-1}^{i})F_{c}^{k}(h_{t}^{k},\{h_{t-1}^{i}:i\in\omega_{k}\}), \tag{8}\] where \(\omega_{k}=\{i_{1},\ldots,i_{n}\}\) is the set of previous time step RV indexes included in the factor \(F_{c}^{k}\), and \((n+1)\) is the factor size. For computational purposes, we translate the problem to a neural network architecture with Hebbian-like learning. As can be seen from Figure 1-B, every RV can be viewed as a set of spiking neurons representing the RV's states, that is, \(p(h_{t}^{k})=p(c_{t}^{j}=1)\), where \(j\) is the index of the neuron corresponding to the state \(h_{t}^{k}\). Cell activity is binary, \(c_{t}^{j}\in\{0,1\}\) (spike/no-spike), and the probability may be interpreted as a spiking rate. Factors \(F_{c}^{k}\) and \(M_{t-1}^{i}\) can be represented as vectors, whose elements are factor values for all possible combinations of the RV states included in the factor. Let us denote the elements of these vectors as \(f_{l}\) and \(m_{u}\) correspondingly, where \(l\) corresponds to a particular combination of \(k,h_{t}^{k},h_{t-1}^{i_{1}},\ldots,h_{t-1}^{i_{n}}\) state values and \(u\) indexes all neurons representing states of previous time step RVs. Drawing inspiration from biological neural networks, we introduce the notion of a segment that links together the factor value \(f_{l}\), the computational graph shown in Figure 1B, and the excitation \(E_{l}\) induced by the segment \(l\) on the cell it is attached to. A segment is a computational unit that detects a particular context state represented by its presynaptic cells. Analogously, dendritic segments of biological neurons are known to be coincidence detectors of their synaptic input [40]. The segment is active, i.e., \(s_{l}=1\), if all its presynaptic cells are active; otherwise, \(s_{l}=0\). Computationally, a segment transmits its factor value \(f_{l}\) to the cell it is attached to if the context matches the corresponding state combination. We can now rewrite equation 8 as follows: \[p(h_{t}^{k})\propto\sum_{l\in\operatorname{seg}(j)}L_{l}f_{l}^{k}, \tag{9}\] where \(L_{l}=\prod_{u\in\operatorname{rec}(l)}m_{u}\) is the segment's likelihood (as long as the messages are normalized), \(\operatorname{seg}(j)\) is the set of indexes of segments attached to cell \(j\), and \(\operatorname{rec}(l)\) is the set of indexes of cells that constitute the receptive field of the segment with index \(l\) (its presynaptic cells). Initially, all factor entries are zero, meaning cells have no segments. As learning proceeds, new segments are grown. As seen from equation 9, this computational model benefits from sparse factor value vectors, since its complexity grows linearly with the number of non-zero components.
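A minimal sketch of equation 9 for a single variable: each segment stores a factor value \(f_{l}\) and a set of presynaptic cells, its likelihood \(L_{l}\) is the product of the incoming (normalized) messages, and a cell's unnormalized belief is the sum over the segments attached to it. The cell and segment counts, factor values, and messages below are hypothetical.

```python
import numpy as np

messages = np.array([0.7, 0.3, 0.1, 0.9, 0.5, 0.5])  # m_u for previous-step cells

# Each segment: (cell it is attached to, factor value f_l, presynaptic cells rec(l)).
segments = [
    (0, 2.0, [0, 3]),   # detects the context "cells 0 and 3 active"
    (0, 0.5, [1, 4]),
    (1, 1.5, [3, 5]),
]

n_cells = 2
belief = np.zeros(n_cells)
for cell, f_l, rec in segments:
    L_l = np.prod(messages[rec])   # segment likelihood: product of incoming messages
    belief[cell] += L_l * f_l      # equation 9: sum over segments attached to the cell

belief /= belief.sum()             # normalize to obtain p(h_t^k)
print(belief)
```

The cost of this loop scales with the number of stored segments, i.e., with the number of non-zero factor entries.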
This is usually the case in our model due to one-step Monte-Carlo learning and the specific form of the emission factors \(F_{e}^{k}\): \[F_{e}^{k}(h_{t}^{k},o_{t}^{k})=I[h_{t}^{k}\in\operatorname{col}(o_{t}^{k})], \tag{10}\] where \(I\) is the indicator function and \(\operatorname{col}(o_{t}^{k})\) is the set of hidden states connected to the observational state \(o_{t}^{k}\), which forms a column. This form of the emission factor is inspired by the presumably columnar structure of the neocortex and was shown to induce a sparse transition matrix in an HMM [13]. In this segment model, which results from the sum-product algorithm, the likelihood is calculated under the assumption that presynaptic cells are independent. However, this is usually not the case for sparse factors. To take the interdependence into account, we use the following equation for the segment log-likelihood: \[\log L_{l}=\log\sum_{u\in\operatorname{rec}(l)}w_{ul}m_{u}+\sum_{u\in\operatorname{rec}(l)}(1-w_{ul})\log m_{u}-\log n_{l}, \tag{11}\] where \(w_{ul}\) is the synapse efficiency, or the specificity of neuron \(u\) for the segment, so that \(w_{ul}=p(s_{l}=1|c_{t-1}^{u}=1)\), and \(n_{l}\) is the number of cells in the segment's receptive field. The idea that underlies the formula is to interpolate between two extreme cases:

* \(p(s_{l}=1|c_{t-1}^{u}=1)\to 1\) for all \(u\), which means that all cells in the receptive field are dependent and are part of one cluster, i.e., they fire together. In that case, it should be \(p(s_{l})=m_{u}\) for any \(u\), but we also reduce the prediction variance by averaging over different \(u\).
* \(p(s_{l}=1|c_{t-1}^{u}=1)\to 0\) for all \(u\), which means that the presynaptic cells do not form a cluster. In that case, the segment activation probability is just the product of the activation probabilities of the individual cells.

The resulting equation for belief propagation in DHTM is the following: \[p(h_{t}^{k})=p(c_{t}^{j}=1) =\operatorname*{softmax}_{j\in\operatorname*{cells}[H_{t}^{k}]}(\max_{l\in seg(j)}(E_{l})), \tag{12}\] \[E_{l} =\log f_{l}+\log L_{l}, \tag{13}\] where \(\operatorname*{cells}[H_{t}^{k}]\) is the set of indexes of cells that represent states of the variable \(H_{t}^{k}\). Here, we also approximate the logarithmic sum with the \(\max\) operation, inspired by the neurophysiological model of segment aggregation by a cell [40]. The next step after computing the parameters of the \(p(h_{t}^{k})\) distribution is to incorporate information about current observations: \(p(h_{t}^{k}\mid o_{t}^{k})\propto p(h_{t}^{k})I[h_{t}^{k}\in\operatorname*{col}(o_{t}^{k})]\). After that, the learning step is performed. The step that closes the loop of our TM algorithm is to assign the posterior for the current step \(p(h_{t}^{k}\mid o_{t}^{k})\) to \(M_{t-1}^{i}\). DHTM learns the \(f_{l}\) and \(w_{ul}\) weights by Monte-Carlo Hebbian-like updates. First, \(h_{t-1}^{i}\) and \(h_{t}^{k}\) are sampled from their posterior distributions, \(p(h_{t-1}^{i}\mid o_{t-1}^{i})\propto M_{t-1}^{i}\) and \(p(h_{t}^{k}\mid o_{t}^{k})\) respectively. Then \(f_{l}\) is updated according to the segment's activity \(s_{l}\) and its cell's activity \(c_{t}^{j}\), so that \(f_{l}\) is proportional to the number of coincidences \(s_{l}=c_{t}^{j}=1\) in the recent past, i.e., the cell and its segment are active at the same time step, as shown in Figure 1C. This is similar to the Baum-Welch update rule [2] for the transition matrix in an HMM, which, in effect, counts transitions from one state to another; in our case, however, the previous state (context) is represented by a group of RVs.
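The sketch below combines equations 11-13 for the cells of one variable: the segment log-likelihood interpolated by the synapse efficiencies \(w_{ul}\), the excitation \(E_{l}=\log f_{l}+\log L_{l}\), and a softmax over the maximal excitation per cell. All sizes, factor values, messages, and efficiencies are hypothetical.

```python
import numpy as np

def segment_log_likelihood(m, w):
    # Equation 11: interpolate between "presynaptic cells fire together"
    # (weighted average of messages) and "independent cells" (product of messages).
    n = len(m)
    return np.log(np.sum(w * m)) + np.sum((1.0 - w) * np.log(m)) - np.log(n)

def cell_beliefs(cells_segments):
    # cells_segments[j] is a list of (f_l, messages, efficiencies) for cell j's segments.
    excitations = []
    for segs in cells_segments:
        # Equation 13 per segment; segments are aggregated with max (log-sum approximation).
        E = [np.log(f_l) + segment_log_likelihood(m, w) for f_l, m, w in segs]
        excitations.append(max(E))
    e = np.array(excitations)
    z = np.exp(e - e.max())
    return z / z.sum()                       # equation 12: softmax over cells

# Two cells with hypothetical segments.
cells = [
    [(2.0, np.array([0.7, 0.9]), np.array([0.8, 0.8]))],
    [(1.5, np.array([0.3, 0.5]), np.array([0.2, 0.9])),
     (0.5, np.array([0.9, 0.1]), np.array([0.5, 0.5]))],
]
print(cell_beliefs(cells))
```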
Weights \(w_{ul}\) are also updated by the Hebbian rule to reflect the specificity of a presynaptic \(u\) for activating a segment \(l\). If the activities of the presynaptic cell and its segment coincide, we increase \(w_{ul}\); otherwise, \(w_{ul}\) is decreased. ### Agent Architecture We test DHTM as a part of an agent in the RL task. The agent architecture is the same for all TM algorithms we compare to but with different memory modules. The agent consists of a memory model, SR representations, and an observation reward function. The memory model aims to speed up SR learning by predicting cumulative future distributions of observation variables according to equation 4. As shown in equation 5, SR representations are learned to estimate state value. The observation reward function is also learned during interaction with the environment and, combined with SR representations, is used to estimate the action value function. The agent training procedure is outlined in Algorithm 1. For each episode, the memory state is reset to a fixed initial message with RESET_MEMORY() and action variable is initialized with null value. An observation image returned by an environment (obs) is first preprocessed to get spiking events, mimicking a simple event-based camera with a floating threshold determined from the average difference between the current and previous step image intensities. The resulting events are encoded to SDRs with a biologically inspired spatial pooling encoder described in Appendix A.1. In OBSERVE() routine, the memory and SR learning happens as described in Section 3.1. An agent learns associations between observation states and rewards in line 8 and transforms them into observation priors \(p(o_{t}^{k})\): \[r_{i}^{k} \gets r_{i}^{k}+\alpha I[o_{t}^{k}=i](R_{t}-r_{i}^{k}) \tag{14}\] \[p(o_{t}^{k}) =\operatorname*{softmax}_{\mathrm{i}}(\lambda r_{i}^{k}) \tag{15}\] where \(\alpha\) is a learning rate, \(R_{t}\)--a reward for the current time step, \(\lambda\)--a reward scaling value. The logarithm of the observation prior serves as the value function used to estimate true state value, as shown in equation 2. An agent has a softmax policy over predicted values: \(\pi(a_{t}\mid o_{t})=\operatorname*{softmax}(V[p(h_{t+1}\mid o_{t},a_{t})])\). We use the model to predict the hidden state distribution for every action in the next timestep \(t+1\) and then estimate its value according to equation 6. An episode continues until the terminal state or maximum steps are reached. An agent shares its memory weights between episodes. ## 4 Experiments We test our agent in a pinball-like environment where SRs are easy to interpret. This section shows how different memory models affect SR learning and an agent's adaptability. In our work, we compare the proposed DHTM model with LSTM [21], RWKV [36], and the factorial version of CHMM [13], which is several CHMMs trained in parallel independently. Pinball is a partially observable environment developed in the Godot Game Engine [3]. A ball that can move in the surface's 2D space and a surface with borders make up the environment (see Figure 2-A). Force fields depicted as circles introduce stochasticity to the environment as they deflect the ball in random directions. An agent can apply arbitrary momentum to a ball. For each time step, the environment returns an image of the top view of the table as an observation and a reward. The agent gets the reward by entering force fields. 
Each force field can be configured to pass a specific reward value and to terminate an episode. For our experiments, we use the two configurations of the Pinball environment shown in Figure 2-B. We narrow the action space to three momentum vectors: vertical, 30 degrees left and 30 degrees right from the vertical axis. At each time step, the agent gets a small negative reward, and a large positive reward if the ball enters the force field in the center. The episode finishes when the ball enters the force field or the maximum number of steps is reached. Each trial is run for 500 episodes, each a maximum of 15 steps long, and we average the results over three trials for each parameter set and memory model.

Figure 2: Examples of observations from the testing environment. A. Pinball environment with a viewing area of 50x36x1. On the right is the raw image, and on the left is the preprocessed binarized image of the ball's movement. B. The experiments used two different setups. The left image shows a setup in which the target is not blocked. The image on the right depicts the setup with the target obscured by a random field that deflects the ball perpendicular to its movement direction.

We first test the accuracy of five-step SR representations by measuring their pseudo-surprise, which is the surprise computed for observed states at different time steps after the SR was predicted, with respect to the normalized SR. The lower the surprise, the better the SR's quality. To form the SR, we accumulate predictions of observations for five steps forward with a discount factor \(\gamma=0.99\). As can be seen from Figure 3, the SRs produced by our memory model (dhtm) give lower surprise than the SRs of LSTM (lstm) and RWKV (rwkv), and are on par with the SRs produced by the Factorial version of CHMM (fchmm). Then, we test how the number of prediction steps affects the agent's adaptability in the Pinball environment. In the first 500 episodes, the agent is trained to reach the target in the center, as shown in Figure 2-B; then the target is blocked by a random force field that applies force in a direction perpendicular to the ball's movement. That is, the previous optimal strategy becomes useless. The results show that an agent that uses five prediction steps during n-step TD learning of the SR adapts faster to the changes in the environment in comparison to 1-step TD learning of the SR, as seen from Figure 4.

Figure 3: Surprise comparison for various memory models including DHTM, LSTM, RWKV, and Factorial CHMM. The SRs generated by normalized five-step prediction models are used to calculate surprise for three future time steps.

Figure 4: Comparison of the agent's adaptability during changes in the environment with different prediction steps during n-step TD learning of the SR. At the 500th episode, the environment changes its configuration, as shown in Figure 2.

## 5 Conclusion

In this paper, we introduce DHTM, a novel probabilistic Factorial-HMM-like algorithm for learning an observation sequence model in stochastic environments, which uses local Hebbian-like learning rules and transition matrix sparsification inspired by biologically plausible multicomponent neural models. We show that our memory model can quickly learn the observation sequence representation and the transition dynamics. DHTM produces more accurate n-step Successor Representations than LSTM and RWKV, which speeds up n-step TD learning of the SR in Reinforcement Learning tasks with a changing environment.
#### Author Contributions ED developed the theoretical foundations of the memory model and its software implementation, conducted experiments, and prepared the text of the article. PK developed the encoder and decoder, prepared and configured the LSTM and RWKV baselines. AP advised and supervised the work of the team. PK and AP also helped with writing the article.
2308.09183
RatGPT: Turning online LLMs into Proxies for Malware Attacks
The evolution of Generative AI and the capabilities of the newly released Large Language Models (LLMs) open new opportunities in software engineering. However, they also lead to new challenges in cybersecurity. Recently, researchers have shown the possibilities of using LLMs such as ChatGPT to generate malicious content that can directly be exploited or guide inexperienced hackers to weaponize tools and code. These studies covered scenarios that still require the attacker to be in the middle of the loop. In this study, we leverage openly available plugins and use an LLM as proxy between the attacker and the victim. We deliver a proof-of-concept where ChatGPT is used for the dissemination of malicious software while evading detection, alongside establishing the communication to a command and control (C2) server to receive commands to interact with a victim's system. Finally, we present the general approach as well as essential elements in order to stay undetected and make the attack a success. This proof-of-concept highlights significant cybersecurity issues with openly available plugins and LLMs, which require the development of security guidelines, controls, and mitigation strategies.
Mika Beckerich, Laura Plein, Sergio Coronado
2023-08-17T20:54:39Z
http://arxiv.org/abs/2308.09183v2
# RatGPT: Turning online LLMs into Proxies for Malware Attacks

###### Abstract.
The evolution of Generative AI and the capabilities of the newly released Large Language Models (LLMs) open new opportunities in software engineering. However, they also lead to new challenges in cybersecurity. Recently, researchers have shown the possibilities of using LLMs such as ChatGPT to generate malicious content that can directly be exploited or guide inexperienced hackers to weaponize tools and code. These studies covered scenarios that still require the attacker to be in the middle of the loop. In this study, we leverage openly available plugins and use an LLM as a proxy between the attacker and the victim. We deliver a proof-of-concept where ChatGPT is used for the dissemination of malicious software while evading detection, alongside establishing the communication to a command and control (C2) server to receive commands to interact with a victim's system. Finally, we present the general approach as well as essential elements in order to stay undetected and make the attack a success. This proof-of-concept highlights significant cybersecurity issues with openly available plugins and LLMs, which require the development of security guidelines, controls, and mitigation strategies.

ChatGPT, Cybersecurity, Command and control

## 2. Approach

The goal is to show the feasibility of a harmless executable that can autonomously bootstrap and weaponize itself with LLM-generated code and communicate with the Command and Control (C2) server of an attacker by using web-accessible plugins of certain LLMs as a proxy. In this section, we outline the system's main components.
Prior to this experimental design, we need to define _"vulnerable"_ plugins. In our approach, first, we discuss "Prompt Initialisation," where we set up the LLM to be less restrictive so that the system can function properly. Next, we explore "IP Address Generation," explaining how we generate, via ChatGPT, the IP address the payload connects to. Then, we look at "Payload Generation," where we explain how the payload itself is generated to weaponize the initially harmless executable the victim executes. Lastly, we look at "Communication with the C2 Server," describing how the different parts of the system communicate with each other to demonstrate the use of ChatGPT as a proxy between the victim and the attacker. These main features of this demonstration are detailed in the following chapters.

### Plugin Vulnerability Criteria

For this study, we define a plugin as "vulnerable" to this attack only if it fulfils the following criteria: 1. Be able to browse the web. 2. Be able to go to any user-specified URL. While we specifically used plugins that access and process the content of a web page (e.g., summarise, extract keywords, etc.), modifying the attack to use plugins that read the contents of specific file types hosted on web servers (e.g., PDF files) should be trivial. In that case, the attacker needs to store the commands in these file types, which are then read by the plugins.

### Prompt Initialisation

Many public LLMs, such as those from OpenAI [3], have implemented safeguards to detect, to some degree, whether a supplied prompt is intended to be used for harmful purposes. Therefore, we needed to trick these systems into allowing potentially harmful prompts to be evaluated anyway, commonly known as "jailbreaking." In this implementation, we opted to use a modified version of the DAN jailbreak [13], which only outputs the DAN version of its response and surrounds text that is not code with `"""`. This step is necessary since the code generated by the LLMs is evaluated using the exec() function in Python, which does not allow non-Python keywords in its input. Surrounding the text with `"""` effectively creates code comments, which the interpreter ignores.

### IP Address Generation

To avoid hard-coding the IP address of the C2 server inside the payload, and to counter naive string analysis approaches that extract critical malware properties, the IP address is generated dynamically with the help of the LLM. The individual parts of the IP address in dotted-decimal notation are generated with individual prompts and are concatenated in the end. Initial tests were conducted to generate the individual parts with mathematical prompts, e.g.:

> What is the full value of the Fibonacci Sequence? Expected: 34. Generated (multiple attempts): 47, 38, 39, 21.

However, the answer generated was not deterministic and was considered too unreliable. Therefore, experiments using historical facts were conducted:

> In what way the number of keyword? Expected: 1932. Generated: 1932.

This proved to be very stable and is currently the method in use. To extract the numbers from the output produced by ChatGPT, the prompt had to be adjusted such that ChatGPT returns only the numbers. However, in some cases, the DAN jailbreak still added its name to the output, which we filtered using Python string manipulation.

### Payload Generation

In the current version, a victim receives a seemingly innocent executable and attempts to execute it. When the executable is run, multiple prompts are generated and sent to a public LLM.
These prompts include instructions on how to build the IP address of the C2 server, how to generate Python code for the functions the payload should respond to (e.g., shell command execution, uploading/downloading files, listing the files in the current working directory, etc.), and how to set up the connection to the C2 server. The responses from the LLM are then combined and evaluated by the interpreter using the exec() function. Consequently, the executable has been weaponized with external code that resides in memory and can now establish a connection to the C2 server and evaluate commands received from it. Another characteristic of this approach is that we effectively created in-memory malware, which is harder to analyse by static analysis methods. Figure 1. Payload generation flow. ### Communication with the C2 Server In the first version, we wanted to make use of the web connectivity exposed by ChatGPT on GPT-4, which can be used to get content from any user-supplied URL as demonstrated by Greshake et al. [9], where the response could be controlled by modifying the website's content, which the attacker controls when visited by the LLM. Using this functionality, the payload could communicate with the C2 server by crafting prompts for ChatGPT to query information from the attacker-controlled web server. Since only HTTP GET requests are possible, the query has to be encoded in the URL path. However, OpenAI deactivated its web browsing feature at the beginning of July, closing this opportunity. Since the introduction of 'plugins', users can access and interact with third-party information and services through ChatGPT. While many plugins are specific to their services, very few plugins are capable of browsing the web by themselves. Using such web browsing plugins, we can access any user-defined website, including a C2 web server, that receives and sends commands to active payloads. Figure 2 summarises the main steps that happen during the execution of the executable. When the victim executes the harmless payload, it gets weaponized by LLM-generated code as described in subsection 2.4. This payload periodically sends website lookup requests to the LLM web connectivity feature (in the case of ChatGPT, plugins can browse the web) in the form of prompts (e.g. 'What are the news on http://attacker_address/?"). The web connectivity feature browses to the website supplied in the prompt and reads the content, which might contain a command that the attacker wrote on this website. This result gets returned as a response to the query to the victim's executable. The weaponized executable.interprets the command and executes the corresponding handler associated with it. When a victim's executable wants to transmit information back to the attacker, it can perform another query to the LLM by appending the data (either encoded in Base64 or ASCII) to the URL (e.g., 'What are the news on http://attacker_address/ZXh0cmFjdGVkX2RhdGEEK?", where ZXh0cmFjdGVkX2RhdGEEK is 'extracted_data" encoded in Base64). To further hide the malicious part of the web server, an internal list can be created that contains valid user agents that the plugins use to browse the malicious website. Consequently, the web server can present a different website and appear innocent if the user agent of a web browser does not match the user agent of a plugin. ## 3 Proof of Concept The goal of the proof-of-concept is to demonstrate that we are able to use ChatGPT as a proxy between a victim and an attacker's C2 server. 
While our example is simple, this does not imply that we are functionally limited; creating a very powerful implementation was not in the scope of this study. ### Experimental Setup The experimental setup consists of multiple actors responsible for various parts of the process. * **ChatGPT**: Used to generate the payload and interact with our C2 server through plugins. * **Virtual Private Server (VPS)**: Responsible for hosting the C2 server that ChatGPT accesses. It has a public IPv4 address, which the victim's executable generates on-the-fly. * **Victim's executable**: Executable on the victim's machine. It generates the IPv4 address of the VPS, generates code to interact with the C2 server, polls the attacker's website via ChatGPT and executes commands on the victim's machine. It contains basic code snippets responsible for cleaning up the results of ChatGPT, as well as access tokens for ChatGPT, plugin IDs and prepared prompts to jailbreak ChatGPT and to query websites. * **Automated CAPTCHA solver service**: To bypass the CAPTCHA challenges, we are relying on a third-party service which automatically solves these challenges. ### Possible Scenario This section will describe a possible attack scenario, starting with the executable delivery, over its execution leading to the communication from the victim to our C2 server via ChatGPT, to the possible attack scenarios once the access on the victim's machine is granted. #### 3.2.1. **Infiltration and Social Engineering** To deploy the seemingly innocent executable, we need to convince a user to Figure 2. Payload execution and communication flow. execute it on their system. The current buzz around Generative AI is an attractive pretext to use social engineering as a means to attract curious victims to execute it. For instance, marketing the executable as a free ChatGPT PLUS crack could be sufficient to lure people into downloading and executing it. In the best case, we would want our victims to be people working in highly restricted networks which allows for outbound HTTP connections and doesn't yet have ChatGPT or other online LLMs blocked. The importance of HTTP communication is that the payload exclusively communicates via HTTP instead of other arbitrary ports are usually blocked by corporate firewalls. In its current implementation, when a user executes the executable, it will seem as if the program doesn't want to run, and they will attempt to close it. But by closing the window, it will merely continue executing in the background and establish a connection to ChatGPT and thus to our C2 server. We now have access to a machine in a theoretical network. In a future version, we could adapt the executable to present a legitimate interface that lets a user interact with ChatGPT, while performing the attack in the background. #### 3.2.2. **Victim Reconnaissance** Now that we have established a connection, we need to act quickly. We only have a limited amount of messages we can send between the client and the server (c.f. subsubsection 4.3.2), and we already "spent" one message on the payload generation, four messages on the IP address generation and one message to announce ourselves to the C2 server. Furthermore, polling the C2 server for new commands in a fixed interval also costs one message, as well as sending data back. We can start by identifying the user on the victim's machine, the current working directory, and the contents of that directory. 
We issue a single _shellCmd_ command containing all the commands in one string (shellCmd whoami && pwd && ls -a) to produce only one message. The string is now published on the website, and we are waiting for the victim's payload to poll our website through one of ChatGPT's web browsing plugins. After the victim's payload receives the response from ChatGPT containing the website's contents, it extracts the commands, interprets them, and executes them. Finally, it appends the produced text output to the website URL of our C2 and sends a request to ChatGPT to browse this resource. The C2 receives the GET request to an unknown path and interprets the path as the output of the last command. #### 3.2.3. **Data Exfiltration** From the output of the last command, we learn that the user conveniently has a plain text file called _passwords.txt_ (this file was placed for demonstration purposes; any arbitrary file should be possible if you have the correct permissions) in their current working directory. We attempt to exfiltrate the contents of the said file by sending another _shellCmd_ command with the string "shellCmd cat passwords.txt" to the victim. The polling payload on the victim's machine retrieves the command with a web browsing plugin, interprets the command, and runs "cat passwords.txt" in a shell. In the next step, the output of the shell command is appended to the URL of the C2 server, and a new request is issued to the C2 server with the URL containing the results. On the C2 side, we receive the GET request and extract the unlencoded data from the request path. Since we are now in possession of the victim's credentials, we could continue with further post-exploitation tasks. ## 4. Discussion The findings of this study underline the importance of securing openly available plugins in public LLMs. While the results provide valuable insights, we will now discuss the study's limitations, existing safeguards and future work recommendations. We also propose some theoretical approaches on how to mitigate attacks of this kind. ### Limitations During this study, some challenges were faced, e.g., unreliable ChatGPT outputs, already existing safeguards, and the ban of some exploitable plugins. The following subsections will further investigate these current limitations. #### 4.1.1. **Non-deterministic Payload Generation** Since the output of the LLMs is non-deterministic, successful generation of the correct payload is unreliable and thus cannot be guaranteed. While we constructed our prompts to be as straightforward as possible, there are still cases where the payload is missing important aspects of the required payload. Most common were missing function implementations for the commands from the C2 server that the payload should parse, or general errors of the parser itself. This resulted in the payload not interpreting the commands at all or, in some cases, interpreting them but not continuing the execution since the function bodies of the commands were missing. #### 4.1.2. **Plugin Availability** During the development of our proof-of-concept, we had to deal with situations where the plugin of choice to connect to our C2 server either was removed from ChatGPT's plugin list or not able to establish a connection any more, breaking the implementation until we found replacement plugins. For this reason, enabling multiple web-enabled plugins will provide strong fallback options, should some web-enabled plugins be removed from the "Plugin store". 
### Future Work Since we are able to demonstrate the use of LLMs as proxies with the proof-of-concept, several improvements can be made to extend the current version. In this section, we will centre our attention on obfuscating the prompts and creating a ransomware example using the same methods as described earlier. #### 4.2.1. **Prompt Obfuscation** The current proof-of-concept demonstrates the implementation of two-stage malware1, where the second stage is not downloaded from a machine controlled by the attacker. Instead, the payload is generated on the fly with the help of prompts that are included in the first-stage executable. While much antivirus software can be bypassed with this approach, humans can quickly dissect the malware and determine the inner workings of the malware due to the plain text nature of the prompts. Currently, the bootstrapping code is a Base64 encoded string that gets decoded on launch. This might fool simple analysis tools, but for the rest it results in security through obscurity. In a future version, text obfuscation techniques could be implemented to complicate this analysis process. #### 4.2.2. **Ransomware Example** While we were able to demonstrate a simplified version of a RAT communicating with the attacker via ChatGPT, it could also be possible to create a ransomware example using the same general process. When deploying the ransomware, it could bootstrap its payload, encrypt the victim's data, and send the key to the C2 server by appending it to the URL of the C2 server and making a request via ChatGPT. This possibility underlines the need for security measures highlighted in subsection 4.4. ### Existing Safeguards Whether intentional or inadvertent, certain safeguards, which will be enumerated in this section, currently exist that impede the reliable use of ChatGPT as a proxy for malware. #### 4.3.1. **Caaptcha Protection** In our proof-of-concept, we are using the Web API to interact with the GPT-4 model that uses plugins. To safeguard these endpoints from bot-related abuse, they are protected by a cloud security provider. Consequently, if suspicious traffic is detected, it presents the users with a challenge they need to solve to be admitted to the main content. In the case of our automated payload, this protection is often triggered. While we were able to bypass this protection multiple times with online solver services, it became increasingly more difficult to bypass the "suspicious behavior detection mechanism". #### 4.3.2. **Message Cap For GPT-4** In its current state, the ChatGPT GPT-4 model can only process a limited amount of requests in a defined period of time. This posed a constraint in the ability to make progress, since we had to anticipate the "cooldown phase" of the used credits to perform the experiments and refine the prompts. In a practical scenario, this limitation would limit the number of commands an attacker could execute on their victims: the payload on the victim's machine sends messages to poll for new commands from the C2 server, as well as to send results back to the C2 server. ### Possible Mitigations This section presents potential mitigations to the security issue at hand. It is important to note that this list is non-exhaustive, and further research may uncover additional strategies. * **Website Whitelisting:** While many available plugins were created to access specific systems, the plugins in our study are able to access any user-controlled website. 
Implementing a whitelisting system to only allow predefined websites fulfilling certain conditions (e.g., domain name should be at least x days old, valid HTTPS certificate, no direct connection to an IP address, etc.) or checking the validity on the fly could reduce the number of potentially dangerous C2 websites. * **Restricting access to online LLMs:** This mitigation is targeted towards people and entities that could become victims in this attack. Although an extreme approach, restricting the access of online LLMs on a network level (e.g., by updating firewall rules or using DNS filtering) would eliminate the possibility to communicate with the C2 server, removing the dangers of an attacker gaining control of the system. * **Prompt Scanning:** The nature of the proof-of-concept executable is a collection of prompts that bootstrap the malicious payload, which then periodically sends prompts to communicate with the C2 server. Since this is an entirely new approach of building malware which might occur more often in the wild, this calls for an evolution in malware detection tools. Such tools need to be capable of: 1. Detecting that prompts are present in the executable. 2. Discerning potentially malicious prompts from harmless prompts. 3. Implementing heuristic analysis to predict and identify new variants or evolutions of the malware. In response to this emerging malware paradigm, it's imperative that detection tools evolve swiftly to address and neutralize such advanced threats. ## 5. Related Work Large Language Models have recently been used to weaponize code and generate attacks in several case studies (Gupta et al., 2017; Gershake et al., 2018; Gershake et al., 2018; Gershake et al., 2018). Gupta et al. (Gupta et al., 2018) have summarized some possibilities of Jailbreaks and demonstrated the feasibility of prompt injection attacks on ChatGPT. Other studies have already highlighted the potential of LLMs to increase the attack vector of phishing attacks (Gupta et al., 2018) due to LLMs' capabilities of producing human-like content that can easily seem legitimate to users. Similarly, the previous study shows that ChatGPT can easily prompt users to reveal confidential information or to make users download malicious content. Greshake et al. (Greshake et al., 2018) mention a scenario where the attacker controls the content of a website to control what the LLM receives and the use of external API to communicate back to the attacker. In their approach, however, they show prompts on the attacker-controlled web page and the communication seemingly only stays within the LLM system and does not interact with the user's system. Conclusion Large Language Models have opened new opportunities, improving, and speeding up tasks in everyone's daily lives. However, in this study, we have proven how easily unsecured openly available LLMs and plugins can be misused to perform efficient and undetected attacks around the world. This proof-of-concept demonstrates the potential transformation of LLMs into proxies for malware attacks, allowing their misuse through plugins to establish connections with command and control servers. This facilitates complete access to a victim's machine without necessitating direct interaction between the victim and the LLM. This work highlights the need for new mitigation strategies and the development of further security guidelines on the deployment of LLMs. 
## Acknowledgement Special thanks to Mercatus Center at George Mason University, for their invaluable support in this research. The views presented in this paper do not represent official positions of the Mercatus Center or George Mason University.
2307.06435
A Comprehensive Overview of Large Language Models
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks and beyond. This success of LLMs has led to a large influx of research contributions in this direction. These works encompass diverse topics such as architectural innovations, better training strategies, context length improvements, fine-tuning, multi-modal LLMs, robotics, datasets, benchmarking, efficiency, and more. With the rapid development of techniques and regular breakthroughs in LLM research, it has become considerably challenging to perceive the bigger picture of the advances in this direction. Considering the rapidly emerging plethora of literature on LLMs, it is imperative that the research community is able to benefit from a concise yet comprehensive overview of the recent developments in this field. This article provides an overview of the existing literature on a broad range of LLM-related concepts. Our self-contained comprehensive overview of LLMs discusses relevant background concepts along with covering the advanced topics at the frontier of research in LLMs. This review article is intended to not only provide a systematic survey but also a quick comprehensive reference for the researchers and practitioners to draw insights from extensive informative summaries of the existing works to advance the LLM research.
Humza Naveed, Asad Ullah Khan, Shi Qiu, Muhammad Saqib, Saeed Anwar, Muhammad Usman, Naveed Akhtar, Nick Barnes, Ajmal Mian
2023-07-12T20:01:52Z
http://arxiv.org/abs/2307.06435v9
# A Comprehensive Overview of Large Language Models

###### Abstract

Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks and beyond. This success of LLMs has led to a large influx of research contributions in this direction. These works encompass diverse topics such as architectural innovations of the underlying neural networks, context length improvements, model alignment, training datasets, benchmarking, efficiency and more. With the rapid development of techniques and regular breakthroughs in LLM research, it has become considerably challenging to perceive the bigger picture of the advances in this direction. Considering the rapidly emerging plethora of literature on LLMs, it is imperative that the research community is able to benefit from a concise yet comprehensive overview of the recent developments in this field. This article provides that overview to the research community. It not only focuses on a systematic treatment of the existing literature on a broad range of LLM-related concepts, but also pays special attention to providing comprehensive summaries with extensive details about the individual existing models, datasets and major insights. We also pay heed to aligning our overview with the emerging outlook of this research direction by accounting for the other recently materializing reviews of the broader research direction of LLMs. Our self-contained comprehensive overview of LLMs discusses relevant background concepts along with covering the advanced topics at the frontier of this research direction. This review article is intended to not only provide a systematic survey, but also a quick comprehensive reference for the researchers and practitioners to draw insights from extensive informative summaries of the existing works to advance the LLM research direction.

Large Language Models, LLMs, chatGPT, LLM training, LLM Benchmarking

## I Introduction

Language plays a fundamental role in facilitating communication and self-expression for humans, and likewise, communication holds paramount importance for machines in their interactions with humans and other systems. Large Language Models (LLMs) have emerged as cutting-edge artificial intelligence systems designed to process and generate text, aiming to communicate coherently [1]. The need for LLMs stems from the growing demand for machines to handle complex language tasks, including translation, summarization, information retrieval, and conversational interactions. Recently, significant breakthroughs have been witnessed in language models, primarily attributed to deep learning techniques, advancements in neural architectures like transformers, increased computational capabilities, and the accessibility of training data extracted from the internet [2]. These developments have brought about a revolutionary transformation by enabling the creation of Large Language Models (LLMs) that can approximate human-level performance on certain evaluation benchmarks [3, 4]. LLMs, particularly pre-trained language models (PLMs), have shown tremendous generalization abilities for text understanding and generation tasks when trained in a self-supervised setting on a large corpus of text [5, 6, 7]. The performance of pre-trained language models (PLMs) improves significantly when fine-tuned for downstream tasks, surpassing the performance of models trained from scratch.

Fig. 1: The trends in the number of LLM models introduced over the years.
These characteristics of language models motivated researchers to train larger PLMs on even bigger datasets, and it was found that scaling the model and dataset size further improves the generalization abilities. Modern LLMs are now capable of performing various tasks like code generation, text generation, tool manipulation, reasoning, and understanding in zero-shot and few-shot settings in diverse domains, even without requiring any fine-tuning on downstream tasks [8, 9, 10]. Such generalization was previously unattainable with smaller models, marking a significant advancement in language modeling. This development has sparked enthusiasm and excitement within the research community for the enhancement of LLM architectures and training strategies, leading to the development of numerous LLMs [11, 12, 13, 8, 9, 10, 14]. The graph presented in Fig 1 depicts an increasing trend in the number of released LLMs, including open-source and closed-source models, over the years. Furthermore, Fig 2 highlights the names of significant releases of various LLMs and Fig 3 provides a broader overview of LLMs.

Fig. 2: Chronological display of LLM releases: light blue rectangles represent 'pre-trained' models, while dark rectangles correspond to 'instruction-tuned' models. Models on the upper half signify open-source availability, whereas those on the bottom half are closed-source. The chart illustrates the increasing trend towards instruction-tuned models and open-source models, highlighting the evolving landscape and trends in natural language processing research.

During the early days of Large Language Models (LLMs), many research efforts focused on developing models for transfer learning to downstream tasks [11, 12, 15] until the emergence of models like GPT-3 [8], which demonstrated impressive performance even without fine-tuning. Due to the closed-source nature of GPT-3, there was a demand for open-source alternatives, leading to the development of various models [9, 10] operating at the scale of GPT-3 and trained on extensive web-based datasets [16, 17, 18, 19]. Subsequently, researchers proposed several architectural designs and training strategies that showed superior performance compared to GPT-3 across various tasks [15, 14, 20, 21]. The performance of LLMs improves further with instruction fine-tuning, outperforming pre-trained LLMs on various benchmarks [22, 23]. Instruction fine-tuning of LLMs refers to a specific training approach that incorporates additional prompts or instructions during the fine-tuning phase to guide the output, thus enabling the users to have more fine-grained control over the outputs of LLMs. These prompts can be natural language instructions or example demonstrations, based on the task's requirement. In the literature, different datasets have been curated for instruction fine-tuning. These datasets include more instances and tasks that further improve the performance over baselines [24, 23, 25, 26]. When performing instruction fine-tuning, all the model parameters need to be updated. However, parameter-efficient fine-tuning takes a different approach by updating only a small number of parameters while still maintaining good performance. This method keeps the original model frozen and adds a few extra parameters at different locations within the model [27, 28, 29, 30, 31]. This approach helps achieve efficient fine-tuning while minimizing the impact on the model's overall performance.
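As a purely illustrative sketch of this idea (not the specific method of any single work cited above), the snippet below freezes a dense layer's weight matrix and learns only a small low-rank correction, in the spirit of low-rank/adapter-style parameter-efficient fine-tuning; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 8

W_frozen = rng.normal(size=(d_in, d_out))        # pre-trained weight, never updated
A = rng.normal(scale=0.01, size=(d_in, rank))    # trainable low-rank factor
B = np.zeros((rank, d_out))                      # zero-initialized: no change at the start

def forward(x):
    # Output of the adapted layer: frozen path plus a low-rank correction.
    return x @ W_frozen + x @ A @ B

# Only A and B are updated during fine-tuning, a small fraction of the frozen parameters.
x = rng.normal(size=(4, d_in))
print(forward(x).shape, A.size + B.size, W_frozen.size)
```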
Due to the success of LLMs on a wide variety of tasks, the research literature has recently experienced a large influx of LLM-related contributions. Naturally, the research community has started the effort of organizing this literature as survey articles. For instance, Zhou et al. [32] presented an overview of the foundation models. An impressive effort was recently made by Zhou et al. [33] in their survey that also discusses aspects related to model architectures, fine-tuning, emergent abilities, and more. Another recent survey on augmented language models provides a historical account of the foundation models [34]. In contrast to these surveys, our contribution focuses on providing a comprehensive yet concise overview of the general direction of LLM research. On one hand, this article summarizes more details of the individual models as compared to the existing efforts. On the other, it also covers more models in providing their summaries. It also delves into the details of model development, architectures, training datasets, and other related concepts to provide a self-contained comprehensive overview of this direction. Hence, this article addresses an important gap of providing a concise yet comprehensive overview of the rapidly developing general direction of LLM research. Our key contributions are summarized as follows.

* We present the first survey on the developments in LLM research with the specific aim of providing a concise yet comprehensive overview of the direction. We present extensive summaries that include fine-grained details of the reviewed contributions.
* In this self-contained article, we cover a range of concepts to comprehend the general direction of LLMs, including background concepts, popular models, crucial discoveries, related datasets and evaluation details, etc.
* Besides paying special attention to the chronological order of LLMs throughout the article, we also summarize major findings of the popular contributions, and provide detailed discussion on the key design and deployment aspects of LLMs to help practitioners effectively leverage this technology.

Fig. 3: A broader overview of LLMs, dividing LLMs into five branches: 1. Training 2. Inference 3. Evaluation 4. Applications 5. Challenges

It is noteworthy that although this article is the first contribution in its own right in terms of providing a concise yet comprehensive overview of LLMs, our work complements the recent (and emerging) surveys of this direction, e.g., [33, 32]. Infrequently, we also loosely follow the existing terminologies to ensure providing a more standardized outlook of this research direction. For instance, following [33], our survey considers a language model to be _large_ if it has 10B parameters or more. Hence, we discuss such models in detail in this survey. We refer the readers interested in smaller models to [35, 36, 32]. The organization of this paper is as follows. Section II discusses the background of LLMs. Section III focuses on the LLMs overview, architectures, and training pipelines and strategies. Section IV presents the key findings derived from each LLM. Section V highlights the configuration and parameters that play a crucial role in the functioning of these models. The LLM training and evaluation benchmarks are discussed in Section VI, followed by concluding remarks and future directions in the conclusion section.

## II Background

We provide the relevant background to understand the fundamentals related to LLMs in this section.
Aligned with our objective of providing a comprehensive overview of this direction, this section offers a comprehensive yet concise outline of the basic concepts. We focus more on the intuitive aspects and refer the readers interested in details to the original works. ### _Tokenization_ LLMs are trained on text to predict text, and similar to other natural language processing systems, they use tokenization [37] as the essential preprocessing step. It aims to parse the text into non-decomposing units called tokens. Tokens can be characters, subwords [38], symbols [39], or words, depending on the size and type of the model. Some of the commonly used tokenization schemes in LLMs are briefed here. Readers are encouraged to refer to [40] for a detailed survey. #### Ii-A1 WordPiece [41] It was introduced in [41] as a novel text segmentation technique for Japanese and Korean languages to improve the language model for voice search systems. WordPiece selects tokens that increase the likelihood of an n-gram-based language model trained on the vocabulary composed of tokens. #### Ii-A2 Bpe [39] Byte Pair Encoding (BPE) has its origin in compression algorithms. It is an iterative process of generating tokens where pairs of adjacent _symbols_ are replaced by a new symbol, and the occurrences of the most occurring symbols in the input text are merged. #### Ii-A3 UnigramLM [38] In this tokenization, a simple unigram language model (LM) is trained using an initial vocabulary of _subword_ units. The vocabulary is pruned iteratively by removing the lowest probability items from the list, which are the worst performing on the unigram LM. ### _Attention_ Attention, particularly _selective attention_, has been widely studied under perception, psychophysics, and psychology. Selective attention can be conceived as "the programming by the O of which stimuli will be processed or encoded and in what order this will occur" [42]. While this definition has its roots in visual perception, it has uncanny similarities with the recently formulated _attention_[43, 44] (which stimuli will be processed) and _positional encoding_ (in what order this will occur) [44] in LLMs. We discuss both in sections II-C and II-D, respectively. ### _Attention in LLMs_ The attention mechanism computes a representation of the input sequences by relating different positions (_tokens_) of these sequences. There are various approaches to calculating and implementing attention, out of which some famous types are given below. #### Ii-C1 Self-Attention [44] The self-attention is also known as intra-attention since all the queries, keys, and values come from the same block (encoder or decoder). The self-attention layer connects all the sequence positions with \(O(1)\) space complexity which is highly desirable for learning long-range dependencies in the input. #### Ii-C2 Cross Attention In encoder-decoder architectures, the outputs of the encoder blocks act as the queries to the intermediate representation of the decoder, which provides the keys and values to calculate a representation of the decoder conditioned on the encoder. This attention is called cross-attention. #### Ii-C3 Full Attention The naive implementation of calculating self-attention is known as full attention. #### Ii-C4 Sparse Attention [45] The self-attention has a time complexity of \(O(n^{2})\), which becomes prohibitive when scaling the LLMs to large context windows. 
An approximation to the self-attention was proposed in [45], which greatly enhanced the capacity of GPT series LLMs to process a greater number of input tokens in a reasonable time.

#### Ii-C5 Flash Attention [46]
The bottleneck for calculating the attention using GPUs lies in the memory access rather than the computational speed. Flash Attention uses the classical input tiling approach to process the blocks of the input in GPU on-chip SRAM rather than doing IO for every token from the High Bandwidth Memory (HBM). An extension of this approach to sparse attention retains the speed gains of the full-attention implementation. This trick allows even greater context-length windows in the LLMs as compared to those LLMs with sparse attention.

### _Encoding Positions_

The _attention_ modules do not consider the order of processing by design. Transformer [44] introduced "positional encodings" to feed information about the position of the tokens in input sequences. Several variants of positional encoding have been proposed [47, 48]. Interestingly, a recent study [49] suggests that adding this information may not matter for the state-of-the-art decoder-only Transformers.

_1. Absolute:_ This is the most straightforward approach to adding the sequence order information by assigning a unique identifier to each position of the sequence before passing it to the attention module.

_2. Relative:_ To pass the information on the relative dependencies of different tokens appearing at different locations in the sequence, a relative positional encoding is calculated by some kind of learning. Two famous types of relative encodings are:

_ALiBi:_ [47] In this approach, a scalar bias that increases with the distance between the positions of two tokens is subtracted from their attention score. This approach effectively favors using recent tokens for attention.

_RoPE:_ Keys, queries, and values are all vectors in the LLMs. RoPE [48] involves the rotation of the query and key representations at an angle proportional to the absolute positions of the tokens in the input sequence. This step results in a relative positional encoding scheme which decays with the distance between the tokens.

### _Activation Functions_

The activation functions serve a crucial role in the curve-fitting abilities of the neural networks, as proved in [50]. The modern activation functions used in LLMs are different from the earlier squashing functions but are critical to the success of LLMs. We discuss these activation functions in this section.

_1. ReLU [51]:_ Rectified linear unit (ReLU) is defined as \[ReLU(x)=max(0,x) \tag{1}\]

_2. GeLU [52]:_ Gaussian Error Linear Unit (GeLU) is the combination of ReLU, dropout [53] and zoneout [54]. It is the most widely used activation function in contemporary LLM literature.

_3. GLU variants [55]:_ Gated Linear Unit [56] is a neural network layer that is an element-wise product (\(\otimes\)) of a linear transformation and a sigmoid transformed (\(\sigma\)) linear projection of the input given as \[GLU(x,W,V,b,c)=(xW+b)\otimes\sigma(xV+c), \tag{2}\] where \(x\) is the input of the layer and \(W\), \(V\), \(b\), and \(c\) are learned parameters. GLU was modified in [55] to evaluate the effect of different variations in the training and testing of transformers, resulting in better empirical results. Here are the different GLU variations introduced in [55] and used in LLMs.
\[ReGLU(x,W,V,b,c)=max(0,xW+b)\otimes(xV+c),\]
\[GEGLU(x,W,V,b,c)=GELU(xW+b)\otimes(xV+c),\]
\[SwiGLU(x,W,V,b,c,\beta)=Swish_{\beta}(xW+b)\otimes(xV+c).\]

### _Layer Normalization_

Layer normalization leads to faster convergence and is a widely used component in transformers. In this section, we provide different normalization techniques widely used in LLM literature.

_1. LayerNorm:_ Layer norm computes statistics over all the hidden units in a layer \((l)\) as follows: \[u^{l}=\frac{1}{n}\sum_{i=1}^{n}a_{i}^{l}\qquad\sigma^{l}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(a_{i}^{l}-u^{l})^{2}}, \tag{3}\] where \(n\) is the number of neurons in the layer \(l\) and \(a_{i}^{l}\) is the summed input of the \(i\)-th neuron in layer \(l\). LayerNorm provides invariance to rescaling of the weights and re-centering of the distribution.

_2. RMSNorm:_ [57] proposed that the invariance properties of LayerNorm are spurious, and we can achieve the same performance benefits as we get from LayerNorm by using a computationally efficient normalization technique that trades off re-centering invariance with speed. LayerNorm gives the normalized summed input to layer \(l\) as follows \[\overline{a_{i}^{l}}=\frac{a_{i}^{l}-u^{l}}{\sigma^{l}}g_{i}^{l}, \tag{4}\] where \(g_{i}^{l}\) is the gain parameter. RMSNorm [57] modifies \(\overline{a_{i}^{l}}\) as \[\overline{a_{i}^{l}}=\frac{a_{i}^{l}}{\text{RMS}(\mathbf{a}^{l})}g_{i}^{l},\text{ where RMS}(\mathbf{a}^{l})=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(a_{i}^{l})^{2}}. \tag{5}\]

_3. Pre-Norm and Post-Norm:_ LLMs use transformer [44] architecture with some variations. The original implementation [44] used layer normalization after the residual connection, commonly called post-LN, concerning the order of _Multihead attention - Residual - LN_. There is another order of the normalization, referred to as pre-LN [58] due to the position of the normalization step before the self-attention layer as in _LN - Multihead attention - Residual_. Pre-LN is known to provide more stability in the training [59].

_4. DeepNorm:_ While pre-LN has certain benefits over post-LN training, pre-LN training has an unwanted effect on the gradients [59]: the earlier layers have larger gradients than those deeper in the network. DeepNorm [60] mitigates these adverse effects on the gradients. It is given as \[\mathbf{x}^{l_{f}}=LN(\alpha\mathbf{x}^{l_{p}}+G^{l_{p}}(\mathbf{x}^{l_{p}},\theta^{l_{p}})), \tag{6}\] where \(\alpha\) is a constant and \(\theta^{l_{p}}\) represents the parameters of layer \(l_{p}\). These parameters are scaled by another constant \(\beta\). Both of these constants depend only on the architecture.

### _Distributed LLM Training_

This section describes distributed LLM training approaches briefly. More details are available in [61, 62, 63, 9].

#### Iii-G1 Data Parallelism
Data parallelism replicates the model on multiple devices, where data in a batch gets divided across devices. At the end of each training iteration, weights are synchronized across all devices.

#### Iii-G2 Tensor Parallelism
Tensor parallelism shards a tensor computation across devices. It is also known as horizontal parallelism or intra-layer model parallelism.

#### Iii-G3 Pipeline Parallelism
Pipeline parallelism shards model layers across different devices. This is also known as vertical parallelism.

#### Iii-G4 Model Parallelism
A combination of tensor and pipeline parallelism is known as model parallelism.

#### Iii-G5 3D Parallelism
A combination of data, tensor, and model parallelism is known as 3D parallelism.
#### Iii-G6 Optimizer Parallelism Optimizer parallelism also known as zero redundancy optimizer [61] implements optimizer state partitioning, gradient partitioning, and parameter partitioning across devices to reduce memory consumption while keeping the communication costs as low as possible. ### _Libraries_ Some commonly used libraries for LLMs training are: 1) Transformer [64], 2) DeepSpeed [65], 3) Megataon-LM [62], 4) JAX [66], 5) Colossal-AI [67], 6) BMTrain [63], 7) FastMoE [68], and frameworks are 1) MindSpore [69], 2) PyTorch [70], 3) Tensorflow [71], 4) MXNet [72]. ### _Data PreProcessing_ This section briefly summarizes data preprocessing techniques used in LLMs training. #### Iii-G1 Quality Filtering For better results, training data quality is essential. Some approaches to filtering data are: 1) classifier-based and 2) heuristics-based. Classifier-based approaches train a classifier on high-quality data and predict the quality of text for filtering, whereas heuristics-based employ some rules for filtering like language, metrics, statistics, and keywords. #### Iii-G2 Data Deduplication Duplicated data can affect model performance and increase data memorization; therefore, to train LLMs, data deduplication is one of the preprocessing steps. This can be performed at multiple levels, like sentences, documents, and datasets. #### Iii-G3 Privacy Reduction Most of the training data for LLMs is collected through web sources. This data contains private information; therefore, many LLMs employ heuristics-based methods to filter information such as names, addresses, and phone numbers to avoid learning personal information. ### _Architectures_ Here we discuss the variants of the transformer architectures at a higher level which arise due to the difference in the application of the attention and the connection of transformer blocks. An illustration of attention patterns of these architectures is shown in Figure 4. #### Iii-G1 Encoder Decoder Transformers were originally designed as sequence transduction models and followed other prevalent model architectures for machine translation systems. They selected encoder-decoder architecture to train human language translation tasks. This architecture is adopted by [11, 15]. In this architectural scheme, an encoder encodes the input sequences to variable length context vectors, which are then passed to the decoder to maximize a joint objective of minimizing the gap between predicted token labels and the actual target token labels. #### Iii-G2 Causal Decoder The underlying objective of an LLM is to predict the next token based on the input sequence. While additional information from the encoder binds the prediction strongly to the context, it is found in practice that the LLMs can perform well in the absence of encoder [73], relying only on the decoder. Similar to the original encoder-decoder architecture's decoder block, this decoder restricts the flow of information backward, i.e., the predicted token \(t_{k}\) only depends on the tokens preceded by and up to \(t_{k-1}\). This is the most widely used variant in the state-of-the-art LLMs. #### Iii-G3 Prefix Decoder The causal masked attention is reasonable in the encoder-decoder architectures where the encoder can attend to all the tokens in the sentence from every position using self-attention. This means that the encoder can also attend to tokens \(t_{k+1}\) to \(t_{n}\) in addition to the tokens from \(t_{1}\) to \(t_{k-1}\) while calculating the representation for \(t_{k}\). 
But when we drop the encoder and only keep the decoder, we also lose this flexibility in attention. A variation in the decoder-only architectures is by changing the mask from strictly causal to fully visible on a portion of the input sequence, as shown in Figure 4. The Prefix decoder is also known as non-causal decoder architecture. ### _Pre-Training Objectives_ This section describes LLMs pre-training objectives. For more details see the paper [74]. #### Iii-G1 Full Language Modeling An autoregressive language modeling objective where the model is asked to predict future tokens given the previous tokens, an example is shown in Figure 5. Fig. 4: An example of attention patterns in language models, image is taken from [74]. #### Iv-C2 Prefix Language Modeling A non-causal training objective, where a prefix is chosen randomly and only remaining target tokens are used to calculate the loss. An example is shown in Figure 5. _3. Masked Language Modeling:_ In this training objective, tokens or spans (a sequence of tokens) are masked randomly and the model is asked to predict masked tokens given the past and future context. An example is shown in Figure 5. _4. Unified Language Modeling:_ Unified language modeling [75] is a combination of causal, non-causal, and masked language training objectives. Here in masked language modeling, the attention is not bidirectional but unidirectional, attending either left-to-right or right-to-left context. ### _Model Adaptation_ This section discusses the fundamentals of LLMs adaptation stages, from pre-training to fine-tuning for downstream tasks and utilization. An example of different training stages and inference in LLMs is shown in Figure 6. In this paper, we refer alignment-tuning to aligning with human preferences, while occasionally the literature uses the term alignment for different purposes. _1. Pre-Training:_ In the very first stage, the model is trained in a self-supervised manner on a large corpus to predict the next tokens given the input. The design choices of LLMs vary from encoder-decoder to decoder-only architectures with different building blocks and loss functions in sections II-F, II-E, II-K. _2. Fine-Tuning:_ There are different styles to fine-tune an LLM. This section briefly discusses fine-tuning approaches. _Transfer Learning:_ The pre-trained LLMs perform well for various tasks [8, 14]. But to improve the performance for a downstream task, pre-trained models are fine-tuned with the task-specific data [11, 12], known as transfer learning. _Instruction-tuning:_ To enable a model to respond to user queries effectively, the pre-trained model is fine-tuned on instruction formatted data i.e., instruction and an input-output pair. Instructions generally comprise multi-task data in plain natural language, guiding the model to respond according to the prompt and the input. This type of fine-tuning improves zero-shot generalization and downstream task performance. Details on formatting instruction data and its various styles are available in [25, 33, 24]. _Alignment-tuning:_ LLMs are prone to generate false, biased, and harmful text. To make them helpful, honest, and harmless models are aligned using human feedback. Alignment involves asking LLMs to generate unexpected responses and then updating their parameters to avoid such responses [76, 77, 78]. It ensures LLMs operate according to human intentions and values. A model is defined to be an "aligned" model if the model fulfills three criteria of helpful, honest, and harmless or "HHH" [79]. 
Researchers employ reinforcement learning with human feedback (RLHF) [80] for model alignment. In RLHF, a fine-tuned model on demonstrations is further trained with reward modeling (RM) and reinforcement learning (RL), shown in Figure 6. Below we briefly discuss RM and RL pipelines in RLHF. _Reward modeling:_ trains a model to rank generated responses according to human preferences using a classification objective. To train the classifier humans annotate LLMs generated responses based on HHH criteria. _Reinforcement learning:_ in combination with the reward model is used for alignment in the next stage. The previously trained reward model ranks LLM-generated responses into preferred vs. dispreferred, which is used to align the model with proximal policy optimization (PPO). This process repeats iteratively until convergence. _Parameter-Efficient Tuning:_ LLMs require bigger memory and computing for training. To train them using fewer resources, researchers suggested various parameter-efficient fine-tuning techniques by updating few parameters, either by adding new parameters to the model or the existing ones. Some of the commonly used methods are discussed below. _Prompt Tuning:_[30, 81] adds trainable prompt token embeddings as prefixes or free-style to the input token embeddings. During fine-tuning only these embedding parameters are trained for the downstream task while keeping the rest of the weights frozen. _Prefix Tuning:_[31] adds task-specific trainable prefix vectors to the transformer layers, where only prefix parameters are fine-tuned, and the rest of the model stays frozen. The input sequence tokens can attend prefixes acting as virtual tokens. _Adapter Tuning:_ module is an encoder-decoder architecture that is placed either sequential or parallel to the attention and feed-forward layers in the transformer block [28, 29, 82]. Only these layers are fine-tuned, and the rest of the model is kept frozen. _3. Prompting/Utilization:_ Prompting is a method to query trained LLMs for generating responses, as illustrated in Figure 6. LLMs can be prompted in various prompt setups, where they can be adapted to the instructions without fine-tuning and in other cases with fine-tuning on data containing different prompt styles [25, 83, 84]. A good guide on prompt engineering is available at [85]. Below, we will discuss various widely used prompt setups. _Zero-Shot Prompting:_ LLMs are zero-shot learners and capable of answering queries never seen before. This style of prompting requires LLMs to answer user questions without seeing any examples in the prompt. _In-context Learning:_ Also known as few-shot learning, here, multiple input-output demonstration pairs are shown to the model to generate the desired response. This adaptation style is also called few-shot learning. A discussion on formatting Fig. 5: An example of language model training objectives, image from [74]. in-context learning (ICL) templates is available in [86, 33, 26, 25]. _Reasoning in LLMs:_ LLMs are zero-shot reasoners and can be provoked to generate answers to logical problems, task planning, critical thinking, etc. with reasoning. Generating reasons is possible only by using different prompting styles, whereas to improve LLMs further on reasoning tasks many methods [25, 24] train them on reasoning datasets. We discuss various prompting techniques for reasoning below. 
_Chain-of-Thought (CoT):_ A special case of prompting where demonstrations contain reasoning information aggregated with inputs and outputs so that the model generates outcomes with step-by-step reasoning. More details on CoT prompts are available in [87, 88, 83].

_Self-Consistency:_ Improves CoT performance by generating multiple responses and selecting the most frequent answer [89].

_Tree-of-Thought (ToT):_ Explores multiple reasoning paths with possibilities to look ahead and backtrack for problem-solving [90].

_Single-Turn Instructions:_ In this prompting setup, LLMs are queried only once with all the relevant information in the prompt. LLMs generate responses by understanding the context either in a zero-shot or few-shot setting.

_Multi-Turn Instructions:_ Solving a complex task requires multiple interactions with LLMs, where feedback and responses from the other tools are given as input to the LLM for the next rounds. This style of using LLMs in the loop is common in autonomous agents.

Fig. 6: A basic flow diagram depicting various stages of LLMs from pre-training to prompting/utilization. Prompting LLMs to generate responses is possible at different training stages like pre-training, instruction-tuning, or alignment tuning.

## III Large Language Models

This section reviews LLMs, briefly describing their architectures, training objectives, pipelines, datasets, and fine-tuning details.

### _Pre-Trained LLMs_

Here, we provide summaries of various well-known pre-trained LLMs with significant discoveries, changing the course of research and development in NLP. These LLMs have considerably improved the performance in NLU and NLG domains, and are widely fine-tuned for downstream tasks.

_1. General Purpose:_

_1.1 T5 [11]:_ An encoder-decoder model employing a unified text-to-text training for all NLP problems, shown in Figure 7. T5 places layer normalization outside the residual path in a conventional transformer model [44]. It uses masked language modeling as a pre-training objective where spans (consecutive tokens) are replaced with a single mask instead of separate masks for each token. This type of masking speeds up the training as it produces shorter sequences (a toy sketch of this span corruption is given below, after the mT5 summary). After pre-training, the model is fine-tuned using adapter layers [82] for downstream tasks.

Fig. 7: Unified text-to-text training example, source image from [11].

#### 1.2. GPT-3 [8]
The GPT-3 architecture is the same as the GPT-2 [91] but with dense and sparse attention in transformer layers similar to the Sparse Transformer [45]. It shows that large models can train on larger batch sizes with a lower learning rate; in order to decide the batch size during training, GPT-3 uses the gradient noise scale as in [92]. Overall, GPT-3 increases model parameters to 175B showing that the performance of large language models improves with the scale and is competitive with the fine-tuned models.

#### 1.3 mT5 [12]
A multilingual T5 model [11] trained on the mC4 dataset with 101 languages. The dataset is extracted from the public common crawl scrape. The model uses a larger vocab size of 250,000 to cover multiple languages. To avoid over-fitting or under-fitting for a language, mT5 employs a data sampling procedure to select samples from all languages. The paper suggests using a small amount of pre-training data, including all languages, when fine-tuning for a task using English language data. This allows the model to generate correct non-English outputs.
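As a concrete illustration of the span-corruption objective described for T5 above, the toy sketch below masks one span of consecutive tokens with a single sentinel and builds the corresponding target; the real implementation corrupts roughly 15% of the tokens with multiple sentinels and samples span lengths, so this is a simplification rather than the original recipe.

```python
import random

def span_corrupt(tokens, span_len=3, sentinel="<extra_id_0>"):
    """Replace one span of consecutive tokens with a single sentinel (toy T5-style sketch)."""
    start = random.randrange(0, max(1, len(tokens) - span_len))
    corrupted = tokens[:start] + [sentinel] + tokens[start + span_len:]
    target = [sentinel] + tokens[start:start + span_len]
    return corrupted, target

tokens = "the quick brown fox jumps over the lazy dog".split()
inp, tgt = span_corrupt(tokens)
# Possible output:
# inp -> ['the', 'quick', '<extra_id_0>', 'over', 'the', 'lazy', 'dog']
# tgt -> ['<extra_id_0>', 'brown', 'fox', 'jumps']
```

Because an entire span collapses to a single sentinel on the input side, the encoder sees a shorter sequence, which is the training speed-up mentioned above.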
#### 1.4 PanGu-\(\alpha\) [93]
An autoregressive model that has a query layer at the end of standard transformer layers, an example of which is shown in Figure 8, with the aim of predicting the next token. Its structure is similar to the transformer layer but with an additional embedding for the next position in the attention mechanism, given in Eq. 7. \[a=p_{n}W_{h}^{q}{W_{h}^{k}}^{T}H_{L}^{T} \tag{7}\]

Fig. 8: An example of the PanGu-\(\alpha\) architecture, image sourced from [93].

#### 1.5 Cpm-2 [13]
Cost-efficient Pre-trained language Models (CPM-2) pre-trains bilingual (English and Chinese) 11B and 198B mixture-of-experts (MoE) models on the WuDaoCorpus [94] dataset. The tokenization process removes " " white space tokens in the SentencePiece tokenizer. The models are trained with knowledge inheritance, starting with only the Chinese language in the first stage and then adding English and Chinese data. This trained model gets duplicated multiple times to initialize the 198B MoE model. Moreover, to use the model for downstream tasks, CPM-2 experimented with both complete fine-tuning and prompt fine-tuning as in [27], where only prompt-related parameters are updated by inserting prompts at various positions, front, middle, and back. CPM-2 also proposes INFMOE, a memory-efficient framework with a strategy to dynamically offload parameters to the CPU for inference at a 100B scale. It overlaps data movement with inference computation for lower inference time.

#### 1.6 Ernie 3.0 [95]
ERNIE 3.0 takes inspiration from multi-task learning to build a modular architecture using Transformer-XL [96] as the backbone. The universal representation module is shared by all the tasks and serves as the basic block for task-specific representation modules, which are all trained jointly for natural language understanding, natural language generation, and knowledge extraction. This LLM is primarily focused on the Chinese language, claims to train on the largest Chinese text corpora for LLM training, and achieved state-of-the-art in 54 Chinese NLP tasks.

#### 1.7 Jurassic-1 [97]
A pair of auto-regressive language models, including a 7B-parameter J1-Large model and a 178B-parameter J1-Jumbo model. The training vocabulary of Jurassic-1 comprises word pieces, complete words, and multi-word expressions without any word boundaries, where possible out-of-vocabulary instances are interpreted as Unicode bytes. Compared to the GPT-3 counterparts, the Jurassic-1 models apply a more balanced depth-to-width self-attention architecture [98] and an improved tokenizer for a faster prediction based on broader resources, achieving a comparable performance in zero-shot learning tasks and a superior performance in few-shot learning tasks given the ability to feed more examples as a prompt.

#### 1.8 HyperCLOVA [99]
A Korean language model with GPT-3 architecture.

#### 1.9 Yuan 1.0 [100]
Trained on a Chinese corpus with 5TB of high-quality text collected from the Internet. A Massive Data Filtering System (MDFS) built on Spark is developed to process the raw data via coarse and fine filtering techniques. To speed up the training of Yuan 1.0 with the aim of saving energy expenses and carbon emissions, various factors that improve the performance of distributed training are incorporated in architecture and training: increasing the hidden size improves pipeline and tensor parallelism performance, larger micro-batches improve pipeline parallelism performance, and a higher global batch size improves data parallelism performance. In practice, the Yuan 1.0 model performs well on text classification, Winograd Schema, natural language inference, and reading comprehension tasks.

#### 1.10 Gopher [101]
The Gopher family of models ranges from 44M to 280B parameters in size to study the effect of _scale_ on LLM performance. The 280B model beats GPT-3 [8], Jurassic-1 [97], MT-NLG [21], and others on 81% of the evaluated tasks.

#### 1.11 Ernie 3.0 Titan [102]
ERNIE 3.0 Titan extends ERNIE 3.0 by training a larger model with 26x the number of parameters of the latter. This bigger model outperformed other state-of-the-art models in 68 NLP tasks. LLMs produce text with incorrect facts. In order to have control of the generated text with factual consistency, ERNIE 3.0 Titan adds another task, _Credible and Controllable Generations_, to its multi-task learning setup. It introduces additional self-supervised adversarial and controllable language modeling losses to the pre-training step, which enables ERNIE 3.0 Titan to beat other LLMs in their manually selected Factual QA task set evaluations.

#### 1.12 GPT-NeoX-20B [103]
An auto-regressive model that largely follows GPT-3 with a few deviations in architecture design, trained on the Pile dataset without any data deduplication. GPT-NeoX has parallel attention and feed-forward layers in a transformer block, given in Eq. 8, that increases throughput by 15%. It uses rotary positional embedding [48], applying it to only 25% of the embedding vector dimension as in [104]. This reduces the computation without performance degradation. Opposite to GPT-3, which uses dense and sparse layers, GPT-NeoX-20B uses only dense layers. The hyperparameter tuning at this scale is difficult; therefore, the model chooses hyperparameters from the method [8] and interpolates values between 13B and 175B models for the 20B model. The model training is distributed among GPUs using both tensor and pipeline parallelism. \[x+Attn(LN_{1}(x))+FF(LN_{2}(x)) \tag{8}\]

#### 1.13 Opt [10]
It is a clone of GPT-3, developed with the intention to open-source a model that replicates GPT-3 performance. Training of OPT employs dynamic loss scaling [105] and restarts from an earlier checkpoint with a lower learning rate whenever loss divergence is observed. Overall, the performance of the OPT-175B model is comparable to the GPT-3 175B model.

#### 1.14 Bloom [9]
A causal decoder model trained on the ROOTS corpus with the aim of open-sourcing an LLM. The architecture of BLOOM is shown in Figure 9, with differences like ALiBi positional embedding and an additional normalization layer after the embedding layer, as suggested by the bitsandbytes1 library. These changes stabilize training with improved downstream performance.

Footnote 1: [https://github.com/TimDettmers/bitsandbytes](https://github.com/TimDettmers/bitsandbytes)

Fig. 9: The BLOOM architecture example sourced from [9].

#### 1.15 GlaM [106]
Generalist Language Model (GLaM) represents a family of language models using a sparsely activated decoder-only mixture-of-experts (MoE) structure [107, 108]. To gain more model capacity while reducing computation, the experts are sparsely activated where only the best two experts are used to process each input token. The largest GLaM model, GLaM (64B/64E), is about 7\(\times\) larger than GPT-3 [8], while only a part of the parameters is activated per input token.
The largest GLaM (64B/64E) model achieves better overall results as compared to GPT-3 while consuming only one-third of GPT-3's training energy.

#### 1.16 Mt-Nlg [21]
A 530B causal decoder based on the GPT-2 architecture that has roughly 3\(\times\) the GPT-3 model parameters. MT-NLG is trained on filtered high-quality data collected from various public datasets and blends various types of datasets in a single batch, which beats GPT-3 on a number of evaluations.

#### 1.17 Chinchilla [109]
A causal decoder trained on the same dataset as the Gopher [101] but with a slightly different data sampling distribution (sampled from MassiveText). The model architecture is similar to the one used for Gopher, with the exception of the AdamW optimizer instead of Adam. Chinchilla identifies the relationship that model size should be doubled for every doubling of training tokens. Over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens are trained to get the estimates for compute-optimal training under a given budget. The authors train a 70B model with the same compute budget as Gopher (280B) but with 4 times more data. It outperforms Gopher [101], GPT-3 [8], and others on various downstream tasks, after fine-tuning.

#### 1.18 AlexaTM [110]
An encoder-decoder model, where encoder weights and decoder embeddings are initialized with a pre-trained encoder to speed up training. The encoder stays frozen for the initial 100k steps and is later unfrozen for end-to-end training. The model is trained on a combination of denoising and causal language modeling (CLM) objectives, concatenating a \([CLM]\) token at the beginning for mode switching. During training, the CLM task is applied for 20% of the time, which improves the in-context learning performance.

#### 1.19 PaLM [14]
A causal decoder with parallel attention and feed-forward layers similar to Eq. 8, speeding up training by roughly 15%. Additional changes to the conventional transformer model include SwiGLU activation, RoPE embeddings, multi-query attention that saves computation cost during decoding, and shared input-output embeddings. During training, loss spiking was observed, and to fix it, model training was restarted from a checkpoint 100 steps earlier by skipping 200-500 batches around the spike. Moreover, the model was found to memorize around 2.4% of the training data at the 540B model scale, whereas this number was lower for smaller models.

_PaLM-2 [111]:_ A smaller multi-lingual variant of PaLM, trained for more iterations on a better quality dataset. PaLM-2 shows significant improvements over PaLM, while reducing training and inference costs due to its smaller size. To lessen toxicity and memorization, it appends special tokens to a fraction of the pre-training data, which shows a reduction in generating harmful responses.

_1.20 U-PaLM [20]:_ This method trains PaLM for 0.1% additional compute with the UL2 (also named UL2Restore) objective [15], using the same dataset, and significantly outperforms the baseline on various NLP tasks, including zero-shot, few-shot, commonsense reasoning, CoT, etc. Training with UL2R involves converting a causal decoder PaLM to a non-causal decoder PaLM and employing 50% sequential denoising, 25% regular denoising, and 25% extreme denoising loss functions.

_1.21 UL2 [15]:_ An encoder-decoder architecture trained using a mixture of denoisers (MoD) objectives.
Denoisers include 1) R-Denoiser: a regular span masking, 2) S-Denoiser: which corrupts consecutive tokens of a large sequence and 3) X-Denoiser: which corrupts a large number of tokens randomly. During pre-training, UL2 includes a denoiser token from \(R,S,X\) to represent a denoising setup. It helps improve fine-tuning performance for downstream tasks that bind the task to one of the upstream training modes. This MoD style of training outperforms the T5 model on many benchmarks. _1.22 GLM-130B [112]:_ GLM-130B is a bilingual (English and Chinese) model trained using an auto-regressive mask infilling pre-training objective similar to the GLM [113]. This training style makes the model bidirectional as compared to GPT-3, which is unidirectional. Opposite to the GLM, the training of GLM-130B includes a small amount of multi-task instruction pre-training data (5% of the total data) along with the self-supervised mask infilling. To stabilize the training, it applies embedding layer gradient shrink. _1.23 LLaMA [114, 77]:_ A set of decoder-only language models varying from 7B to 70B parameters. LLaMA models series is the most famous among the community for parameter-efficient and instruction tuning. _LLaMA-1 [114]:_ Implements efficient causal attention [115] by not storing and computing masked attention weights and key/query scores. Another optimization is reducing number of activations recomputed in backward pass, as in [116]. _LLaMA-2 [77]:_ This work is more focused towards fine-tuning a safer and better LLaMA-2-Chat model for dialogue generation. The pre-trained model has 40% more training data with a larger context length and grouped-query attention. _1.24 PanGu-2 [117]:_ An autoregressive model with parameters copied from PanGu-\(\alpha\) and extended to a trillion scale with Random Routed Experts (RRE), the architectural diagram is shown in Figure 10. RRE is similar to the MoE architecture, with distinctions at the second level, where tokens are randomly routed to experts in a domain instead of using a learnable gating method. The model has bottom layers densely activated and shared across all domains, whereas top layers are sparsely activated according to the domain. This training style allows extracting task-specific models and reduces catastrophic forgetting effects in case of continual learning. _2. Coding:_ _2.1 CodeGen [118]:_ CodeGen has similar architecture to the PaLM [14], i.e., parallel attention, MLP layers, and RoPE embeddings. The model is trained on both natural language and programming language data sequentially (trained on the first dataset, then the second and so on) on the following datasets 1) PILE, 2) BIGQUERY and 3) BIGPYTHON. CodeGen proposed a multi-step approach to synthesizing code. The purpose is to simplify the generation of long sequences where the previous prompt and generated code are given as input with the next prompt to generate the next code sequence. CodeGen opensource a Multi-Turn Programming Benchmark (MTPB) to evaluate multi-step program synthesis. _2.2 Codex [119]:_ This LLM is trained on a subset of public Python Github repositories to generate code from docstrings. Computer programming is an iterative process where the programs are often debugged and updated before fulfilling the requirements. Similarly to this, Codex generates 100 versions of a program by repetitive sampling for a given description, which produces a working solution for 77.5% of the problems passing unit tests. Its powerful version powers Github Copilot2. 
Footnote 2: [https://github.com/features/copilot](https://github.com/features/copilot) _2.3 AlphaCode [120]:_ A set of large language models, ranging from 300M to 41B parameters, designed for competition-level code generation tasks. It uses the multi-query attention [121] to reduce memory and cache costs. Since competitive programming problems highly require deep reasoning and an understanding of complex natural language algorithms, the AlphaCode models are pre-trained on filtered GitHub code in popular languages and then fine-tuned on a new competitive programming dataset named CodeContests. The CodeContests dataset mainly contains problems, solutions, and test cases collected from the Codeforces platform3. The pre-training employs standard language modeling objectives, while GOLD [122] with tempering [123] serves as the training objective for the fine-tuning on CodeContests data. To evaluate the performance of AlphaCode, simulated programming competitions are hosted on the Codeforces platform: overall, AlphaCode ranks at the top 54.3% among over 5000 competitors, where its Codefores rating is within the top 28% of recently participated users. Footnote 3: [https://codeforces.com/](https://codeforces.com/) _2.4 CodeT5+ [124]:_ CodeT5+ is based on CodeT5 [125], with shallow encoder and deep decoder, trained in multiple stages initially unimodal data (code) and later bimodal data (text-code pairs). Each training stage has different training objectives and activates different model blocks encoder, decoder, or both according to the task. The unimodal pre-training includes span denoising and CLM objectives, whereas bimodal pre-training objectives contain contrastive learning, matching, and CLM for text-code pairs. CodeT5+ adds special tokens with the text to enable task modes, for example, \([CLS]\) for contrastive loss, \([Match]\) for text-code matching, etc. _StarCoder [126]:_ A decoder-only model with SantaCoder architecture, employing Flash attention to scale up the context length to 8k. The StarCoder trains an encoder to filter names, emails, and other personal data from the training data. Its fine-tuned variant outperforms PaLM, LLaMA, and LAMDA on HumanEval and MBPP benchmarks. _3. Scientific Knowledge:_ _3.1 Galactica [127]:_ A large curated corpus of human scientific knowledge with 48 million papers, textbooks, lecture notes, millions of compounds and proteins, scientific websites, encyclorebials, and more are trained using metaseal library3, which is built on PyTorch and fairscale [128]. The model wraps reasoning datasets with \(<work>\) token to provide step-by-step reasoning context to the model, which has been shown to improve the performance on reasoning tasks. _4. Dialog:_ _4.1 LaMDA [129]:_ A decoder-only model pre-trained on public dialog data, public dialog utterances, and public web documents, where more than 90% of the pre-training data is in English. LaMDA is trained with the objective of producing responses that exhibit high levels of quality, safety, and groundedness. To achieve this, discriminative and generative fine-tuning techniques are incorporated to enhance the model's safety and quality aspects. As a result, the LaMDA models can be utilized as a general language model performing various tasks. _5. Finance:_ _5.1 BloombergGPT [130]:_ A non-causal decoder model trained using both financial ("FINIPLE" from the Bloomberg archive) and general-purpose datasets. The model's architecture is similar to the BLOOM [9] and OPT [10]. 
It allocates 50B parameters to different blocks of the model using the approach [131]. For effective training, BloombergGPT packs documents together with \(<|endoftext|>\) to use maximum sequence length, use warmup batch size starting from 1024 to 2048, and manually reduces the learning rate multiple times during the training. _5.2 Xuan Yuan 2.0 [132]:_ A Chinese financial chat model with BLOOM's [9] architecture trained on a combination of general purpose, financial, general purpose instructions, and financial institutions datasets. Xuan Yuan 2.0 combined the pre-training and fine-tuning stages to avoid catastrophic forgetting. ### _Fine-Tuned LLMs_ Pre-trained LLMs have excellent generalization abilities to unseen tasks. However, because they are generally trained with the objective of next token prediction, LLMs have limited capacity to follow user intent and are prone to generate unethical, toxic or inaccurate responses [76]. For their effective utilization, LLMs are fine-tuned to follow instructions [25, 22, 24] and generate safe responses [76], which also results in increasing zero-shot, few-shot, and cross-task generalization [24, 25, 26], with minimal compute increment, e.g., 0.2% of the total pre-training for PaLM 540B [25]. We review various fine-tuned LLMs and strategies for effective fine-tuning in this section. _1. Instruction-Tuning with Manually Created Datasets:_ Numerous hand-crafted instruction-tuning datasets with different design choices are proposed in the literature to instruction-tune LLMs. The performance of fine-tuned LLMs depends on multiple factors, such as dataset, instruction diversity, prompting templates, model size, and training objectives. Keeping this in view, diverse fine-tuned models have emerged in the literature using manually created datasets. The models T0 [22] and mT0 (multi-lingual) [134] employ templates to convert existing datasets into prompt datasets. They have shown improvements in generalization to zero-shot and held-out tasks. Tk-Instruct [26] fine-tuned the T5 model with in-context instructions to study generalization on unseen tasks when given in-context instructions during test time. The model outperformed Instruct-GPT, despite being smaller in size, i.e., 11B parameters as compared to 175B of GPT-3. _Increasing Tasks and Prompt Setups:_ Zero-shot and few-shot performance improves significantly by expanding task collection and prompt styles. OPT-IML [24] and Flan [25] curated larger 2k and 1.8k task datasets, respectively. While increasing task size alone is not enough, OPT-IML and Flan add more prompting setups in their datasets, zero-shot, few-shot, and CoT. In continuation, CoT Collection [83] fine-tunes Flan-T5 further on 1.88M CoT samples. Another method [84] uses symbolic tasks with tasks in T0, Flan, etc. _2. Instruction-Tuning with LLMs Generated Datasets:_ Generating an instruction-tuning dataset requires carefully writing instructions and input-output pairs, which are often written by humans, smaller in size, and less diverse. To overcome this, self-instruct [135] proposed an approach to prompt available LLMs to generate instruction-tuning datasets. Self-instruct outperformed models trained on manually created dataset SUPER-NATURALINSTRUCTIONS (a dataset with 1600+ tasks) [26] by 33%. It starts with a seed of 17 tasks, 1 instruction, and 1 sample per task and iteratively generates new instructions (52k) and instances (82k input-output pairs) using GPT-3 [8]. 
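The bootstrapping loop of self-instruct can be summarized with the schematic sketch below; `llm_generate` stands in for a call to an actual model API, and the novelty filter shown is a toy placeholder for the ROUGE-based similarity filtering and other heuristics used in [135].

```python
import random

def self_instruct_loop(seed_instructions, llm_generate, rounds=3, demos_per_prompt=4):
    """Schematic self-instruct-style bootstrapping (placeholders, not the original pipeline)."""
    pool = list(seed_instructions)
    for _ in range(rounds):
        # Sample a few existing instructions as in-context demonstrations.
        demos = random.sample(pool, min(demos_per_prompt, len(pool)))
        prompt = "Come up with new task instructions:\n" + "\n".join(f"- {d}" for d in demos)
        candidates = llm_generate(prompt)  # hypothetical: returns a list of strings
        # Toy novelty filter; the original work filters near-duplicates by ROUGE similarity.
        pool += [c for c in candidates if c not in pool and len(c.split()) > 3]
    return pool
```

Input-output instances for each new instruction are generated in a similar prompted step before the collected data is used for fine-tuning.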
Contrary to this, Dynosaur [136] uses the meta-data of datasets on Huggingface to prompt LLMs to Fig. 10: This example illustrates the PanGu-\(\sum\) architecture, as depicted in the image sourced from [117]. \begin{table} \begin{tabular}{p{14.2pt} p{284.5pt}} \hline \hline Models & Findings \& Insights \\ \hline \hline T5 & * Encoder and decoder with shared parameters perform equivalently when parameters are not shared * Fine-tuning model layers (adapter layers) work better than the conventional way of training on only classification layers \\ \hline GPT-3 & * Few-shot performance of LLMs is better than the zero-shot, suggesting that LLMs are meta-learners \\ \hline mT5 & * Large multi-lingual models perform equivalently to single language models on downstream tasks. However, smaller multi-lingual models perform worse \\ \hline PanGu-\(\alpha\) & * LLMs are good at a few shot capabilities \\ \hline \multirow{4}{*}{CPM-2} & * Prompt fine-tuning requires updating very few parameters while achieving performance comparable to full model fine-tuning * Prompt fine-tuning takes more time to converge as compared to full model fine-tuning * Inserting prompt tokens in-between sentences can allow the model to understand relations between sentences and long sequences * In an analysis, CPM-2 finds that prompts work as a provider (additional context) and aggregator (aggregate information with the input text) for the model \\ \hline \multirow{4}{*}{Codex} & * This LLM focuses on code evaluations and introduces a novel way of selecting the best code samples. * The results indicate it is possible to accurately select code samples using heuristic ranking in lieu of a detailed evaluation of each sample, which may not be feasible or feasible in some situations. \\ \hline \multirow{4}{*}{ERNIE 3.0} & * ERNIE 3.0 shows that a modular LLM architecture with a universal representation module and task-specific representation module helps in finentuning phase. * Optimizing the parameters of a task-specific representation network during the fine-tuning phase is an efficient way to take advantage of the powerful pretrained model. \\ \cline{2-3} & * The performance of an LLM is highly related to the network size. * To improve runtime performance, more operations can be performed in parallel (width) rather than sequentially (depth). * To efficiently represent and fit more text in the same context length, the model uses a larger vocabulary to train a SentencePiece tokenizer without restricting it to word boundaries. This tokenizer improvement can further benefit few-shot learning tasks. \\ \hline HyperCLOVA & * By employing prompt-based tuning, the performances of models can be improved, often surpassing those of state-of-the-art * The model architecture that excels in pre-training and fine-tuning cases may exhibit contrasting behavior in zero-shot and few-shot learning. \\ \cline{2-3} Gopher & * Relative encodings enable models to be evaluated for longer sequences than those on which it was trained. \\ \cline{2-3} & * This LLM builds on top of ERNIE 3.0 and add a self-supervised adversarial loss to distinguish whether a text is generated or the original one. * This distinction ability between real and generate text improves the LLM’s performance as compared to ERNIE 3.0. 
\\ \cline{2-3} & * Parallel attention + FF layers speed-up training 15% with the same performance as with cascaded layers * Initializing feed-forward output layers before residuals with scheme in [133] avoids activations from growing with increasing depth and width * Training on Pile outperforms GPT-3 on five-shot \\ \cline{2-3} & * Restart training from an earlier checkpoint with a lower learning rate if loss diverges * Model is prone to generate repetitive text and stuck in a loop \\ \cline{2-3} & * None \\ \cline{2-3} & * Galactic’s performance has continued to improve across validation set, in-domain, and out-of-domain benchmarks, even with multiple repetitions of the corpus, which is superior to existing research on LLMs. \\ \cline{2-3} Galactic & * A working memory token approach can achieve strong performance over existing methods on mathematical MMLU and MATI benchmarks. It sets a new state-of-the-art on several downstream tasks such as PubMedQA (77.6%) and MedMCQA dev (52.9%). \\ \cline{2-3} & * The feed-forward component of each Transformer layer can be replaced with a mixture-of-experts (MoE) module consisting of a set of independent feed-forward networks (_i.e._, the ‘experts’). By sparsely activating these experts, the model capacity can be maintained while much computation is saved. * By leveraging sparsity, we can make significant strides toward developing high-quality NLP models while simultaneously reducing energy consumption. Consequently, MoE emerges as a robust candidate for future scaling endeavors. \\ \cline{2-3} GLaM & * The model trained on filtered data shows consistently better performances on both NLG and NLU tasks, where the effect of filtering is more significant on the former tasks. * Filtered pretraining corpora plays a crucial role in the generation capability of LLMs, especially for the downstream tasks. * The scaling of GLaM MoE models can be achieved by increasing the size or number of experts in the MoE layer. Given a fixed budget of computation, more experts contribute to better predictions. \\ \cline{2-3} LaMDA & * The model can be fine-tuned to learn to call different external information resources and tools. \\ \cline{2-3} MT-NLG & * None. \\ \cline{2-3} & * For higher effectiveness and efficiency, a transformer model can be asymmetrically constructed with a shallower encoder and a deeper decoder. \\ \cline{2-3} AlphaCode & * To achieve better performances, it is necessary to employ strategies such as massively scaling up sampling, followed by the filtering and clustering of samples into a compact set. * The utilization of novel sampling-efficient transformer architectures designed to facilitate large-scale sampling is crucial. * Simplifying problem descriptions can effectively improve the model’s performance. \\ \hline \end{tabular} \end{table} TABLE I: Noteworthy findings and insights from _pre-trained_ Large Language Model. generate multiple task instruction-tuning datasets. _LLaMA Tuned_ Various models in literature instruction-tune LLaMA [137] with GPT-3 [8] or GPT-4 [138] generated datasets. Among these, Alpaca [139], Vicuna [140], and LLaMA-GPT-4 [141] are a few general-purpose fine-tuned models, where Alpaca is trained on 52k samples from text-davinci-003, Vicuna on 70k samples from ShareGPT.com, and LLaMA-GPT-4 by re-creating Alpaca instructions from GPT-4. 
Gou [142] fine-tunees LLaMA for arithmetic tasks (1 million samples) by generating data from ChatGPT and outperforms GPT-4, PaLM, BLOOM, OPT, etc, attributing its success to the LLaMA's consistent tokenization of numbers. HuaTuo [143] is a medical knowledge model, fine-tuned with a generated QA dataset of 8k instructions. _Complex Instructions_ Evol-Instruct [144], [145] prompts LLMs to convert given instructions into a more complex set. The instructions are iteratively evolved with re-writing instructions in complex wording and creating new instructions. With this style of automated instruction generation, WizardLM [144] (fine-tuned LLaMA on 250k instructions), outperforms Vicuna and Alpaca, and WizardCoder [145] (fine-tuned StarCoder) beats Claude-Plus, Bard, and others. Fig. 11: An example image shows an instance of the Flan training paradigm, taken from [25]. #### Vi-B3 Aligning with Human Preferences Incorporating human preferences into LLMs presents a significant advantage in mitigating undesirable behaviors and ensuring accurate outputs. The initial work on alignment, such as InstructGPT [76] aligns GPT-3 using a 3-step approach, instruction-tuning, reward modeling, and fine-tuning with reinforcement learning (RL). The supervised fine-tuned GPT-3 on demonstrations is queried to generate responses, which human labelers rank according to human values, and a reward model is trained on the ranked data. Lastly, the GPT-3 is trained with proximal policy optimization (PPO) using rewards on the generated data from the reward model. LLaMA 2-Chat [77] improves alignment by dividing reward modeling into helpfulness and safety rewards and using rejection sampling in addition to PPO. The initial four versions of LLaMA 2-Chat are fine-tuned with rejection sampling and then with PPO on top of rejection sampling. _Aligning with Supported Evidence:_ This style of alignment allows the model to generate responses with proofs and facts, reduces hallucination, and assists humans more effectively, which increases trust in the model's output. Similar to the RLHF training style, a reward model is trained to rank generated responses containing web citations in answers to questions, which is later used to train the model, as in GopherCite [146], WebGPT [147], and Sparrow [148]. The ranking model in Sparrow [148] is divided into two branches, preference reward and rule reward, where human annotators adversarial probe the model to break a rule. These two rewards together rank a response to train with RL. _Aligning Directly with SFT:_ The PPO in the RLHF pipeline is complex, memory-intensive, and unstable, requiring multiple models, reward, value, policy, and reference models. Avoiding this sophisticated alignment pipeline is possible by incorporating minimal changes in the supervised fine-tuning (SFT) pipeline as in [149, 150, 151], with better or comparable performance to PPO. Direct preference \begin{table} \begin{tabular}{l c} \hline \hline **Models** & **Findings \& Insights** \\ \hline \hline \multirow{7}{*}{T0} & \(\bullet\) Multi-task prompting enables zero-shot generalization and outperforms baselines \\ & \(\bullet\) Even a single prompt per dataset task is enough to improve performance \\ \cline{2-3} & \(\bullet\) The answer quality of LLMs can be further improved with human feedback. \\ & \(\bullet\) To aid the model in effectively filtering and utilizing relevant information, human labelers play a crucial role in answering questions regarding the usefulness of the retrieved documents. 
\\ WebGPT & Interacting a fine-tuned language model with a text-based web-browsing environment can improve end-to-end retrieval and synthesis via imitation learning and reinforcement learning. \\ & \(\bullet\) Generating answers with references can make labelers easily judge the factual accuracy of answers. \\ \cline{2-3} & \(\bullet\) Instruction tuning leads to a stronger generalization of unseen tasks \\ \cline{2-3} & \(\bullet\) More tasks improve generalization whereas only increasing task instances does not help \\ Tk-INSTRUCT & \(\bullet\) Supervised trained models are better than generalized models \\ \cline{2-3} & \(\bullet\) Models pre-trained with instructions and examples perform well for different types of inputs \\ \cline{2-3} & \(\bullet\) Instruction tuning enables zero-shot generalization to the tasks never seen before \\ \cline{2-3} & \(\bullet\) Multi-lingual training leads to even better zero-shot generalization for both English and non-English \\ mT0 and BLOOMZ & Training on machine-translated prompts improves performance for held-out tasks with non-English prompts \\ \cline{2-3} & \(\bullet\) English only fine-tuning on multilingual pre-trained language model is enough to generalize to other pre-trained language tasks \\ \cline{2-3} & \(\bullet\) Task size sampling to create a batch with most of the task examples is important for better performance \\ \cline{2-3} & \(\bullet\) Only example proportional sampling is not enough, training datasets/benchmarks should also be proportional for better \\ \cline{2-3} & \(\bullet\) Penalty held-out and partially supervised tasks performance improves by scaling tasks or categories whereas fully supervised \\ \cline{2-3} & \(\bullet\) tasks have no effect \\ \cline{2-3} & \(\bullet\) Including small amounts i.e. 5\% of pretraining data during fine-tuning is effective \\ \cline{2-3} & \(\bullet\) Only 1\% reasoning data improves the performance, adding more deteriorates performance \\ \cline{2-3} & \(\bullet\) Adding dialogue data makes the performance worse \\ \cline{2-3} & \(\bullet\) Finentuing with CoT improves performance on held-out tasks \\ \cline{2-3} & \(\bullet\) Fine-tuning along with CoT data improves reasoning \\ \cline{2-3} & \(\bullet\) CoT tuning improves zero-shot reasoning \\ \cline{2-3} & \(\bullet\) Performance improves with more tasks \\ \cline{2-3} & \(\bullet\) Instruction fine-tuning improves usability which otherwise is challenging for pre-trained models \\ \cline{2-3} & \(\bullet\) Improving the model’s performance with instruction tuning is compute-efficient \\ \cline{2-3} & \(\bullet\) Multitask prompting enables zero-shot generalization abilities in LLM \\ \cline{2-3} & \(\bullet\) The judgments of labelers and the alignments with defined rules can help the model generate better responses. \\ \cline{2-3} & \(\bullet\) Good dialogue goals can be broken down into detailed natural language rules for the agent and the raters. 
\\ \cline{2-3} Sparrow & The combination of reinforcement learning (RL) with reranking yields optimal performance in terms of preference win rates \\ \cline{2-3} & \(\bullet\) Fine-tuning with re-written instruction-tuning data into a complex set improves the performance significantly \\ \cline{2-3} WizardCoder & \(\bullet\) Model learns to write safe responses with fine-tuning on safe demonstrations, while additional RLHF step further improves \\ \cline{2-3} & \(\bullet\) model safety and make it less prone to jailbreak attacks \\ \cline{2-3} LLMA & \(\bullet\) Less high quality data is enough for fine-tuned model generalization \\ \hline \end{tabular} \end{table} TABLE II: Key insights and findings from the study of _instruction-tuned_ Large Language Models. optimization (DPO) [149] trains a model directly on the human-preferred responses to maximize the likelihood of preferred against unpreferred responses, with per-sample importance weight. Reward ranked fine-tuning RAFT [150] fine-tunes the model on ranked responses by the reward model. Preference ranking optimization (PRO) [152] and RRHF [151] penalize the model to rank responses with human preferences and supervised loss. On the other hand, chain-of-hindsight (CoH) [153] provides feedback to the model in language rather than reward, to learn good versus bad responses. _Aligning with Synthetic Feedback:_ Aligning LLMs with human feedback is slow and costly. The literature suggests a semi-automated process to align LLMs by prompting LLMs to generate helpful, honest, and ethical responses to the queries, and fine-tuning using the newly created dataset. Constitutional AI [154] replaces human feedback in RLHF with AI, calling it RL from AI feedback (RLAIF). AlpacaFarm [155] designs prompts to imitate human feedback using LLMs APIs. Opposite to constitutional AI, AlpacaFarm injects noise in feedback to replicate human mistakes. Self-Align [78] prompts the LLM with ICL examples, instructing the LLM about what the response should contain to be considered useful and ethical. The same LLM is later fine-tuned with the new dataset. _Aligning with Prompts:_ LLMs can be steered with prompts to generate desirable responses without training [156, 157]. The self-correction prompting in [157] concatenates instructions and CoT with questions, guiding the model to answer its instruction following strategy to ensure moral safety before the actual answer. This strategy is shown to reduce the harm in generated responses significantly. _Red-Teaming/Jailbreaking/Adversarial Attacks:_ LLMs exhibit harmful behaviors, hallucinations, leaking personal information, and other shortcomings through adversarial probing. The models are susceptible to generating harmful responses even though they are aligned for safety [158, 159]. Red-teaming is a common approach to address illicit outputs, where the LLMs are prompted to generate harmful outputs [159, 160]. The dataset collected through red-teaming is used to fine-tune models for safety. While red-teaming largely relies on human annotators, another work [161] red-team LLMs to find prompts that lead to harmful outputs of other LLMs. _4. Continue Pre-Training:_ Although fine-tuning boosts a model's performance, it leads to catastrophic forgetting of previously learned information. Concatenating fine-tuning data with a few randomly selected pre-training samples in every iteration avoids network forgetting [162, 132]. 
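To make this rehearsal-style mixing concrete, a minimal sketch is given below; the helper name and the sampling scheme are illustrative assumptions rather than the exact procedure of [162, 132].

```python
import random

def build_mixed_batch(finetune_pool, pretrain_pool, batch_size=32, replay_fraction=0.05):
    """Rehearsal-style batching sketch: mix a small share of pre-training samples
    into every fine-tuning batch to reduce catastrophic forgetting.
    Assumes both pools are lists of already-tokenized training examples."""
    n_replay = max(1, int(batch_size * replay_fraction))  # e.g., ~5% replay samples
    batch = random.sample(finetune_pool, batch_size - n_replay)
    batch += random.sample(pretrain_pool, n_replay)
    random.shuffle(batch)
    return batch
```

The replay fraction is a tunable choice; Table II reports that mixing in roughly 5% of pre-training data during fine-tuning is already effective.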
Mixing in a small fraction of pre-training samples is also effective in adapting LLMs for cases where fine-tuning data is small and the original capacity is to be maintained. Prompt-based continued pre-training (PCP) [163] trains the model with text and instructions related to tasks and then finally instruction-tunes the model for downstream tasks.

_5. Sample Efficiency:_ While fine-tuning data is generally many-fold smaller than the pre-training data, it still has to be large enough for acceptable performance [25, 24, 26] and requires proportional computing resources. To study the effects on performance with less data, existing literature [164, 165] finds that models trained on less data can outperform models trained with more data. In [164], 25% of the total downstream data is found enough for state-of-the-art performance. Selecting a coreset-based 0.5% of the total instruction-tuning data improves the model performance by 2% in [165], as compared to tuning on the complete data. Less is more for alignment (LIMA) [166] uses only 1000 carefully created demonstrations to fine-tune the model and achieves performance comparable to GPT-4.

### _Increasing Context Window_

LLMs are trained with limited context windows due to expensive attention and high memory requirements. A model trained on limited sequence lengths fails to generalize to unseen lengths at inference time [167, 168]. Alternatively, LLMs with ALiBi [47] positional encodings can perform zero-shot length extrapolation. However, ALiBi has less expressive power [48] and inferior performance on multiple benchmarks [169], and many LLMs use RoPE positional embeddings that are unable to perform zero-shot extrapolation. A larger context length has benefits such as a better understanding of longer documents, more samples in in-context learning, execution of bigger reasoning processes, etc. Expanding the context length during fine-tuning is slow, inefficient, and computationally expensive [168]. Therefore, researchers employ various context window extrapolation techniques, discussed below.

_Position Interpolation:_ Rather than extrapolating, [168] shows that interpolating position encodings within the pre-trained context window is more effective. The work demonstrates that only 1000 steps of fine-tuning are enough to achieve better results on larger windows without performance loss compared to the original context size. Giraffe [169] uses power scaling in RoPE, and YaRN [170] proposes NTK-aware interpolation.

_Efficient Attention Mechanism:_ Dense global attention is one of the major constraints in training larger-context-window LLMs. Using efficient attention variants, such as local, sparse, and dilated attention, reduces the computation cost significantly. LongT5 [171] proposes transient global attention (TGlobal), applying attention to local and global tokens (windowed token averaging). The model replaces the attention in T5 [11] with TGlobal attention, pre-trains the model on a 4096 sequence length, fine-tunes on larger window sizes, as large as 16k, and improves task performance with longer inputs. This shows the extrapolation ability of TGlobal attention with only fine-tuning. COLT5 [172] uses two branches, one with lightweight and the other with heavyweight attention and feed-forward layers. All tokens are processed by the lightweight branch, and only important tokens are routed to the heavyweight branch. LongNet [173] replaces standard attention with dilated attention, expanding the sequence length to 1 billion tokens.
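As a minimal illustration of the local-attention idea these efficient variants build on, the sketch below restricts each query to a fixed-width causal window of keys, reducing the cost from quadratic to roughly linear in sequence length; it is a didactic NumPy sketch, not the actual TGlobal, COLT5, or dilated-attention implementation.

```python
import numpy as np

def sliding_window_attention(q, k, v, window=128):
    """Local (sliding-window) attention sketch: each query attends only to keys
    within the previous `window` positions, so cost grows as O(n * window)
    instead of O(n^2). q, k, v have shape (n, d)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo = max(0, i - window)                      # causal, fixed-width neighborhood
        scores = q[i] @ k[lo:i + 1].T / np.sqrt(d)   # scaled dot-product scores
        weights = np.exp(scores - scores.max())      # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:i + 1]
    return out
```

Sparse and dilated variants follow the same principle but choose the attended positions differently (e.g., strided or exponentially spaced), trading a wider receptive field for the same reduced cost.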
LongLoRA [174] proposes shift-short attention, used during fine-tuning to reduce dense attention costs, while the model during inference can use dense attention and achieve similar performance as full attention fine-tuning. _Extrapolation without Training:_ LM-Infinite [167] and parallel context windows (PCW) [175] show length extrapolation is possible using pre-trained LLMs. LM-Infinite suggested \(\Lambda\)-shaped attention applied within the original context window limits. Likewise, PCW chunks larger inputs into the pre-trained context lengths and applies the same positional encodings to each chunk. ### _Robotics_ LLMs have been rapidly adopted across various domains in the scientific community due to their multipurpose capabilities [33]. In robotics research, the LLMs have very promising applications as well, such as enhancing human-robot interaction [176, 177, 178, 179], task planning [180, 181, 182], navigation [183, 184], and learning [185, 186]. They can enable robots to understand and generate natural language, aiding in instruction following, data annotation, and collaborative problem-solving. They can facilitate continuous learning by allowing robots to access and integrate information from a wide range of sources. This can help robots acquire new skills, adapt to changes, and refine their performance based on real-time data. LLMs have also started assisting in simulating environments for testing and offer potential for innovative research in robotics, despite challenges like bias mitigation and integration complexity. The work in [187] focuses on personalizing robot household cleanup tasks. By combining language-based planning and perception with LLMs, such that having users provide object placement examples, which the LLM summarizes to generate generalized preferences, they show that robots can generalize user preferences from a few examples. An embodied LLM is introduced in [188], which employs a Transformer-based language model where sensor inputs are embedded alongside language tokens, enabling joint processing to enhance decision-making in real-world scenarios. The model is trained end-to-end for various embodied tasks, achieving positive transfer from diverse training across language and vision domains. LLMs have also been explored as zero-shot human models for enhancing human-robot interaction. The study in [176] demonstrates that LLMs, trained on vast text data, can serve as effective human models for certain HRI tasks, achieving predictive performance comparable to specialized machine-learning models. However, limitations were identified, such as sensitivity to prompts and difficulties with spatial/numerical reasoning. In another study [189], the authors enable LLMs to reason over sources of natural language feedback, forming an "inner monologue" that enhances their ability to process and plan actions in robotic control scenarios. They combine LLMs with various forms of textual feedback, allowing the LLMs to incorporate conclusions into their decision-making process for improving the execution of user instructions in different domains, including simulated and real-world robotic tasks involving tabletop rearrangement and mobile manipulation. All of these studies employ LLMs as the core mechanism for assimilating everyday intuitive knowledge into the functionality of robotic systems. _Planning:_ LLMs are increasingly integral in robotics, particularly for strategic planning [180, 190, 191]. 
Their proficiency in processing and generating natural language is crucial for enhancing human-robot interaction and enabling robots to understand and execute complex tasks based on verbal instructions. LLMs also play a key role in task planning, a higher-level cognitive process involving the determination of sequential actions needed to achieve specific goals. This proficiency is crucial across a spectrum of applications, from autonomous manufacturing processes to household chores, where the ability to understand and execute multi-step instructions is of paramount significance. _Manipulation:_ In the area of manipulation [192, 193, 194, 195], LLMs enhance a robot's dexterity and adaptability, excelling in tasks like object recognition, grasping, and collaboration. They analyze visual and spatial information to determine the most effective approach to interact with objects, proving invaluable in operations requiring precision and flexibility, such as surgical procedures or assembly line tasks. They also enable the integration of sensor inputs and linguistic cues in an embodied framework, enhancing decision-making in real-world scenarios. It enhances the model's performance across various embodied tasks by allowing it to gather insights and generalize from diverse training data spanning language and vision domains. _Navigation:_ LLMs have revolutionized the navigation in robotics [196, 197, 198, 199], offering significant potential to enhance a robot's ability to navigate complex environments with precision and adaptability. Motion planning [183], in particular, stands out as a critical domain where LLMs have shown remarkable promise, excelling in generating feasible paths and trajectories for robots, accounting for intricate environmental details. This ability proves particularly valuable in scenarios requiring precise and dynamically adaptable navigation, as observed in environments like warehouses, transport and healthcare facilities, and smart residences. LLMs have also played a key role in localization and mapping, which are foundational components for successful robot navigation. They empower robots to determine their precise position within an environment while concurrently constructing or updating a spatial representation of their surroundings. This capability is crucial for tasks demanding spatial awareness, including autonomous exploration, search and rescue missions, and the operations of mobile robots. They have also contributed significantly to the proficiency of collision-free navigation within the environment while accounting for obstacles and dynamic alterations, playing an important role in scenarios where robots are tasked with traversing predefined paths with accuracy and reliability, as seen in the operations of automated guided vehicles (AGVs) and delivery robots (e.g., SADRs - pedestrian sized robots that deliver items to customers without the involvement of a delivery person). ### _Multimodal LLMs_ Inspired by the success of LLMs in natural language processing applications, an increasing number of research works are now facilitating LLMs to perceive different modalities of information like image [200, 201, 202], video [203, 204, 205], audio [206, 205, 207], _etc_. Multimodal LLMs present substantial benefits compared to standard LLMs that process only text. By incorporating information from various modalities, MLLMs can achieve a deeper understanding of context, leading to more intelligent responses infused with a variety of expressions. 
Importantly, MLLMs align closely with human perceptual experiences, leveraging the synergistic nature of our multisensory inputs to form a comprehensive understanding of the world [207, 188]. Coupled with a user-friendly interface, MLLMs can offer intuitive, flexible, and adaptable interactions, allowing users to engage with intelligent assistants through a spectrum of input methods. According to the ways of constructing models, current MLLMs can be generally divided into three streams: pre-training, fine-tuning, and prompting. In this section, we will discuss more details of these main streams, as well as the important application of MLLMs in visual reasoning. _Pre-training:_ This stream of MLLMs intends to support different modalities using unified end-to-end models. For instance, Flamingo [200] applies gated cross-attention to fuse vision and language modalities, which are collected from pre-trained and frozen visual encoder and LLM, respectively. Moreover, BLIP-2 [201] proposes a two-stage strategy to pre-train a Querying Transformer (Q-Former) for the alignment between vision and language modalities: in the first stage, vision-language representation learning is bootstrapped from a frozen visual encoder; and in the second stage, a frozen LLM bootstraps vision-to-language generative learning for zero-shot image-to-text generation. Similarly, MiniGPT-4 [208] also deploys pre-trained and frozen ViT [209], Q-Former and Vicuna LLM [140], while only a linear projection layer needs to be trained for vision and language modalities alignment. _Fine-tuning:_ Derived from instruction tuning [25] for NLP tasks [76, 25, 24], researchers are now fine-tuning pre-trained LLMs using multimodal instructions. Following this method, LLMs can be easily and effectively extended as multimodal chatbots [208, 202, 210] and multimodal task solvers [211, 212, 213]. The key issue of this stream of MLLMs is to collect multimodal instruction-following data for fine-tuning [214]. To address this issue, the solutions of benchmark adaptation [211, 215, 216], self-instruction [135, 217, 218], and hybrid composition [219, 213] are employed, respectively. To mitigate the gap between the original language modality and additional modalities, the learnable interface is introduced to connect different modalities from frozen pre-trained models. Particularly, the learnable interface is expected to work in a parameter-efficient tuning manner: _e.g._, LLaMA-Adapter [220] applies an efficient transformer-based adapter module for training, and LaVIN [219] dynamically learns the multimodal feature weights using a mixture-of-modality adapter. Different from the learnable interface, the expert models can directly convert multimodalities into language: _e.g._, VideoChat-Text [203] incorporates Whisper [221], a speech recognition expert model, to generate the captions of given videos for the understanding of following LLMs. _Prompting:_ Different from the fine-tuning technique that directly updates the model parameters given task-specific datasets, the prompting technique provides certain context, examples, or instructions to the model, fulfilling specialized tasks without changing the model parameters. Since prompting can significantly reduce the need of large-scale multimodal data, this technique is widely used to construct MLLMs. Particularly, to solve multimodal Chain of Thought (CoT) problems [88], LLMs are prompted to generate both the reasoning process and the answer given multimodal inputs [222]. 
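A minimal sketch of this rationale-then-answer prompting pattern follows; the `generate` callable is a stand-in for any text-generation API, and representing the image by a caption is an illustrative simplification rather than the fused vision-language inputs used in the cited works.

```python
def multimodal_cot(question, image_caption, generate):
    """Two-stage multimodal CoT prompting sketch: first elicit a rationale,
    then answer conditioned on the original input plus that rationale.
    `generate` is any callable mapping a prompt string to generated text."""
    context = f"Question: {question}\nImage description: {image_caption}"
    # Stage 1: rationale generation from the multimodal context.
    rationale = generate(context + "\nExplain the reasoning step by step:")
    # Stage 2: answer inference from the original input plus the stage-1 output.
    answer = generate(context + f"\nReasoning: {rationale}\nTherefore, the answer is:")
    return rationale, answer
```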
On this front, different learning paradigms are exploited in practice: for example, Multimodal-CoT [222] involves two stages of rationale generation and answer inference, where the input of the second stage is a combination of the original input and the output of the first stage; and CoT-PT [223] applies both prompt tuning and specific visual bias to generate a chain of reasoning implicitly. In addition to CoT problems, LLMs can also be prompted with multimodal descriptions and tools, effectively dividing complex tasks into sub-tasks [224, 225]. _Visual Reasoning Application:_ Recent visual reasoning systems [226, 227, 228, 229] tend to apply LLMs for better visual information analysis and visual-language integration. Different from previous works [230, 231] that rely on limited VQA datasets and small-scale neural networks, current LLM-aided methods offer benefits of stronger generalization ability, emergent ability, and interactivity [214]. To realize visual reasoning with the help of LLMs, prompting and fine-tuning techniques can also be utilized: for example, PointClip V2 [227] applies LLMs to generate 3D-specific prompts, which are encoded as textual features and then combined with visual features for 3D recognition; and GPT4Tools [217] employs LoRA [232] to fine-tune LLMs following tool-related instructions. Serving as a controller [229], decision maker [233], or semantics refiner [226, 234], LLMs significantly facilitates the progress of visual reasoning research. ## IV Findings & Insights Training a billion-scale model is difficult as compared to a smaller model. LLMs are prone to various instabilities during training, such as hardware failure and instability. Other than this, LLMs exhibit different behaviors such as emergent abilities, improved zero-shot, few-shot, and reasoning abilities. Researchers report these essential details in their papers for results reproduction and field progress. We identify critical information in Table I and II such as architecture, training strategies, and pipelines that improve LLMs' performance or other abilities acquired because of changes mentioned in section III. ## V Model Configurations We provide different statistics of pre-trained and instruction-tuned models in this section. This includes information such as publication venue, license type, model creators, steps trained, parallelism, etc in Table III and Table IV. Architecture details of pre-trained LLMs are available in Table V. Providing these details for instruction-tuned models is unnecessary because it fine-tunes pre-trained models for instruction datasets. Hence, architectural details are the same as the baselines. Moreover, optimization settings for various LLMs are available in Table VI and Table VII. We do not include details on precision, warmup, and weight decay in Table VII. Neither of these details are important as others to mention for instruction-tuned models nor provided by the papers. ## VI Datasets and Evaluation Generating training and evaluation datasets is expensive because of the large-scale data demand of LLMs. Hence, datasets for training and benchmarking these models are topics of key importance. In Fig. 12, we show the distribution of the existing datasets for various NLP tasks. We restrict our distribution to only the most important tasks in the literature by including tasks with at least 20 datasets. LLMs can directly benefit from these datasets for training and evaluation. A summary of the training and evaluation datasets commonly used by LLMs is provided next. 
### _Training Datasets_

The performance of LLMs largely depends on the training data's quality, size, and diversity. Preparing training datasets of high quality at a large scale is laborious. Researchers have suggested various pre-training and fine-tuning datasets to enhance LLMs' capabilities. We summarize these efforts in Table VIII. While numerous training datasets are available in the literature, we cover the most widely used ones in our summary.

### _Evaluation Datasets and Tasks_

The evaluation of LLMs is important in gauging their proficiency and limitations. This process measures the model's ability to comprehend, generate, and interact with human language across a spectrum of tasks. Evaluating a language model (LM) is divided into two broader categories: 1) natural language understanding (NLU) and 2) natural language generation (NLG). It is emphasized that tasks in NLU and NLG are softly categorized and are often used interchangeably in the literature.

[Tables III–VI, summarizing publication details, model configurations, architectures, and optimization settings of the surveyed LLMs, are not recoverable from this copy.]

_Natural Language Understanding:_ This task measures the language understanding capacity of LMs.
It encompasses multiple tasks, including sentiment analysis, text classification, natural language inference (NLI), question answering (QA), commonsense reasoning (CR), mathematical reasoning (MR), reading comprehension (RC), etc.

_Natural Language Generation:_ This task assesses the language generation capabilities of LLMs by understanding the provided input context. It includes tasks such as summarization, sentence completion, machine translation (MT), dialogue generation, etc.

Numerous datasets are proposed for each task, evaluating LLMs against different characteristics. To provide an overview of evaluation datasets, we briefly discuss a few famous datasets within each category and offer a comprehensive list of datasets in Table IX. Moreover, we show a detailed overview of the training datasets and evaluation tasks and benchmarks used by various pre-trained LLMs in Table X and fine-tuned LLMs in Table XI. We also compare the top-performing LLMs in various NLP tasks in Table XII.

_1. Multi-task:_

_1.1 MMLU [242]:_ A benchmark that measures the knowledge acquired by models during pretraining and evaluates models in zero-shot and few-shot settings across 57 subjects, testing both world knowledge and problem-solving ability.

_1.2 SuperGLUE [3]:_ A more challenging and diverse successor to the GLUE [244] benchmark, SuperGLUE includes a variety of language understanding tasks, such as question answering, natural language inference, and coreference resolution. It is designed to provide a rigorous test of language understanding and requires significant progress in areas like sample-efficient learning, transfer, multitasking, and unsupervised or self-supervised learning.

_1.3 BIG-bench [243]:_ BIG-bench (Beyond the Imitation Game Benchmark) is a large-scale benchmark designed to test the abilities of LLMs across a wide range of tasks, including reasoning, creativity, ethics, and understanding of specific domains.

_1.4 GLUE [244]:_ The General Language Understanding Evaluation (GLUE) benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems. It includes a variety of tasks that test a wide range of linguistic phenomena, making it a comprehensive tool for evaluating language understanding in AI.

_2. Language Understanding:_

_2.1 WinoGrande [289]:_ A large-scale dataset inspired by the original Winograd Schema Challenge [292], it tests models on their ability to resolve pronoun ambiguity and encourages the development of models that understand the broad context in natural language text.

_2.2 CoQA [251]:_ A conversational question-answering dataset, CoQA challenges models with questions that rely on conversation history and require free-form text answers. Its diverse content from seven domains makes it a rigorous test for models' ability to handle a wide range of topics and conversational contexts.

_2.3 WiC [252]:_ This dataset assesses a model's ability to discern word meanings based on context, aiding in tasks related to Word Sense Disambiguation.

_2.4 Wikitext103 [253]:_ With over 100 million tokens from Wikipedia's top articles, this dataset is a rich resource for tasks that require understanding long-term dependencies, such as language modeling and translation.

_2.5 PG19 [254]:_ This is a digital library of diverse books from Project Gutenberg. It's specifically designed to facilitate research in unsupervised learning and language modeling, with a special focus on long-form content.
_2.6 C4 [11]:_ A clean, multilingual dataset, C4 offers billions of tokens from web-crawled data. It's a comprehensive resource for training advanced Transformer models on various languages.

_2.7 LCQMC [255]:_ The Large-scale Chinese Question Matching Corpus (LCQMC) is a dataset for evaluating the performance of models in semantic matching tasks. It contains pairs of questions in Chinese and their matching status, making it a valuable resource for research in Chinese language understanding.

_3. Story Cloze and Sentence Completion:_

_3.1 StoryCloze [269]:_ It introduces a new "StoryCloze Test", a commonsense reasoning framework for evaluating story understanding, generation, and script learning. It considers a model's ability to understand and generate coherent and sensible stories.

\begin{table} \begin{tabular}{l r r r r r r r r r r} \hline \hline & \multicolumn{3}{c}{**Sequence**} & \multicolumn{3}{c}{**Optimizers**} & \multicolumn{3}{c}{**Grad**} \\ \cline{2-13} **Models** & \multicolumn{1}{c}{**Batch Size**} & **Length** & **LR** & **Warmup** & **LR\_Decay** & **AdaFactor** & **Adam** & **AdamW** & **Clip** & **Dropout** \\ \hline WebGPT (175B) & BC:512, RM:32 & - & 6e-5 & - & - & & ✓ & & - & - \\ \hline T0 (11B) & 1024 & 1280 & 1e-3 & - & - & ✓ & & - & ✓ \\ \hline Tk-Instruct (11B) & 1024 & - & 1e-5 & - & constant & - & - & - & - \\ \hline OPT-IML (175B) & 128 & 2048 & 5e-5 & \(\times\) & linear & & ✓ & & ✓ & ✓ \\ \hline Flan-U-PaLM (540B) & 32 & - & 1e-3 & - & constant & ✓ & & & - & ✓ \\ Sparrow (70B) & RM: 8+16, R:1+6 & - & 2e-6 & ✓ & cosine decay to 10\% & ✓ & & & ✓ & \(\times\) \\ \hline WizardCoder (15B) & 512 & 2048 & 2e-5 & ✓ & cosine & - & - & - & - & - \\ Alpaca (13B) & 128 & 512 & 1e-5 & ✓ & cosine & - & - & ✓ & ✓ & \(\times\) \\ Vicuna (13B) & 128 & 2048 & 2e-5 & ✓ & cosine & & & ✓ & - & \(\times\) \\ \hline LIMA (65B) & 32 & 2048 & 1e-5 & \(\times\) & linear & & & & ✓ & - & ✓ \\ \hline \hline \end{tabular} \end{table} TABLE VII: Summary of optimization settings used for instruction-tuned LLMs. Values for gradient clipping and dropout are the same as the pre-trained models, while no model uses weight decay for instruction tuning.

_3.2 LAMBADA [270]:_ This dataset evaluates contextual text understanding through a word prediction task. Models must predict the last word of a passage, which is easy for humans when given the whole passage, but not when given only the last sentence.

_4. Physical Knowledge and World Understanding:_

_4.1 PIQA [275]:_ A dataset that probes the physical knowledge of models, aiming to understand how well they are learning about the real world.

_4.2 TriviaQA [276]:_ A dataset that tests models on reading comprehension and open domain question answering (QA) tasks, with a focus on Information Retrieval (IR)-style QA.

_4.3 ARC [277]:_ A larger version of the ARC-Challenge, this dataset contains both easy and challenging grade-school level, multiple-choice science questions. It's a comprehensive test of a model's ability to understand and answer complex questions.

_4.4 ARC-Easy [277]:_ A subset of the ARC dataset, ARC-Easy contains questions that are answered correctly by either a retrieval-based algorithm or a word co-occurrence algorithm.
It's a great starting point for models beginning to \begin{table} \begin{tabular}{|p{85.4pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Dataset** & **Type** & **Size/Samples** & **Task** & **Source** & **Creation** & **Comments** \\ \hline \hline C4 [11] & Pretrain & 806GB & - & Common Cras1 & Automated & A clean, multilingual dataset with billions of tokens \\ mC4 [12] & Pretrain & 38.49TB & - & Common Cras1 & Automated & A multilingual extension of the C4 dataset, mC4 & identifies over 100 languages using dids from 71 monthly web scrapes of Common Crawl. \\ \hline PLE [237] & Pretrain & 825GB & - & \begin{tabular}{c} Common Cras1, PubMed Central, \\ OpenWebText2, ArXiv, GitHub, \\ Books3, and others \\ \end{tabular} & Automated & A massive dataset comprised of 22 constituent sub-datasets \\ \hline ROOTs [238] & Pretrain & 1.61TB & - & 498 Hyperging Face datasets & Automated & 46 natural and 13 programming languages \\ \hline MassiveText [101] & Pretrain & 10.5TB & - & \begin{tabular}{c} MassiveWeb, Books, News, \\ Wikipedia, Github, C4 \\ \end{tabular} & Automated & 99\% of the data is in English \\ \hline Wikipedia [17] & Pretrain & - & - & Wikipedia & Automated & Dump of wikipedia \\ \hline RedPajama [239] & Pretrain & 5TB & - & \begin{tabular}{c} CommonCras1, Cat, Wikipedia, \\ Github, Books, Stack Exchange \\ \end{tabular} & Automated & Open-source replica of LLaMA dataset \\ \hline PushShift\_to Reddit & Pretrain & 21.1GB & - & Reddit & Automated & Submissions and comments on Reddit from 2005 to 2019 \\ \hline BigPython [118] & Pretrain & 5.5TB & Coding & GitHub & Automated & - \\ \hline Pool of Prompt (P35) [22] & Instructions & 12M & 62 & PromptSource & Manual & A Subset of PromptSource, created from 177-datasets including summarization, QA, classification, etc. \\ \hline \hline AP3 [14] & Instructions & 81M & 71 & P3+Multilingual datasets & Manual & Extending P3 to total 46 languages \\ \hline Super-NaturalInstructions (SN) [26] & Instructions & 12.4M & 1616 & Multiple datasets & Manual & Extending P3 with additional multi-lingual datasets, total 46 languages \\ \hline Flan [25] & Instructions & 15M & 1836 & \begin{tabular}{c} MultiIm*T0:SF+NIV2 \\ \end{tabular} & Manual & Total 60 languages \\ \hline OPT-ML [24] & Instructions & 18.1M & 1667 & - & Manual & - \\ \hline Self-Instruct [135] & Instructions & 82k & 175 & - & Automated & Generated 52k instructions with 82k samples from 175 used tasks using GPT3. \\ \hline Alpaca [139] & Instructions & 52k & - & - & Automated & 175 used tasks using GPT3 \\ \hline Vicuna [140] & Instructions & 125k & - & ShareQPT & Automated & Conversations shared by users on ShareQPT using public APIs \\ \hline LLaMA-GPT-4 [141] & Instructions & 52k & - & Alexa & Automated & Recatied Alpaca dataset with GPT-4 in English and Chinese \\ \hline Unnatural Instructions [240] & Instructions & 68k & - & 15-Seeds (SNI) & Automated & * \\ \hline LIMA [166] & Instructions & 1k & - & Multiple datasets & Manual & Carefully created samples to test performance with fine-tuning on less data \\ \hline Anthropic-HH-RLHZ [241] & Alignment & 142k & - & - & Manual & Manual \\ \hline Anthropic-HH-RLHZ-2 [159] & Alignment & 39k & - & - & Manual & \\ \hline \end{tabular} \end{table} TABLE VIII: Details of various well-known pre-training and fine-tuning datasets. Here, alignment means aligning with human preferences. Fig. 12: A distribution of datasets proposed for different NLP tasks. 
We include only the tasks for which at least 20 datasets have already been proposed.

explore advanced question-answering.

_4.5 ARC-Challenge [277]:_ A rigorous question-answering dataset, ARC-Challenge includes complex, grade-school level questions that demand reasoning beyond simple retrieval, testing the true comprehension capabilities of models.

_5. Contextual Language Understanding:_

_5.1 RACE [282]:_ RACE is a reading comprehension dataset collected from English examinations in China, which benchmarks AI models for understanding and answering questions on long and complex passages, simulating the challenge of a real-world examination.

_5.2 RACE-Middle [282]:_ Another subset of the RACE [282] dataset, RACE-Middle contains middle school-level English exam questions. It offers a slightly less challenging but academically oriented evaluation of a model's comprehension skills.

_5.3 RACE-High [282]:_ A subset of the RACE [282] dataset, RACE-High consists of high school-level English exam questions. It is designed to evaluate the comprehension ability of models in a more academic and challenging context.

_5.4 QuAC [283]:_ This dataset simulates an information-seeking dialog between students and teachers using hidden Wikipedia text. It introduces unique challenges not found in machine comprehension datasets, making it a valuable resource for advancing dialog systems.

_6. Commonsense Reasoning:_

_6.1 HellaSwag [290]:_ A dataset that challenges models to pick the best ending to a context, HellaSwag uses Adversarial Filtering to create a 'Goldilocks' zone of complexity, where generated text is absurd to humans but often misclassified by models.

_6.2 COPA [337]:_ This dataset evaluates a model's progress in open-domain commonsense causal reasoning. Each question comprises a premise and two alternatives, and the model must select the more plausible alternative, testing a model's ability to understand and reason about cause and effect.

_6.3 WSC [292]:_ The Winograd Schema Challenge (WSC) is a reading comprehension task in which a system must resolve references in a text, often requiring world knowledge and reasoning about the text.

_6.4 CSQA [293]:_ CommonsenseQA is a question-answering dataset that requires commonsense knowledge to answer, testing the ability of AI models to understand and answer questions that require commonsense reasoning.

_7. Reading Comprehension:_

_7.1 BoolQ [298]:_ A dataset derived from Google search queries, BoolQ challenges models to answer binary (yes/no) questions given a paragraph from a Wikipedia article containing the answer. It's a test of reading comprehension and reasoning.
#### 4.5.3 SquADv2 [299] The Stanford Question Answering Dataset (SQuAD) [297] is a collection of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text from the corresponding \begin{table} \begin{tabular}{l|l} \hline **Type** & **Datasets/Benchmarks** \\ \hline Multi-Task & MMLU [242], SuperGLUE [3], BIG-bench [243], GLUE [244], BBH [243], CUGE [245], ZeroCLUE [246], FevCLUE [247], Blended Skill Talk [248], HELM [249], KLUE-STS [250] \\ \hline Language Understanding & CoQA [251]; WC [252], Wikitext103 [253], PG19 [254], LCQMC [255], QQP [256], WinoGender [257], CB [258], FinRE [259], SanWen [260], APQMC [246], BG Corpus [261], CNSS [262], CKBQA 13 [263], CLUENER [246], Weibo [264], AQuA [265], OntoNotes [266], HeadQA [267], Twitter Dataset [268] \\ \hline Story Cloze and Sentence Completion & Story Cloze [269], LAMBADA [270], LCSTS [271], AdGen [272], EZE [273], CHID [274], CHID-FC [247] \\ \hline Physical Knowledge and World Understanding & PIQA [275], TriviaQA [276], ARC [277], ARC-Easy [277], ARC-Challenge [277], PROST [278], OpenBokQA [279], WebNLG [280], DogWhistle Insider \& Outsider [281] \\ \hline Contextual Language & RACE [282], RACE-Middle [282], RACE-High [282], QuACK [283], StrategyQA [284], Quiz Bowl [285], \\ Understanding & chMedQA [286], CMedQA2 [287], MATIIT-QA [288] \\ \hline Commonsense Reasoning & WinGrande [289], HellasWag [290], COPA [291], WSC [292], CSQA [293], SIQA [294], C\({}^{3}\)[295], CLUEWSC2020 [246], CLUEWSC [246], CLUEWSC-FC [247], ReCoRD [296] \\ \hline Reading Comprehension & SQuAD [297], BoolQ [DS], SQuADv2 [299], DROP [300], RIDE [301], WebQDA [302], CMRC2017 [303], CMRC2018 [304], CMRC2019 [305], CHCHC-BD [306], CHC-PD [306], CUTE-HW [306], MultiRC [307], Natural Questions [308], CNSE [262], DRCD [309], DuReader [310], DuReader-tobate [311], DuReader-QG [310], SciQ [312], Sogou-log [313], DuReader\({}_{\text{robust}}\)-QG [311], QAMMEe [314], KorQuAD 1.0 [315], CAIL2018-Task1 & \& Task2 [316] \\ \hline Mathematical Reasoning & MATH [317], Math2k3 [318], GSM8K [319], MathQA [320], MGSM [321], MultiArith [322], ASDInv [323], MAWPS [324], SVAMP [325] \\ \hline Problem Solving & HumanEval [326], DS-1000 [327], MBPP [282], APPS [317], CodeContests [210] \\ \hline Natural Language Inference \& ANI+I [329], MNI-IL-(130), MNI-IL-mm [330],QNLI [397], WNLI [292], OCNLI [246], CNNLI [246], ANLI \\ \& Logical Reasoning & R1 [329], ANLI R2 [329], ANLI R3 [329], HANS [331], OCNLI-FC [247], LogiQA [332], StrategyQA [284] \\ \hline Cross-Lingual Understanding & MLQA [333], XNLI [34], PAWS-X [335], XSum [336], XCOPA [337], XWinograd [338], TyDiQA-GoldP [339], MLSum [340] \\ \hline Truthfulness and Fact Checking & Truthfulness and Fact Checking & Truthfulness and Fact Checking & Truthfulness and Fact Checking & Truthfulness and Fact Checking & Truthfulness and Fact Checking & Truthfulness and Fact Checking & Truthfulness and Fact Checking & Truthfulness and Fact Checking \\ \hline Language Translation & WMT [351], WMT20 [352], WMT20-en [352], EPRSITMT [247], CCPM [353] \\ \hline Scientific Knowledge & Amino Probe [127], BiolAMA [127], Chemical Reactions [127], Galaxy Clusters [127], Mineral Groups [127] \\ \hline Dialogue & Wizard of Wikipedia [354], Empathetic Dialogues [355], DPC-generated [109] dialogues, ConvAI2 [356], KdConv [357] \\ \hline Topic Classification & TNEWS-FC [247], YNAT [250], KLUE-TC [250], CSL [246], CSL-FC [247], HFLYTEK [358] \\ \hline \end{tabular} \end{table} TABLE IX: 
Categorized evaluation datasets used in evaluating LLMs.

reading passage. SQuADv2 combines the original SQuAD1.1 dataset with over 50,000 unanswerable questions. The aim is to evaluate a model's ability to understand and answer questions based on a given context and to determine when a question is unanswerable.

[Table X, detailing the training datasets and evaluation benchmarks used by various pre-trained LLMs, is not recoverable from this copy.]

_7.3 DROP [300]:_ DROP, or Discrete Reasoning Over the content of Paragraphs, is designed to test a model's ability to understand a wide variety of reading phenomena. It encourages comprehensive and reliable evaluation of reading comprehension capabilities.

_7.4 RTE [301]:_ The Recognizing Textual Entailment (RTE) datasets come from a series of annual competitions on textual entailment, predicting whether a given sentence logically follows from another and evaluating a model's understanding of logical relationships in a text.

_7.5 WebQA [302]:_ A dataset for open-domain question answering, WebQA offers a large collection of web-based question-answer pairs. It is designed to assess the ability of AI models to understand and answer questions based on web content.

_7.6 CMRC2018 [304]:_ This dataset is a test of Chinese language models' ability to reason comprehensively and is designed with a challenging span-extraction format that pushes the boundaries of machine performance.

_8. Mathematical Reasoning:_

_8.1 MATH [317]:_ This dataset is a platform for evaluating the mathematical problem-solving abilities of AI models. It contains a diverse set of math problems, ranging from arithmetic to calculus, and is designed to test the model's ability to understand and solve complex mathematical problems.

_8.2 Math23k [318]:_ This one challenges a model's ability to understand and solve mathematical word problems. It contains 23,000 Chinese arithmetic word problems that require models to perform reasoning and computation based on the problem description.

_8.3 GSM8K [319]:_ A dataset of diverse grade school math word problems, testing a model's ability to perform multi-step mathematical reasoning.

_9. Problem Solving and Logical Reasoning:_

_9.1 ANLI [329]:_ A large-scale dataset designed to test the robustness of machine learning models in Natural Language Inference (NLI), ANLI is created through an iterative, adversarial process where humans try to generate examples that models cannot correctly classify.

_9.2 HumanEval [326]:_ A dataset for evaluating the problem-solving ability of AI models, HumanEval includes a diverse set of tasks that require various cognitive abilities, making it a comprehensive tool for assessing general intelligence in AI.

_9.3 StrategyQA [284]:_ A question-answering dataset that requires reasoning over multiple pieces of evidence to evaluate the strategic reasoning ability of AI models, pushing the boundaries of what machines can understand and answer.

_10. Cross-Lingual Understanding:_

_10.1 XNLI [334]:_ A cross-lingual benchmark, XNLI extends the MultiNLI [366] corpus to 15 languages, including low-resource ones like Urdu. It tests models on cross-lingual sentence understanding, with 112,500 annotated pairs across three categories: entailment, contradiction, and neutral.

_10.2 PAWS-X [335]:_ PAWS-X, or Cross-lingual Paraphrase Adversaries from Word Scrambling, is a multilingual version of the PAWS [367] dataset for paraphrase identification.
It includes examples in seven languages and is designed to evaluate the performance of cross-lingual paraphrase identification models.

_11. Truthfulness:_

_11.1 TruthfulQA:_ A unique benchmark that measures a language model's truthfulness when generating answers. The dataset includes questions across various categories like health, law, and politics, some designed to test the model against common human misconceptions.

_12. Biases and Ethics in AI:_

_12.1 ETHOS [344]:_ ETHOS is a hate speech detection dataset built from YouTube and Reddit comments. It's a tool in the fight against online hate speech, offering binary and multi-label variants for robust content moderation.

_12.2 StereoSet [345]:_ StereoSet is a comprehensive dataset designed to measure and evaluate the presence of stereotypical biases in language models. It focuses on four key domains: gender, profession, race, and religion. Contrasting stereotypical bias against language modeling ability provides a valuable tool for understanding and mitigating biases in large language models.

[Table XI, detailing the training datasets and evaluation benchmarks used by instruction-tuned LLMs, is not recoverable from this copy.]

## VII Summary and Discussion

### _Architecture_

Due to the gigantic scale of LLMs, minor changes in architecture and training strategies have a big impact
So, there is no conclusion in literature about the positional encodings yet. _Parallel Attention_ where attention and feed-forward layers are parallel to each other rather than sequential in transformer block has shown to reduce training time by 15%. There is no evidence of performance drop due to this change in literature and used by the models PaLM [14], GPT-NeoX [103], and CodeGen [118]. _Multi-Query Attention_ has shared key and value attention heads in a transformer block while query attention heads are projected as usual. This reduces memory usage and speeds up sampling in autoregressive decoding. No performance degradation has been observed with this change and makes the training efficient allowing larger batch sizes. Multi-query attention is used in [14, 120]. _Mixture of Experts_ allows easily scaling model to trillion of parameters [117, 106]. Only a few experts are activated during the computation making them compute-efficient. The \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline **Task** & **Dataset/Benchmark** & **Model** & **Model Size** & **N-Shots** & **Score** \\ \hline \hline \multirow{3}{*}{Multi-Task} & \multirow{3}{*}{BIG-bench (B)} & Chinchilla & 70B & 5-shot & 65.1 \\ \cline{3-5} & & Gopher & 280B & 5-shot & 53.97 \\ \cline{3-5} & & PaLM & 540B & 5-shot & 53.7 \\ \cline{3-5} & \multirow{3}{*}{MMLU (B)} & GPT-4 & - & 5-shot & 86.4 \\ \cline{3-5} & & Flan-PaLM-2\({}_{(f)}\) & Large & 5-shot & 81.2 \\ \cline{3-5} & & PalM-2 & Large & 5-shot & 78.3 \\ \hline \multirow{2}{*}{Language Understanding} & \multirow{3}{*}{SuperGLUE (B)} & ERNIE 3.0 & 12B & - & 90.6 \\ \cline{3-5} & & PaLM\({}_{(f)}\) & 540B & - & 90.4 \\ \cline{3-5} & & T5 & 11B & - & 88.9 \\ \hline \multirow{3}{*}{Story Comprehension and Generation} & \multirow{3}{*}{HellaSwag} & GPT-4 & - & 10-shot & 95.3 \\ \cline{3-5} & & PaLM-2 & Large & one shot & 86.8 \\ \cline{3-5} & & LLaMA-2 & 70B & zero shot & 85.3 \\ \cline{3-5} & \multirow{3}{*}{StoryCloze} & GPT3 & 175B & few shot & 87.7 \\ \cline{3-5} & & PalM-2 & Large & one shot & 87.4 \\ \cline{3-5} & & OPT & 175B & - & 79.82 \\ \hline \multirow{3}{*}{Physical Knowledge and World Understanding} & \multirow{3}{*}{PIQA} & PaLM-2 & Large & one shot & 85.0 \\ \cline{3-5} & & LLaMa & 65B & zero shot & 82.8 \\ \cline{3-5} & & MT-NLG & 530B & zero shot & 81.99 \\ \cline{3-5} & \multirow{3}{*}{TriviaQA} & PaLM-2 & Large & one shot & 86.1 \\ \cline{3-5} & & LLaMA-2 & 70B & one shot & 85.0 \\ \cline{3-5} & & PaLM & 540B & one shot & 81.4 \\ \hline Contextual Language Understanding & \multirow{3}{*}{LAMBADA} & PaLM & 540B & few shot & 89.7 \\ \cline{3-5} & & MT-NLG & 530B & few shot & 87.15 \\ \cline{3-5} & & PaLM-2 & Large & one shot & 86.9 \\ \hline \multirow{3}{*}{Commonsense Reasoning} & \multirow{3}{*}{WinoGrande} & GPT-4 & - & 5-shot & 87.5 \\ \cline{3-5} & & PaLM-2 & Large & one shot & 83.0 \\ \cline{3-5} & & PaLM & 540B & zero shot & 81.1 \\ \cline{3-5} & & LaLaMA & 65B & zero shot & 52.3 \\ \cline{3-5} & \multirow{3}{*}{SIQA} & Chinchilla & 70B & zero shot & 51.3 \\ \cline{3-5} & & Gopher & 280B & zero shot & 50.6 \\ \hline Reading Comprehension & \multirow{3}{*}{BoolQ} & PaLM\({}_{(f)}\) & 540B & - & 92.2 \\ \cline{3-5} & & T5 & 11B & - & 91.2 \\ \cline{3-5} & & PaLM-2 & Large & one shot & 90.9 \\ \hline Truthfulness & Truthful-QA & LLaMA & 65B & - & 57 \\ \hline \end{tabular} \end{table} TABLE XII: Performance comparison of top performing LLMs across various NLU and NLG tasks. 
The performance of MoE models is better than that of dense models for the same amount of data, and they require less computation during fine-tuning to achieve performance similar to dense models, as discussed in [106]. MoE architectures are less prone to catastrophic forgetting and are therefore better suited for continual learning [117]. Extracting smaller sub-models for downstream tasks is possible without losing any performance, making the MoE architecture hardware-friendly [117]. _Sparse vs Dense Activated_ GPT-3 [8] uses sparse transformers [45], whereas GLaM [106] and PanGu-\(\Sigma\) [117] use the MoE [107] architecture to lower computational costs and increase the model size and capacity. According to the literature, sparse modules do not degrade the model's performance [45]. However, more experiments are required to verify this statement. ### _Training Strategies_ Training models at a huge scale requires some tricks to reduce training costs, avoid loss divergence, and achieve better performance. We summarize and discuss some of these key tricks used in different LLMs. _Mixed Precision_ is a popular method for LLMs to reduce memory usage and improve training efficiency. In mixed precision, forward and backward passes are performed in FP16 format, whereas optimizer states and master weights are kept in FP32 format [368]. A drawback associated with this format change is training instability due to a smaller value range, resulting in loss spikes [112]. An alternative to FP16 is BF16, which has a comparatively larger range and performs some precision-sensitive operations like gradient accumulation and softmax in FP32 [9]. BF16 has better performance and training stability but uses more memory and is supported only on specific hardware, for example, A100 GPUs. Therefore, its adoption in LLMs is limited. _Training Instability_ is a common issue in LLMs, where loss divergence or spiking is observed multiple times during training, even in the presence of gradient clipping [14]. To mitigate this problem, many approaches suggest restarting training from an earlier checkpoint [14, 112, 106], skipping 200-500 earlier data batches at the point of divergence [14], and re-shuffling batches [106]. Shrinking the gradient of the embedding layer proves to further stabilize training, as its gradient norm is significantly larger than that of the other layers [112]. Another suggestion to improve training stability for larger models is not to use biases in dense and norm layers, as in [14]. _Weight Initialization_ plays a significant role in model convergence and training stability. GPT-NeoX [103] initializes feed-forward layers before residuals with \(\frac{2}{L\sqrt{d}}\) as in [133] and other layers with the small initialization scheme [369]. This avoids activations growing exponentially with increasing depth. MT-NLG [21] found that a higher variance for weight initialization leads to unstable training, hence validating the small initialization scheme [369]. Various models perform random weight initialization, which can cause bad initialization; Galactica [127] suggests a longer warmup to negate the effect. _Learning Rate_ is important for stable training. It is suggested to use a lower value [9, 14, 20] with warmup and decay (cosine or linear). Usually, the learning rate is within the range \(1e^{-4}\) to \(8e^{-4}\). Moreover, MT-NLG (530B) [21] and GPT-NeoX (20B) [103] suggest interpolating learning rates based on the model size using the GPT-3 [8] models ranging between 13B and 175B, which avoids tuning the learning-rate hyperparameter.
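To make the interplay of mixed precision, gradient clipping, and a warmup-plus-cosine learning-rate schedule concrete, here is a minimal PyTorch training-step sketch; the model, loss, and hyperparameter values are placeholders, and the snippet is a generic recipe under those assumptions rather than the exact procedure of any LLM discussed above.

```python
import math
import torch
from torch.cuda.amp import autocast, GradScaler

model = torch.nn.Linear(512, 512).cuda()          # placeholder model; assumes a CUDA device
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4)
scaler = GradScaler()                              # keeps an FP32 loss scale for FP16 training

warmup_steps, total_steps = 2000, 100_000

def lr_lambda(step: int) -> float:
    # Linear warmup followed by cosine decay to 10% of the peak learning rate.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.1 + 0.45 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

def training_step(batch, targets):
    optimizer.zero_grad(set_to_none=True)
    with autocast():                               # forward/backward in reduced precision
        loss = torch.nn.functional.mse_loss(model(batch), targets)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)                     # so clipping sees the true gradient magnitudes
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
    return loss.item()

x = torch.randn(32, 512, device="cuda")
print(training_step(x, torch.randn(32, 512, device="cuda")))
```

Variants such as BF16 training or interpolated peak learning rates slot into the same structure by changing the autocast dtype or the optimizer's base learning rate.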
_Training Parallelism_ 3D parallelism, a combination of data, pipeline and tensor parallelism, is the most utilized training parallelism approach in LLMs [112, 14, 10, 9, 21, 100, 97]. In addition to 3D parallelism, BLOOM [9] uses the ZeRO optimizer [61] to shard optimizer states. PanGu-\(\alpha\) [93] and PanGu-\(\Sigma\) [117] go beyond 3D parallelism and apply 5D parallelism, which additionally contains optimizer parallelism and rematerialization. _Mode Switching_ adds task-related tokens at the beginning of the text during training. These tokens refer to natural language understanding and natural language generation tasks, and they are shown to improve downstream task performance in [15, 20, 110]. During fine-tuning and inference, tokens are appended based on the downstream task. _Controllable Text Generation_ Generating credible and controlled text from a pre-trained model is challenging. GPT-3 [8] and other LLMs use in-context learning to control generated text. While in-context learning helps in controlling the generated text, ERNIE 3.0 Titan [102] suggests using an adversarial loss to rank its generated text for credibility, and soft prompts such as genre, topic, keywords, sentiment, and length for better control over the generated text. ### _Pre-Training vs Instruction Tuning_ While pre-training is important for the generalization of LLMs, instruction tuning improves their performance further and makes them usable. Therefore, it is suggested to perform instruction fine-tuning of pre-trained LLMs to use them effectively [25, 26, 76, 24, 147]. ### _Supervised Models vs Generalized Models_ Although generalized models are capable of performing diverse tasks with good performance, they have not yet outperformed models trained in supervised settings. Supervised-trained models are still state-of-the-art in various NLP tasks by a large margin, as shown in [8, 14, 26]. ### _Zero-Shot vs Few-Shot_ LLMs perform well in zero-shot and few-shot settings. But the performance difference between zero-shot and few-shot is large for pre-trained models [8, 14], leading to LLMs being described as meta-learners [8]. LLMs' zero-shot evaluations underperform unsupervised methods in neural machine translation [8]. The literature shows that pre-training alone is not enough for good zero-shot performance [14, 25]. To improve zero-shot performance, the literature suggests instruction fine-tuning, which improves it significantly and outperforms baselines. Instruction fine-tuning has also been shown to improve zero-shot generalization to unseen tasks. Another model, Flan-PaLM [25], unlocks zero-shot reasoning with CoT training.
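As a small illustration of the zero-shot/few-shot distinction used above and in Table XII, the sketch below assembles an N-shot prompt from labeled exemplars; the task, exemplars, and template are invented purely for illustration.

```python
def build_prompt(task_instruction, exemplars, query, n_shots):
    """Assemble a prompt with n_shots worked examples followed by the query."""
    parts = [task_instruction]
    for question, answer in exemplars[:n_shots]:   # n_shots = 0 yields a zero-shot prompt
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)

exemplars = [("Is the sky blue on a clear day?", "yes"),
             ("Do fish breathe air with lungs?", "no")]
print(build_prompt("Answer with yes or no.", exemplars, "Is ice colder than steam?", n_shots=2))
```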
### _Encoder vs Decoder vs Encoder-Decoder_ Traditionally, these architectures perform well for different tasks, for example, encoder-only for NLU tasks, decoder-only for NLG, and encoder-decoder for sequence-to-sequence modeling. Encoder-only models are popular for smaller models such as BERT [5], RoBERTa [359], etc., whereas LLMs are either decoder-only [8, 103, 9] or encoder-decoder [11, 12, 110]. While decoder-only models are good at NLG tasks, various LLMs, such as PaLM [14], OPT [10], GPT-3 [8], BLOOM [9], and LLaMA [137], are decoder-only models with significant performance gains on both NLU and NLG tasks. In contrast, T5 [11] and UL2 [15] find encoder-decoder models outperforming decoder-only models. In another study, PaLM [14] finds that increasing the size of decoder-only models can reduce the performance gap between decoder-only and encoder-decoder architectures. Although decoder-only architectures have become the trend for LLMs, many recently proposed approaches [15, 110] use mode-switching tokens in text with encoder-decoder architectures to enable task-specific modes. Similarly, CodeT5+ [124] uses an encoder-decoder architecture with multiple training objectives for different tasks, activating the encoder, decoder, or both according to the task. These variations in architecture and training objectives allow a model to perform well in different settings. Because of this dynamic configuration, the future of LLMs may well lie with encoder-decoder architectures. ## VIII Conclusion This paper has reviewed various LLMs, discussing the pros and cons of multiple models. Our review summarized significant findings and provided a detailed analysis of the design aspects of each LLM, including architecture, datasets, and training pipelines. We have identified crucial architectural components and training strategies employed by different LLMs and presented a summary and discussion. Moreover, we have compared the performance of LLMs in zero-shot and few-shot settings, explored the impact of fine-tuning, and compared supervised vs generalized models and encoder vs decoder vs encoder-decoder architectures. This paper will serve as a valuable resource for researchers, offering insights into the recent advancements in LLMs and providing fundamental concepts and details to develop improved LLMs. ## IX Versioning We keep track of the versions of this paper we release as the content updates. _Version 1.0:_ We covered 30 pre-trained models and 6 instruction-tuned models, including their overview, findings, training, and evaluation datasets, and discussed important architectural and training tricks used by various LLMs. _Version 2.0:_ Added further pre-trained LLMs, along with a discussion on self-instruct LLMs. Categorized LLMs according to application, provided descriptions of widely used evaluation datasets, added a section on robotics, and extended the discussion in Section VII. Tables have been updated. _Version 3.0:_ Added sections on alignment tuning and multimodal LLMs, and a performance comparison table on various benchmarks and datasets. Added LLaMA-2 and PaLM-2. _Version 4.0:_ Added tables on training and evaluation datasets, a subsection on increasing the context window, and minor improvements. **Note:** If you find any mistakes, or have issues or conflicts with the writing in this paper, please email us. We welcome suggestions to improve this paper.
2307.09706
RaTE: a Reproducible automatic Taxonomy Evaluation by Filling the Gap
Taxonomies are an essential knowledge representation, yet most studies on automatic taxonomy construction (ATC) resort to manual evaluation to score proposed algorithms. We argue that automatic taxonomy evaluation (ATE) is just as important as taxonomy construction. We propose RaTE, an automatic label-free taxonomy scoring procedure, which relies on a large pre-trained language model. We apply our evaluation procedure to three state-of-the-art ATC algorithms with which we built seven taxonomies from the Yelp domain, and show that 1) RaTE correlates well with human judgments and 2) artificially degrading a taxonomy leads to decreasing RaTE score.
Tianjian Gao, Phillipe Langlais
2023-07-19T01:37:31Z
http://arxiv.org/abs/2307.09706v1
# RaTE: a Reproducible automatic Taxonomy Evaluation by Filling the Gap ###### Abstract Taxonomies are an essential knowledge representation, yet most studies on automatic taxonomy construction (ATC) resort to manual evaluation to score proposed algorithms. We argue that automatic taxonomy evaluation (ATE) is just as important as taxonomy construction. We propose RaTE,1 an automatic label-free taxonomy scoring procedure, which relies on a large pre-trained language model. We apply our evaluation procedure to three state-of-the-art ATC algorithms with which we built seven taxonomies from the Yelp domain, and show that 1) RaTE correlates well with human judgments and 2) artificially degrading a taxonomy leads to decreasing RaTE score. Footnote 1: Our code repository is available at [https://github.com/CestLucas/RaTE](https://github.com/CestLucas/RaTE). ## 1 Introduction A domain taxonomy is a tree-like structure that not only aids in knowledge organization but also serves as an integral part of many knowledge-rich applications, including web search, recommendation systems and decision making processes. Taxonomies are also inevitably used as business and product catalogs and for managing online sales. Notable taxonomy products in this domain include Amazon Category Taxonomy,2 Google Product Taxonomy,3 Yelp Business Category4 and Google Content Categories.5 Footnote 2: [https://www.data4amazon.com/amazon-product-t](https://www.data4amazon.com/amazon-product-t) Footnote 3: [https://support.google.com/merchants/answer/6324436?hl=en](https://support.google.com/merchants/answer/6324436?hl=en) Footnote 4: [https://blog.yelp.com/businesses/yelp_category_list/](https://blog.yelp.com/businesses/yelp_category_list/) Footnote 5: [https://cloud.google.com/natural-language/docs/categories?hl=fr](https://cloud.google.com/natural-language/docs/categories?hl=fr) Recent years have witnessed interest in new automatic taxonomy construction (ATC) systems, but there are no systematic methods for objectively evaluating their figure of merit. For instance, TaxoGen (Zhang et al., 2018) -- see Section 3 -- was evaluated by asking at least three human evaluators whether a taxonomy concept pair contains a hypernymy relationship, which can lead to bias and low reproducibility. It is not only difficult to compare or rank different algorithms, but changing the hyper-parameters or settings of a parameterized ATC system can also result in drastically different outputs, which makes optimization unfeasible. Because ontologies, and taxonomies in particular, are typically created in contexts to address specific problems or achieve specific goals, e.g. classification, their evaluation is evidently context-dependent, and many researchers actually believe that a task-independent automatic evaluation remains elusive (Porzel and Malaka, 2004). Still, researchers have argued that objective evaluation metrics must be made available for significant progress in the development and deployment of taxonomies and ontologies (Brewster et al., 2004). In this work, we propose RaTE, a Reproducible procedure for Automatic Taxonomy Evaluation. RaTE does not require external knowledge but instead depends on masked language modelling (MLM) to query a large language model for subsumption relations. We show that, with some care, MLM is a valuable proxy for human judgments. We apply RaTE to the Yelp corpus (a corpus of restaurant reviews), ranking seven taxonomies we extracted using three state-of-the-art ATC systems.
We observe that it correlates well with our manual evaluation of those taxonomies, and we also show that artificially degrading a taxonomy leads to a decrease in score proportional to the level of noise injected. In the remainder, we discuss related work in Section 2. In Section 3, we describe the ATC systems we used for building our taxonomies, and their evaluation procedures. We then present RaTE in Section 4, including refinements that we found necessary for our approach to work. We report in Section 5 the experiments we conducted to demonstrate the relevance of RaTE, and conclude in Section 6. ## 2 Related Works Systematic methods of evaluating ontologies and taxonomies are lacking. Because agreed-upon quantitative metrics are lacking, research on taxonomy and ontology construction relies heavily on qualitative descriptions and the various perspectives of ontology engineers, system users or domain experts, which renders the results subjective and unreproducible Gomez-Perez (1999); Guarino (1998). Brank et al. (2005) summarized four principal ontology evaluation methods: (1) comparing the target ontology to a "gold standard" (ground-truth) ontology Maedche and Staab (2002); (2) using the target ontology in an application and evaluating the application results ("application based") Porzel and Malaka (2004); (3) conducting coverage analysis comparing the target with a source of data (e.g., a collection of documents) about a specific domain ("data driven") Brewster et al. (2004); (4) relying on manual reviews done by human experts that assess how well the target ontology meets a set of predefined criteria, standards, and requirements Lozano-Tello and Gomez-Perez (2004). Gold Standard Evaluation focuses on comparing and measuring the similarity of the target taxonomy with an existing ground truth such as WordNet Fellbaum (1998), Wikidata and ResearchCyc Ponzetto and Strube (2011). Semantic similarity metrics have been proposed, including Wu-Palmer Wu and Palmer (1994), Leacock-Chodorow Leacock and Chodorow (1998) and Lin Lin et al. (1998). We include in this category specific measures such as _topic coherence_ Newman et al. (2010), which scores the quality of a word cluster and relies on similarity measures. There are several issues with such a process: mapping concepts from the output system to the ground truth is not trivial, and gold standards do not necessarily cover the domains of interest well. Application-based Evaluation is an attractive alternative to gold-standard evaluation. Porzel and Malaka (2004), for instance, proposed several possible applications for evaluation, including concept-pair relation classification. Brank et al. (2005) underline, however, that it is in fact hard to correlate ontology quality with application performance. Data-driven Evaluation intends to select the ontology \(O\) with the best structural _fit_ to a target corpus \(C\), which boils down to estimating \(P(C|O)\), as in Brewster et al. (2004). Practically, however, it remains unclear how to approximate such a conditional probability. ## 3 Automatic Taxonomy Extractors In this work, we replicated the results of three state-of-the-art ATC systems that are publicly available and that produce quality results on selected datasets and domains. In this section, we describe those systems and discuss their corresponding evaluation methods. ### TaxoGen TaxoGen Zhang et al. (2018) is an adaptive text embedding and clustering algorithm leveraging various phrase-mining and clustering techniques including AutoPhrase Shang et al.
(2018), CaseOLAP Liem et al. (2018) and spherical k-means clustering Banerjee et al. (2005). TaxoGen iteratively refines selected keywords and chooses cluster representative terms based on two criteria: _popularity_, which favors terms that are frequent in a cluster, and _concentration_, which assumes that representative terms should be more relevant to their own clusters than to their sibling clusters. The system can be configured with several hyperparameters, including the depth of the taxonomy, the number of children per parent term and the "representativeness" threshold. Experiments were conducted on the DBLP and SP (Signal Processing) datasets, and the system is quantitatively evaluated with relation accuracy and term coherency measures assessed by human evaluators (10 doctoral students). ### CoRel CoRel Huang et al. (2020) takes advantage of novel relation transferring and concept learning techniques and uses hypernym-hyponym pairs provided in a seeded taxonomy to train a BERT Devlin et al. (2018) relation classifier and expand the seeded taxonomy horizontally (width expansion) and vertically (depth expansion). Topical clusters are generated using pre-computed BERT embeddings, and a discriminative embedding space is learned so that each concept is surrounded by its representative terms. The clustering algorithms used by CoRel are _spectral co-clustering_ [14] and _affinity propagation_ [15], the latter of which automatically computes the optimal number of topic clusters. Compared to TaxoGen, CoRel does not require depth and cluster number specifications, but it does require a small seed taxonomy as input to enable a weakly-supervised relation classifier. CoRel is quantitatively evaluated with term coherency, relation F1 and sibling distinctiveness, judged by 5 computer science students on subsets of the DBLP and Yelp datasets. The system generates outputs in the form of large hierarchical topic word clusters. ### HiExpan HiExpan [2] is a hierarchical tree expansion framework that aims to dynamically expand a seeded taxonomy horizontally (width expansion) and vertically (depth expansion), and performs entity linking with Microsoft's Probase [21] -- a probabilistic framework used to harness 2.7 million concepts mined from 1.68 billion web pages -- to iteratively grow the seeded taxonomy. As entities are matched against a verified knowledge base, we perceive the accuracy of terms and concept relations to be higher than that of CoRel and TaxoGen. The authors of HiExpan, as well as some volunteers, assessed the taxonomy parent-child pair relations using ancestor- and edge-F1 scores. ### Observations Each of those taxonomy extractors faces its own set of advantages and drawbacks. TaxoGen is the only parameterized system in our experiments, and the only one that does not require a seeded input for producing an output, which can be beneficial when prior knowledge of the corpus is lacking. It also generates alternative synonyms for each taxonomy topic, which increases the coverage and improves concept mapping between taxonomies and documents. However, it seems to depend on the keyword extraction quality, and it is unclear how to determine the best hyper-parameter settings owing to the lack of automatic evaluation methods. CoRel uses the concept pairs provided in the seed taxonomy for mining similar relations, but this has become its Achilles' heel because same-sentence co-occurrence of valid parent-child topics is rare in real-world data.
As a result, CoRel may fail to produce any output at all due to insufficient training examples for the relation classifier. It is also resource-intensive, as it makes use of neural networks for relation transferring and depth expansion. Anecdotally, the output of CoRel may also not be entirely exhaustive and deterministic. For our experiments, HiExpan is perceived to produce the most consistent taxonomies thanks to the use of Probase for measuring topic similarities and locating related concepts. However, the set-expansion mechanism of HiExpan often ignores topic granularity and adds hyponyms and hypernyms found in similar contexts to the exact same taxonomy level (hence most HiExpan taxonomies are two-level only). It also cannot differentiate word senses, such as virus as in a _computer virus_ and a _viral disease_. ## 4 RaTE A critical part of taxonomy/ontology evaluation is knowledge about subsumptions, e.g. "is _fluorescence spectroscopy_ a type of _fluorescence technology?_" or "is _CRJ200_ a _Bombardier?_". Thus, RaTE measures the accuracy of the hypernym relations present in a taxonomy we seek to evaluate. The main difference between our work and earlier ones is that we do not rely on human judgments to determine the quality of a parent-child pair, nor do we consider an external reference (which is often not available or simply too shallow). Instead, we rely on a large language model tasked to check subsumption relations. Ultimately, an optimized language model should be able to generate an accurate list of the most canonical hypernyms for a given domain, similar to domain experts. But because we are mainly interested in domain-specific taxonomies, there is a high risk that specific terms of the domain are not well recognized by the model, and therefore we investigate three methods for increasing the hit rate of hypernymy prediction of taxonomy subjects and reducing false negatives, by (1) creating various prompts, (2) fine-tuning MLMs with different masking procedures, and (3) extending the model's vocabulary with concept names. ### Core idea We consider a taxonomy as a set of \(n\) parent-child pairs from adjacent taxonomy levels linked by single edges, denoted as \((p,c)\in\mathcal{T}\). For each parent-child pair \((p_{i},c_{i}),i\in 1,...,n\), we insert \(c_{i}\) and the "[MASK]" token into prompts containing "is a" patterns [1], then use LMs to unmask \(p^{\prime}_{1}(c_{i}),p^{\prime}_{2}(c_{i}),...,p^{\prime}_{k}(c_{i})\in p^{ \prime}(c_{i})\) per query as proxy parent terms of \(c_{i}\), where \(k\) is a recall threshold (we used \(k=10\) in this work). This process is illustrated in Table 1. A parent-child pair is therefore considered good if the parent concept \(p_{i}\) can be found among the machine predictions \(p^{\prime}(c_{i})\). We consider a parent-child relation _positive_ if and only if the parent term is recalled one or more6 times in the top \(k\) predictions. This policy can obviously be adjusted, which we leave as future work. The measure of quality of \(\mathcal{T}\) is then simply the percentage of \((p,c)\) links in \(\mathcal{T}\) that are correct according to this procedure. We note that for a taxonomy with no parent-child pairs, i.e. a single-level taxonomy, our evaluation score is \(0\). Footnote 6: A parent word can be predicted multiple times in singular and plural forms, misspellings, and so on, e.g. “dessert”, “desserts” and “desert”.
As an illustration, the taxonomy in Figure 1 would receive a score of 3/5 based on the predictions made in Table 1 where for instance, \(p^{\prime}_{1}(c_{i}),p^{\prime}_{2}(c_{i}),...,p^{\prime}_{5}(c_{i})\) equal _fish_, _dish_, _seafood_, _meat_, _soup_ for \(c_{i}=\textit{mussel}\), in which we find the real taxonomy parent \(p_{i}=\textit{seafood}=p^{\prime}_{3}(c_{i})\). We observe from Table 1 that not every prediction is factually correct (e.g. mussels are neither fish nor meat), and it remains evidently unreliable to depend solely upon pre-trained language models as ground-truth for all knowledge domains. Yet, we argue that we can regard the rankings of MLM predictions as a likelihood of a subsumption relation between the subject and the object of a query. In our example, the model is significantly more likely to predict "seafood" for _mussel_, _clam_ and _lobster_ (rank 3,3,1) than for _chicken_ and _beef_ (rank 73,57). ### Diversified Prompting Models can produce all sorts of trivial predictions, such as stop-words (e.g. "**this** is a kind of seafood"), or expressions and collocations found frequently in training samples (e.g. "seafood is a kind of **joke/disappointment**"). Differences in prompts used can actively impact a model's performance in hypernymy retrieval [13, 14]. Hanna and Marecek [1] reported that prompting BERT for hypernyms can actually outperform other unsupervised methods even in an unconstrained scenario, but the effectiveness of it depends on the actual queries. For example, they show that the query "A(n) \(x\)**is a [MASK]**" outperformed "A(n) \(x\)**is a type of [MASK]**" on the Battig dataset. As a result, instead of relying on a single query, we design five pattern groups (p1-p5) of hypernymy tests for pooling unmasking results. Those are illustrated in Table 2 for the parent-child pair (seafood,shrimp). While p2 to p4 follow standard Hearst-like patterns [1], p5a employs the "my favourite is" prompt which has demonstrated high P@1 and MRR in [14]. Patterns p1 have been created specifically for noun phrases that have a tendency to be split and considered as good taxonomy edges by ATC systems.7 Footnote 7: For instance, extractors tend to produce (salad,shrimp) for the pair (salad,shrimp salad). With this refined set of patterns, a topic pair has therefore a score of 1, as in the seafood-shrimp example, if the parent term is among the top-k machine predictions for any inquiries containing the child topic, and 0 vice versa. Again, more \begin{table} \begin{tabular}{l l l l l l l} \hline \hline \(c\) & Pred 1 & Pred 2 & Pred 3 & Pred 4 & Pred 5 & Rank \\ \hline Mussel & fish (0.227) & dish (0.144) & seafood (0.140) & meat (0.037) & soup (0.033) & 3 \\ Clam & fish (0.203) & dish (0.095) & seafood (0.076) & crab (0.030) & thing (0.027) & 3 \\ Lobster & seafood (0.222) & dish (0.145) & lobster (0.131) & food (0.052) & sauce (0.052) & 1 \\ Chicken & dish (0.167) & meat (0.110) & chicken (0.079) & thing (0.058) & sauce (0.052) & 73 \\ Beef & meat (0.274) & beef (0.161) & dish (0.063) & food (0.027) & thing (0.024) & 57 \\ \hline \hline \end{tabular} \end{table} Table 1: Top-5 hypernym predictions made by a pre-trained BERT model (Bert-large-uncased-whole-word-masking) by prompting it with “_c is a type of [MASK]_”. The rank of seafood in the list is indicated in the last column. Figure 1: Excerpt from HiExpan1 for topic “seafood” elaborate decisions can be implemented. 
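The scoring procedure of Sections 4.1 and 4.2 can be summarized in a short sketch built on the HuggingFace fill-mask pipeline; the model name, the reduced prompt set, and the naive substring matching below are simplifying assumptions, whereas the full procedure uses \(k=10\), five pattern groups and inflection-aware matching.

```python
from transformers import pipeline

unmask = pipeline("fill-mask", model="bert-base-uncased")

# A few "is-a" prompt patterns; the full procedure uses five pattern groups (p1-p5).
PATTERNS = [
    "{child} is a type of [MASK].",
    "[MASK] such as {child}.",
    "My favorite [MASK] is {child}.",
]

def edge_is_positive(parent: str, child: str, k: int = 10) -> bool:
    """An edge counts as positive if the parent shows up in the top-k predictions of any prompt."""
    for pattern in PATTERNS:
        predictions = unmask(pattern.format(child=child), top_k=k)
        if any(parent.lower() in p["token_str"].lower() for p in predictions):
            return True
    return False

def rate_score(edges) -> float:
    """RaTE score: fraction of parent-child edges judged positive (0 for an empty edge set)."""
    if not edges:
        return 0.0
    return sum(edge_is_positive(p, c) for p, c in edges) / len(edges)

print(rate_score([("seafood", "shrimp"), ("seafood", "mussel"), ("meat", "beef")]))
```

The vocabulary extension discussed later in Section 4.4 would fit into the same pipeline by calling tokenizer.add_tokens on the missing parent terms and model.resize_token_embeddings(len(tokenizer)) before fine-tuning.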
### Fine-tuning the Language Model To improve hypernymy predictions, we must also address two issues with pre-trained language models: (1) the models are untrained on the evaluation domain; (2) the default model tokenizer and vocabulary are oblivious of some taxonomy topics, resulting in lower recall. Most research on MLM prompting only assessed the performance of pre-trained models. Yet, Peng et al. (2022) found an improvement when using FinBert models (Yang et al., 2020) pre-trained with massive financial corpora in retrieving financial hypernyms such as _equity_ and _credit_ for _"S&P 100 index is a/an -- index"_, compared to using BERTbase. Also, Dai et al. (2021) generated ultra-fine entity typing labels, e.g. "person, soldier, man, criminal" for "_he was confined at Dunkirk, escaped, set sail for India_" through inserting hypernym extraction patterns and training LMs to predict such patterns. Analogously, we compared six fine-tuned models, investigating different masking protocols, model vocabulary (see next section) and training sizes. Because we want the language models to concentrate on the taxonomy entities, particularly the parent terms and their surrounding contexts, we prioritize therefore masking the main topics (shown in Table 3) and parent terms of the taxonomies to evaluate, then other taxonomy entities (e.g. leaf nodes), followed by AutoPhrase entities if no taxonomy entities are present in the sentence and other random tokens from our training samples. In addition, we test entity masking by only masking _one_ taxonomy entity rather than 15% of sentence tokens to gain more sentence contexts. Our masking procedures are illustrated in Figure 2. ### Extended Vocabulary Domain-specific words such as food items are typically not predicted as a whole word, but rather as a sequence of subword units, such as _appetizer_ which is treated as _'app'_, _'#eti' and _'#zer'_ by the standard tokenizer. To avoid multi-unit words to be overlooked by the language model, we propose to extend its vocabulary. We enrich the vocabulary of models m1 and m2, by adding the lemmas (or singular forms) of parent terms from Table 3 that were not previously \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Prompt & Pred1 & Pred2 & Pred3 & Pred4 & Pred5 & Rank \\ \hline p1a & \{shrimp\} [MASK] & salad & cocktail & pasta & soup & rice & 359 \\ p1b & [MASK] & fried & no & garlic & coconut & fresh & 117 \\ \hline p2a & \{shrimp\} is a [MASK] & joke & must & winner & favorite & hit & 959 \\ p2b & \{shrimp\} is an [MASK] & option & issue & experience & art & order & 4407 \\ \hline p3a & \{shrimp\} is a kind of [MASK] & joke & thing & dish & treat & disappointment & 146 \\ p3b & \{shrimp\} is a type of [MASK] & dish & thing & food & sauce & seafood & 5 \\ p3c & \{shrimp\} is an example of [MASK] & that & this & shrimp & food & seafood & 5 \\ \hline p4a & \{MASK\} such as \{shrimp\} & sides & food & seafood & fish & shrimp & 3 \\ p4b & A [MASK] such as \{shrimp\} & lot & variety & side & combination & protein & 40 \\ p4c & An [MASK] such as \{shrimp\} & ingredient & item & option & order & animal & 197 \\ \hline p5a & My favorite [MASK] is \{shrimp\} & dish & thing & part & item & roll & 16 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation queries for the parent-child pair (seafood,shrimp). Figure 2: Comparison of masking strategies for a sample Yelp review where taxonomy entities or those proposed by AutoPhrase are underlined. 
We prioritize masking the taxonomy entities, AutoPhrase entities and random tokens, in that order. included in the base tokenizer, such as "sushi", "appetizer" and "carne asada", and resizing the models' token embedding matrices to match the size of the new tokenizer. The embedding representations of new tokens were initialized randomly before fine-tuning, although it is possible to assign them the representations of the closest terms in the original vocabulary. By adding only a small number of new tokens to the model and tokenizer, we also ensure similar model and tokenizer efficiencies. We believe that vocabulary extension will become a necessary step for effective hypernymy prediction in most specialized domains, though the exact optimal strategies remain to be discussed. ## 5 Experiments We conducted our experiments on the Yelp corpus, which contains around 1.08M restaurant reviews such as the one in Figure 2 (top box). We used the very same corpus prepared by Huang et al. (2020).8 Footnote 8: Available at: [https://drive.google.com/drive/folders/13D0@II9QFLDh0bbRcbQ-Ty9hcJETbht9](https://drive.google.com/drive/folders/13D0@II9QFLDh0bbRcbQ-Ty9hcJETbht9). ### Taxonomies With the ATC systems described in Section 3, we produced seven taxonomies we seek to evaluate. Our objective here was to explore the extractors so as to get the best taxonomies in a reasonable amount of exploration. For TaxoGen, we only had to specify some parameters.9 For CoRel and HiExpan, however, we had to provide a seed taxonomy. Hence we produced 5 such taxonomies,10 mainly seeking frequent parent-child pairs. Footnote 9: We considered taxonomy depth, number of topics per level, and "word filtering threshold". See the GitHub repository for the specific values we used. Table 3 reports the main topics (level 1) of the resulting taxonomies. We observe that the output of one ATC system varies substantially from one parametrization to another. Also, it is noticeable that main topics lack some structure. For instance, grouping _beef_ and _pork_ into a category _meat_ would arguably make sense in the output of CoRel1. ### Models We fine-tuned six language models according to the different strategies presented in Section 4; their characteristics are summarized in Table 4. In particular, we experiment with _entity masking_ while fine-tuning models m1a, m1b and m0b, which emphasizes masking task-relevant tokens, because it has been shown to be more effective than _random masking_ in (Sun et al., 2019; Kawintiranon and Singh, 2021). All models have been fine-tuned for 2 epochs by masking 15% of tokens, with the exception of m1b (marked with \(\star\)) for which only one entity has been masked per example. For comparison purposes, we also selected two pre-trained models, bert-large-uncased-whole-word-masking and bert-base-uncased, that we did not fine \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Finetuning & Masking \\ name (base) & Voc. & Full & 70\% & Ent. & Tok \\ \hline m1a (bert-base) & & & & \\ m1b (bert-base) & & & & \\ m2a (bert-base) & & & & \\ m2b (bert-base) & & & & \\ \hline m0a (bert-base) & & & & \\ m0b (distilbert-base) & & & & \\ \hline \hline \end{tabular} \end{table} Table 4: Configurations of the fine-tuned models, with models m0a and m0b serving as baselines for training with the base tokenizer; m0b using a smaller pre-trained model and less fine-tuning material. Column Voc. indicates that main target words proposed by the ATC systems were injected in the model's vocabulary.
\begin{table} \begin{tabular}{|l c|} \hline \hline Taxonomy & Top level (main) topics \\ \hline \hline CoRel1 & steak, veggies, beef, cheese, crispy, fish, rice, salad, shrimp, spicy, pork, bacon, burger, appetizer, bread, dessert, seafood \\ \hline CoRel2 & bacon, bread, fries, roll, soup, burger, dessert, salad, shrimp \\ \hline CoRel3 & chinese, seafood, dessert, steak \\ \hline CoRel4 & dinner, food, location, lunch, service \\ \hline HiExpan1 & seafood, salad, dessert, appetizer, food, sushi, dessert, pizza, coffee, bread, pasta, beer, soup, wine, cheese, cocktail, taco, water, music \\ \hline TaxoGen1 & main\_dish, south\_hills, high\_ceilings, etait\_pas \\ \hline TaxoGen2 & chest, tempe, amaretto, pepper\_jelly, relies, travis, free\_admission, exposed\_brick \\ \hline \hline \end{tabular} \end{table} Table 3: Main targets of MLM evaluation. tune and that we named B-l and B-b respectively. To highlight the qualitative differences between our evaluation models, we provide a simple prompt, "my favorite [MASK] is sirloin", for the models to predict the taxonomy hypernym "steak" in CoRel1. The results are shown in Table 5, where 5 out of 6 fine-tuned models and none of the pre-trained models correctly predicted the taxonomy parent in the top 4 predictions. Further, all fine-tuned models returned "steak" in the top ten predictions. Lastly, we show the positive effects of extending the vocabulary of the language model in Table 6, where we wish to recall the parent term "appetizer" for the concept pair "appetizer-mozzarella sticks" in CoRel1; the token "appetizer" would be split into _'app', '##eti' and '##zer'_ by the standard tokenizer. Both models m1a and m1b, trained with entity masking and an expanded vocabulary, correctly predicted "appetizer" in their top five predictions; the m2 models also recalled the term, albeit with a very low rank, whereas the other models are completely oblivious to it. Nevertheless, we find that expanding the model's vocabulary in conjunction with entity masking may introduce bias into the models when fine-tuning with limited training samples, i.e. over-predicting the added tokens. ### Ranking Results #### 5.3.1 Manual Ranking The first author of this paper manually ranked the taxonomies we built, prior to experimenting with RaTE. The main task has been to manually verify the quality of the parent-child pairs of each taxonomy, while also taking into account factors like taxonomy structure.11 Footnote 11: All parent-child pairs of HiExpan1 and TaxoGen1&2 have been evaluated, but we sampled concept pairs from CoRel 1-4 because their word clusters are too large. HiExpan1 was deemed the best, likely because word relations actually come from a verified database and we found the coverage to be broad. It is also observably more accurate than CoRel 1-4, which have similar (overall good) quality. TaxoGen taxonomies were the least accurate (TaxoGen1 being better than TaxoGen2). We found them trivial, in that many unimportant topics are picked by the algorithm. One reason for this, we believe, is the sensitivity of the system to the keywords generated by AutoPhrase, which on Yelp generates too many irrelevant terms, leading to many noisy pairs (e.g. "exposed brick - music video"). In the end, it was easy and straightforward to rank the HiExpan and TaxoGen taxonomies but more difficult to rank the CoRel taxonomies. Such an evaluation is delicate; after all, this was the main motivation for this study.
#### 5.3.2 RaTE Ranking Table 7 showcases the results of MLM taxonomy relation accuracy evaluation, calculated by the number of positive relations over all unique parent-child pairs in a taxonomy.12 Footnote 12: We considered word infections and certain special cases to improve matching between taxonomy terms and machine predictions, e.g. “veggies”, “vegetable” and “vegetables”, “dessert” and “desert”. The entity-masking models m1a and m1b predicted the most positive relationships in each candidate taxonomy while the pre-trained models predicted the fewest, which was expected. It is also surprising that B-b outperforms B-l when it comes to matching more positive concept pairs. Model m2b (trained on two-thirds of the data) expectedly \begin{table} \begin{tabular}{l l l l l l} \hline \hline Model & Pred1 & Pred2 & Pred3 & Pred4 & Rank \\ \hline m1a & sides & foods & food & apps & 5 \\ m1b & sides & food & appeetizer & foods & 3 \\ m2a & sides & items & food & dessert & 6089 \\ m2b & things & items & foods & props & 3111 \\ \hline m0a & sides & extras & items & dessert & N/A \\ m0e & sides & apps & foods & snacks & N/A \\ B-l & foods & items & products & food & N/A \\ B-b & foods & snacks & food & items & N/A \\ \hline \hline \end{tabular} \end{table} Table 6: Top-4 predictions of models with extended (top) or base (bottom) vocabulary for the prompt “[MASK] such as mozzarella sticks”. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Model & Pred1 & Pred2 & Pred3 & Pred4 & Rank \\ \hline m1a & burger & dish & sandwich & steak & 4 \\ m1b & dish & burger & beer & sandwich & 10 \\ m2a & steak & dish & meat & cut & 1 \\ m2b & steak & dish & burger & meat & 1 \\ m0a & dish & burger & steak & meat & 3 \\ m0b & cut & steak & meat & beef & 2 \\ \hline B-l & fruit & flavor & food & color & 69 \\ B-b & food & drink & color & dessert & 71 \\ \hline \hline \end{tabular} \end{table} Table 5: Fine-tuned (top) vs. pre-trained (bottom) models’ top-4 predictions with the prompt “my favorite [MASK] is sirloin.” underperforms model m2a, but not drastically. However, all models produce overall similar score distributions, with the HiExpan taxonomy receiving the highest scores and the TaxoGen taxonomies receiving the lowest. This is consistent with our manual judgements in that the HiExpan concept pairs were derived from an accurate relation dataset (Probase), whereas TaxoGen1 and TaxoGen2 contain mostly noise. We also compute the majority voting scores for each evaluation target using the six models of Table 4: a concept pair of a taxonomy is positive if and only if three or more models have successfully predicted the parent word. The resulting ranking is reported in the next column, and is shown to correlate well with our manual evaluation (last column). ### Random noise Simulation To further evaluate the good behaviour of RaTE, we conducted an experiment where we degraded the HiExpan1 taxonomy (the best one we tested). We did this by randomly replacing a percentage of concepts by others. Figure 3 shows that the score (obtained with model m1a) roughly decreases linearly with the level of noise introduced, which is reassuring. ## 6 Discussion We presented RaTE, a procedure aimed at automatically evaluating a domain taxonomy without reference taxonomies or human evaluations. It relies on a large language model and an unmasking procedure for producing annotations. 
We tested RaTE on the Yelp corpus which gathers restaurant reviews, and found that it well behaves: it correlates well with human judgments, and (artificially) degrading a taxonomy leads to a score degradation proportional to the amount of noise injected. Still, we observed that the quality of the language model predictions varies according to the strategies used to fine-tune them. There remains a number of avenues to investigate. First, we have already identified a number of decisions that could be revisited. In particular, we must test RaTE on other domains, possibly controlling variables such as the size of the fine-tuning material or the frequency of terms. Second, RaTE is an accuracy measure, and depending on the evaluation scenario, it should eventually be coupled with a measure of recall. Last, an interesting avenue is to investigate whether RaTE can be used to optimize the hyper-parameters of an ATC system. ## Acknowledgments This work has been done in collaboration with IATA to whom we are grateful. \begin{table} \begin{tabular}{l r r r r r|r r|r r|r} \hline \hline & \multicolumn{4}{c|}{Fine-tuned Models} & \multicolumn{2}{c|}{BERT} & \multicolumn{1}{c|}{Majority} & \multicolumn{1}{c|}{RaTE} & \multicolumn{1}{c}{Manual} \\ & m1a & m1b & m2a & m2b & m0a & m0b & large & base & Voting & ranking & ranking \\ \hline CoRel1 & 72.7 & 71.8 & 42.4 & 44.5 & 46.3 & 43.6 & 20.4 & 27.4 & 44.3 & 4 & 3 \\ CoRel2 & 78.2 & 75.0 & 54.4 & 53.7 & **57.2** & 51.2 & 25.9 & 36.2 & 57.2 & 2 & 2 \\ CoRel3 & 60.2 & 66.7 & 54.1 & 54.9 & **57.2** & 50.1 & 36.0 & 40.0 & 53.5 & 3 & 4 \\ CoRel4 & 68.2 & 64.6 & 45.0 & 39.0 & 36.5 & 38.1 & **41.0** & 41.8 & 34.7 & 5 & 5 \\ HiExpan1 & **84.5** & **84.7** & **59.5** & **56.7** & 56.9 & **64.3** & 34.9 & **42.0** & **59.0** & 1 & 1 \\ TaxoGen1 & 13.5 & 14.7 & 5.5 & 6.1 & 1.2 & 2.5 & 3.1 & 3.7 & 1.2 & 6 & 6 \\ TaxoGen2 & 0.0 & 0.0 & 0.0 & 5.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 7 & 7 \\ \hline \hline \end{tabular} \end{table} Table 7: Relation accuracy scores evaluated by language models, calculated by the number of positive relations, or parent terms in the model predictions, divided by the number of unique parent-child pairs in each taxonomy. Figure 3: Relation accuracy obtained with model m1a, as a function of the percentage of noise introduced in HiExpan1.
2307.12440
Hydrodynamic bound states of rotating micro-cylinders in a confining geometry
Many micro-swimmers propel themselves by rotating micro-cylindrical organelles such as flagella or cilia. These cylindrical organelles almost never live in free space, yet their motions in a confining geometry can be counter-intuitive. For example, one of the intriguing yet classical results in this regard is that a rotating cylinder next to a plane wall does not generate any net force in Newtonian fluids and therefore does not translate. In this work, we employ analytical and numerical tools to investigate the motions of micro-cylinders under prescribed torques in a confining geometry. We show that a cylinder pair can form four non-trivial hydrodynamic bound states depending on the relative position within the confinement. Our analysis shows that the distinct states are the results of competing effects of the hydrodynamic interactions within the cylinder pair and between the active cylinders and the confinement.
Hanliang Guo, Yi Man, Hai Zhu
2023-07-23T21:57:08Z
http://arxiv.org/abs/2307.12440v2
# Hydrodynamic bound states of rotating micro-cylinders in a confining geometry ###### Abstract Many micro-swimmers propel themselves by rotating micro-cylindrical organelles such as flagella or cilia. These cylindrical organelles almost never live in free space, yet their motions in a confining geometry can be counter-intuitive. For example, one of the intriguing yet classical results in this regard is that a rotating cylinder next to a plane wall does not generate any net force in Newtonian fluids and therefore does not translate [1; 2]. In this work, we employ analytical and numerical tools to investigate the motions of micro-cylinders under prescribed torques in a confining geometry. We show that a cylinder pair can form four non-trivial hydrodynamic bound states depending on the relative position within the confinement. Our analysis shows that the distinct states are the results of competing effects of the hydrodynamic interactions within the cylinder pair and between the active cylinders and the confinement. ## I Introduction Rotation is a fundamental form of microorganism locomotion. Flagellated bacteria such as _E. coli_ move by rotating their semi-rigid flagella [3]; the green alga _Chlamydomonas_ swims in a helical trajectory while rotating its cell body with the beat of two near-identical flagella [4; 5]; thousands of flagella on the surface of _Volvox_ generate a tangential velocity at an angle to the axis of motion that generates the swirling motion [6]. Even though body rotation may not be the most efficient way of locomotion through the lens of hydrodynamics, it can be biologically beneficial as it allows the microorganisms to perceive the environment from all angles and enables crucial functions such as photo-taxis [7]. From the technological point of view, micro-particles can be driven into rotation mode relatively easily through external fields such as magnetic fields, and the driven rotations are usually coupled with translations and can lead to interesting collective behaviors [8]. Understanding the mechanisms of these micro-rotors or micro-rollers in complex conditions has been a point of focus recently, as they have shown great potential in drug delivery, microsurgery, and mixing [9; 10; 11; 12]. Biological microorganisms and artificial microswimmers almost never live in free space, and it has been known for many decades that nearby boundaries qualitatively alter the dynamics of microswimmers. For example, Rothschild [13] reported about 60 years ago that bull spermatozoa are attracted to no-slip boundaries, and suggested that this might be due to the hydrodynamic interaction between the spermatozoa and the boundary. Numerous studies have since revealed the effects of hydrodynamic interactions from the confinement geometry on the microswimmers, both experimentally and theoretically. Among many other interesting findings, we now know that _E. coli_ swim in circular motions near the boundary due to the force- and torque-free swimming and the hydrodynamic interactions with the boundary, and the direction of the motion depends on the boundary type [14; 15; 16]; active particles can be trapped into closed orbits by passive colloids, and can also trap and transport passive cargos [17; 18; 19]; geometric asymmetry of the microswimmer or the boundary can also lead to qualitatively different trajectories [20; 21]. Readers are kindly referred to a concise review of this topic given by Elgeti and Gompper [22].
Hydrodynamic interactions between multiple micro-rollers and the boundary can lead them into various interesting periodic motions, called hydrodynamic bound states. For example, a pair of _Volvox_ "dance" in multiple hydrodynamic bound states when in close proximity to solid walls, forming the _waltz_ and _minuet_ bound states [23]. Additionally, magnetically driven micro-rollers can form various states such as "critters" or one-dimensional chains via sole hydrodynamic interactions [24; 25]. Interestingly, Delmotte [26] showed that rich dynamics can be obtained for a micro-roller _pair_ above a flat wall. They found that the different states of the micro-roller pair can be obtained by altering the relative strength of gravitational forces and external torques. Furthermore, the micro-roller pair would be in a stable motile orbiting mode, reminiscent of the "critters" state observed in micro-roller suspensions, when the relative strength is high. Studying the micro-roller pair provides us with the opportunity to obtain a deep understanding of the mechanisms behind the various states of micro-roller suspensions. However, the method illustrated in [26] relies on the image system above a no-slip planar boundary [27], which is difficult to extend to other types of confinement. In this letter, we focus on the hydrodynamic bound states of two neutrally buoyant rotating cylinders inside cylindrical confinement with circular cross-sectional areas. The dynamics of one rotating cylinder inside the confinement is derived analytically, and the dynamics of a rotating cylinder pair is computed numerically. We note that while it is straightforward to extend the numerical method to more complex geometries, we focus on circular cylinders to allow a feasible analysis of the mechanisms. We show that a pair of rotating circular cylinders forms four non-trivial hydrodynamic bound states, resulting from the competing effects of the hydrodynamic interactions within the cylinder pair and between the active cylinders and the confinement. ## II Model and Methods Consider a cylindrical confinement \(\gamma_{0}\) with a circular cross-section of radius \(R\) filled with viscous fluid. Active cylinders \(\gamma_{k}\) (\(k>0\)) of radius \(r\) are suspended inside the confinement. Each active cylinder generates a torque per unit length \(M_{a}\mathbf{e}_{z}\) about its own axis. Let the center of the confinement be the origin and the center of the \(k\)-th active cylinder be \(\mathbf{x}_{k}\). The schematic figure is shown in Figure 1(a). In the viscosity-dominant regime, the inertia is negligible and the fluid dynamics is governed by the Figure 1: (a) Model. Active cylinders of radius \(r\) inside the confining cylinder of radius \(R\). The active cylinders are driven by a prescribed torque \(M_{a}\mathbf{e}_{z}\) per unit length. The gap between the active cylinder and the confinement is filled with fluid of viscosity \(\mu\). \(\gamma_{0}\) denotes the surface of the confinement and \(\gamma_{k}\) (\(k>0\)) denotes the surfaces of the active cylinders. \(\Omega\) denotes the fluid domain bounded by the surfaces. (b) Single active cylinder case. The prescribed torque \(M_{a}\mathbf{e}_{z}\) of the active cylinder generates an angular velocity \(\omega_{1}\mathbf{e}_{z}\) and a translational velocity \(U_{1}\) in the azimuthal direction of the confinement. The active cylinder is positioned \(d_{1}\) units away from the center of the confinement.
(c) Co-rotating frame that eliminates the translational velocity of the active cylinder. Quantities with asterisks are those measured in the co-rotating frame. Positive directions of measured quantities are denoted by arrows. incompressible Stokes equation: \[-\mu\nabla^{2}\mathbf{u}+\nabla p=\mathbf{0},\quad\nabla\cdot\mathbf{u}=0,\quad \text{for }\mathbf{x}\in\Omega, \tag{1}\] where \(\Omega\) is the fluid domain, \(\mathbf{u}\) is the fluid velocity, \(p\) is the pressure, and \(\mu\) is the fluid viscosity. The fluid velocity on the surface of the active cylinder and the confining geometry are given by rigid-body motion and the no-slip boundary condition: \[\mathbf{u}(\mathbf{x})=\left\{\begin{array}{ll}0,&\text{for }\mathbf{x}\in \gamma_{0}\\ \mathbf{U}_{k}+\omega_{k}\mathbf{e}_{z}\times(\mathbf{x}-\mathbf{x}_{k}),& \text{for }\mathbf{x}\in\gamma_{k}\quad(k>0)\end{array}\right., \tag{2}\] where \(\mathbf{U}_{k}\) and \(\omega_{k}\) are the \(k\)-th active cylinder's centroidal translational and angular velocities respectively. The active cylinders also satisfy the force- and torque-balance conditions in the viscosity-dominant regime: \[\int_{\gamma_{k}}\mathbf{f}(\mathbf{x})d\mathbf{x}=\mathbf{0},\quad\int_{ \gamma_{k}}(\mathbf{x}-\mathbf{x}_{k})\times\mathbf{f}(\mathbf{x})d\mathbf{x }+M_{a}\mathbf{e}_{z}=\mathbf{0}, \tag{3}\] where \(\mathbf{f}\) is the fluid traction density per unit length on the cylinder boundary (\(k>0\)). In general, given the centroidal positions of the active cylinders, we solve \(\mathbf{U}_{k}\) and \(\omega_{k}\) numerically using a high-order boundary-integral method, similar to our previous work [28]. The details of our implementation are included in the Supplemental Material. We note that the case of a single active cylinder rotating inside the confinement can be solved analytically. Specifically, we are interested in the rotation-induced translational velocity of the inner cylinder when its center is \(d_{1}\) distance away from the center of the confining cylinder (Fig. 1(b)). Without loss of generality, we assume the center of the active cylinder \(\mathbf{x}_{1}\) is on the positive \(x\)-axis. In this set-up, the \(x\)-component of the translational velocity is zero by symmetry. Thus \(\mathbf{U}_{1}=U_{1}\mathbf{e}_{y}\), where \(U_{1}\) is the centroidal translational speed. To find \(U_{1}\), we adopt the bipolar coordinate systems [29, Appendix-12] following conventional approaches (e.g., in [1, 30]). The transformation between Cartesian and the bipolar coordinate systems are given by \[\left\{\begin{array}{ll}x+iy=ic\cot\frac{\xi+i\eta}{2},\quad(c>0)\\ x=\frac{c\sinh\eta}{\cosh\eta-\cos\xi},\quad y=\frac{c\sin\xi}{\cosh\eta-\cos \xi},\end{array}\right. \tag{4}\] where \(\xi\in[0,2\pi]\) and \(\eta\in(-\infty,\infty)\) are the bipolar coordinates, and \(i=\sqrt{-1}\) is the imaginary unit. The curves given by \(\eta=\eta_{0}\) are a family of circles with centers at \((x,y)=(c\coth\eta_{0},0)\) and radius \(c|\operatorname{csch}\eta_{0}|\). Let the active cylinder and the confinement be denoted by \(\eta=\alpha\) and \(\eta=\beta\) respectively, where \(\alpha<\beta<0\). The parameters \(\alpha\), \(\beta\), and \(c\) can be solved directly from the cylinder diameters \(R\) and \(r\), and the center-to-center distance \(d_{1}\): \[c=\sqrt{\left(\frac{d_{1}^{2}+r^{2}-R^{2}}{2d_{1}}\right)^{2}-r^{2}},\quad \alpha=-\operatorname{arcCsch}(r/c),\quad\beta=-\operatorname{arcCsch}(R/c). 
\tag{5}\] Wakiya [30] solved the force and moment exerted upon the cylinders when both cylinders are pinned at their centers and rotate with given angular velocities. To leverage their solution, we apply a frame transformation to cancel the active cylinder's translational velocity \(U_{1}\mathbf{e}_{y}\). Specifically, we choose a co-rotating frame at an angular velocity \(U_{1}/d_{1}\) in the counter-clockwise direction (Fig. 1(c)). In this frame, the confining cylinder experiences an angular velocity \(\omega_{0}^{*}=-U_{1}/d_{1}\), and the inner cylinder experiences an angular velocity \(\omega_{1}^{*}=\omega_{1}+\omega_{0}^{*}\) and zero translational velocity. Note that we are using the asterisk (\(*\)) to denote the quantities in the co-rotating frame. Substituting into equations (2.13 - 2.14) in Wakiya [30] yields the force and moment exerted upon the active cylinder: \[\left\{\begin{array}{ll}F_{x}=0\\ F_{y}=4\pi\mu(r\omega_{1}^{*}\sinh\beta+R\omega_{0}^{*}\sinh\alpha)\sin( \alpha-\beta)/S\\ M=-4\pi\mu r^{2}\sinh^{2}\alpha\left\{\omega_{1}^{*}\frac{\sinh^{2}(\alpha- \beta)}{\sinh^{2}\alpha}+(\omega_{1}^{*}-\omega_{0}^{*})\left[(\alpha-\beta) \frac{\cosh(\alpha-\beta)}{\sinh(\alpha-\beta)}-1\right]\right\}/S\end{array}\right. \tag{6}\] where \(S=(\alpha-\beta)(\sinh^{2}\alpha+\sinh^{2}\beta)-2\sinh\alpha\sinh\beta\sinh( \alpha-\beta)\). \(F_{x}=0\) confirms our previous argument that the active cylinder does not have radial velocity. Note that the positive sign is used in the expression of \(F_{y}\) as \(\alpha<0\) in our case. Substituting the force-balance condition (\(F_{y}=0\)) and torque-balance condition (\(M+M_{a}=0\)) yields \[\omega_{0}^{*}=\frac{SM_{a}}{4\pi\mu r^{2}\sinh^{2}\alpha}\left/\left\{\frac{R \sinh\alpha\sinh^{2}(\alpha-\beta)}{r\sinh\beta\sinh^{2}\alpha}+\left(\frac{R \sinh\alpha}{r\sinh\beta}+1\right)\left[(\alpha-\beta)\frac{\cosh(\alpha- \beta)}{\sinh(\alpha-\beta)}-1\right]\right\}\right., \tag{7}\] and the translation speed is simply \[U_{1}=-\omega_{0}^{*}d_{1}. \tag{8}\] ## III Results Figure 2 shows the translational velocity as a function of the center-to-center distance between the active cylinder and the confinement for \(R=10r\). The analytical and numerical solutions match perfectly. The translational velocities at \(d_{1}=0\) and \(d_{1}=R-r\) are both \(0\), as dictated by symmetry and the no-slip boundary conditions, respectively. The active cylinder translates the fastest when \(d_{1}\approx 8r\), in which case the gap between the active cylinder and the confinement is approximately the radius of the active cylinder. Interestingly, the translational velocity increases almost linearly for \(0\leq d_{1}\leq 6r\). This means that the orbital angular velocity of the active cylinder (\(U_{1}/d_{1}\)) is almost constant when \(d_{1}\leq 6r\). In other words, if more than one active cylinders are in the \(d_{k}\leq 6r\) region and _not_ interacting hydrodynamically with each other, the relative positions between each active cylinder will remain the same. The numerical flow fields are shown in Figure 3 with increasing eccentricity resulting from the position of the active cylinder (\(d_{1}/r=2.5,\,5,\,7.5\)). When the eccentricity is high, we reproduce the classical counter-rotating secondary recirculation zone inside the confinement. The existence of the secondary recirculation zone, as will be shown in the following sections, strongly affects the dynamics of the cylinder pair. 
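For readers who want to reproduce the analytical curve in Figure 2, the following is a minimal Python/NumPy sketch that simply transcribes Eqs. (5), (7) and (8) to evaluate the translational speed \(U_{1}\) of a single active cylinder; the parameter values are illustrative, and the formula is only evaluated in the non-concentric regime \(0<d_{1}<R-r\).

```python
import numpy as np

def translational_speed(d1, R=10.0, r=1.0, M_a=1.0, mu=1.0):
    """Evaluate U_1 from Eqs. (5), (7) and (8) for a single rotating cylinder
    at distance d1 (0 < d1 < R - r) from the center of the confinement."""
    # Eq. (5): bipolar-coordinate parameters; arcCsch(x) = arcsinh(1/x).
    c = np.sqrt(((d1**2 + r**2 - R**2) / (2.0 * d1))**2 - r**2)
    alpha = -np.arcsinh(c / r)
    beta = -np.arcsinh(c / R)

    S = (alpha - beta) * (np.sinh(alpha)**2 + np.sinh(beta)**2) \
        - 2.0 * np.sinh(alpha) * np.sinh(beta) * np.sinh(alpha - beta)

    # Eq. (7): angular velocity of the co-rotating frame.
    bracket = (alpha - beta) * np.cosh(alpha - beta) / np.sinh(alpha - beta) - 1.0
    ratio = R * np.sinh(alpha) / (r * np.sinh(beta))
    denom = ratio * np.sinh(alpha - beta)**2 / np.sinh(alpha)**2 + (ratio + 1.0) * bracket
    omega0_star = S * M_a / (4.0 * np.pi * mu * r**2 * np.sinh(alpha)**2) / denom

    # Eq. (8): translational speed in the lab frame.
    return -omega0_star * d1

for d1 in (2.5, 5.0, 7.5):
    print(d1, translational_speed(d1))
```

Sweeping \(d_{1}\) between the two limits should reproduce the analytical curve shown in Figure 2.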
Figure 2: Translational velocity of the active cylinder as a function of the center position, scaled by the characteristic speed and length respectively. Numerical results are shown in blue circles, and analytical solutions are shown in the red curve. \(R=10r\). Insets denote the concentric configuration (\(d_{1}=0\)) and the non-concentric configuration (\(d_{1}>0\)).

Next, we investigate the long-term dynamics of an active cylinder pair inside the confinement. Specifically, consider two active cylinders suspended inside a stationary confining circular cylinder of radius \(R=10r\) with the same torque per unit length \(M_{a}\mathbf{e}_{z}\). No analytical solution seems feasible for this case, so we adopt the numerical route. Let \(\mathbf{x}_{k}(t)\) denote the \(k\)-th active cylinder's center position at time \(t\) and \(d_{k}(t)=|\mathbf{x}_{k}(t)|\). Figure 4 shows a gallery of four qualitatively different periodic trajectories with different initial positions. We start by putting cylinder 1 halfway between the center and the boundary of the confinement and cylinder 2 at the center of the confinement (\(\mathbf{x}_{1}(0)=5.0r\mathbf{e}_{x}\), \(\mathbf{x}_{2}(0)=0\)). Both cylinders orbit counter-clockwise when viewed in the lab frame, and the center of each cylinder traces a _cycloid-type curve_ around the origin (Fig. 4(a)). The cycloid-type curves collapse into simple curves when viewed in the co-rotating frame \(x^{*}y^{*}\), implying that the motions of the cylinders are periodic (Fig. 4(e)). The co-rotating frame is defined in the same way as in our analytical method to eliminate cylinder 1's azimuthal movement. The center of cylinder 2 traces a slightly curved trajectory approximately aligned with the \(y^{*}\) axis. The discontinuity of cylinder 2's trajectory, denoted by the red dashed line in the co-rotating frame, happens in the degenerate case where cylinder 1 is at the center of the confinement. The periodic nature of the motions is also reflected when we plot the distances between the active cylinders and the center of the confinement as functions of time (Fig. 4(i)). As time increases, \(d_{1}(t)\) first decreases from \(5r\) to \(0\) as cylinder 1 gradually moves towards the origin, while \(d_{2}(t)\) increases from \(0\) to \(5r\). At about \(t=10^{3}\mu r^{2}/M_{a}\), \(d_{1}(t)=0\) as cylinder 1 reaches the origin and \(d_{2}(t)=5r\). At this point, the two cylinders have essentially exchanged their initial positions, albeit with a rotated viewing angle. As a result, the functions \(d_{1}(t)\) and \(d_{2}(t)\) are identical up to a half-period phase lag. That is, \(d_{1}(t)=d_{2}(t+T/2)\) for all \(t\), where \(T\) is the period of the motion. We refer to this type of periodic motion with exchangeability between cylinders as the _Waltz_ bound state, motivated by the dancing motion of _Volvox_[23]. Different types of long-term dynamics can be obtained by varying the initial positions of the cylinder pair. For example, if we keep the initial position of cylinder 2 at the origin and move cylinder 1 further away such that \(\mathbf{x}_{1}(0)=7.5r\mathbf{e}_{x}\), cylinder 1 will always orbit around the origin at a large yet approximately constant distance, whereas cylinder 2 is "trapped" to move in small circles close to the origin (Fig. 4(b)). Again, the nature of the motion is best shown in the co-rotating frame: cylinder 2 orbits in a small circle biased towards cylinder 1 while cylinder 1 barely moves (Fig. 4(f)).
During the period, \(d_{1}(t)\) varies between \(7.5r\) and \(8.0r\) while \(d_{2}(t)\) varies between \(0\) and \(2.3r\). The center-to-center distance of the cylinder pair is also larger compared to the first case, where \(5.7r<d_{12}(t)<7.5r\) (Fig. 4(j)). Unlike the Waltz bound state, the exchangeability within the cylinder pair is lost as \(d_{1}(t)>d_{2}(t)\) for all \(t\), and the two cylinders seem to orbit each with its own radius. We refer to this type of periodic motion as the _Orbit_ bound state. Two more transitions between states occur as we keep increasing the center-to-center distance within the cylinder pair. Specifically, for \(\mathbf{x}_{1}(0)=7.5r\mathbf{e}_{x}\) and \(\mathbf{x}_{2}(0)=-5.5r\mathbf{e}_{x}\), neither cylinder is trapped close to the origin, and the respective distances to the confinement center are similar for the two cylinders (Fig. 4(c)). Interestingly, the center of cylinder 2 in the co-rotating frame traces a crescent shape that encloses the trajectory of cylinder 1 if mirrored about the \(y^{*}\) axis (Fig. 4(g)).

Figure 3: Flow fields in the lab frame for a single active cylinder inside the confining cylinder (\(R/r=10\)) at different positions. From left to right: \(d_{1}/r=2.5\), \(5\), and \(7.5\) respectively. The streamlines are shown on top of the flow field, which is color-coded by the magnitude of the fluid velocity.

This is a state reminiscent of the Waltz bound state, as \(d_{1}(t)\) and \(d_{2}(t)\) are identical up to a half-period phase lag (Fig. 4(k)). The larger center-to-center distance \(d_{12}\) leads to weaker hydrodynamic interactions within the cylinder pair, and the period is much longer compared to Waltz. We refer to this state as the _weak Waltz_ bound state. On the other hand, increasing \(d_{12}(0)\) further reveals a state similar to the Orbit bound state (Fig. 4(d)(h)(l)). Specifically, if \(\mathbf{x}_{1}(0)=7.5r\mathbf{e}_{x}\) and \(\mathbf{x}_{2}(0)=-8.5r\mathbf{e}_{x}\), \(d_{1}(t)\) oscillates between \(4.9r\) and \(7.5r\), while \(d_{2}(t)\) is almost constant at \(8.5r\). \(d_{12}\) in this case oscillates between \(13.4r\) and \(16r\). Unlike the typical Orbit bound state, no cylinder is trapped close to the origin. We refer to this state as the _weak Orbit_ bound state. We then conduct a systematic study of the initial positions of both active cylinders \(\mathbf{x}_{k}(0)\). Due to rotational symmetry, we set \(\mathbf{x}_{1}(0)=d_{1}(0)\mathbf{e}_{x}\), where \(0\leq d_{1}(0)<9r\) without loss of generality. Additionally, numerical evidence (not shown here) suggests that no matter where \(\mathbf{x}_{2}(0)\) is, there is always a time \(t_{o}\) such that the origin, \(\mathbf{x}_{1}(t_{o})\), and \(\mathbf{x}_{2}(t_{o})\) form a straight line for \(R=10r\). This result greatly reduces the parameter space we need to explore, as we only need to consider the cases where \(\mathbf{x}_{2}(0)\) is on the \(x\)-axis as well.

Figure 4: Characteristic trajectories of cylinder pairs with different initial positions. (a-d): center trajectories in the lab frame; (e-h): center trajectories in the co-rotating frame such that cylinder 1 is always on the positive \(x^{*}\) axis; (i-l): distance to the center of confinement (blue/red) and the center-to-center distance within the cylinder pair (black) as functions of time. In all panels, blue and red curves are the corresponding curves for cylinder 1 and 2, respectively. The arrows in panels (a-h) denote the translational velocity at the initial positions. Four different hydrodynamic bound states are shown in columns.
Waltz: \(\mathbf{x}_{1}(0)=5r\mathbf{e}_{x},\mathbf{x}_{2}(0)=0\); Orbit: \(\mathbf{x}_{1}(0)=7.5r\mathbf{e}_{x},\mathbf{x}_{2}(0)=0\); weak Waltz: \(\mathbf{x}_{1}(0)=7.5r\mathbf{e}_{x},\mathbf{x}_{2}(0)=-5.5r\mathbf{e}_{x}\); weak Orbit: \(\mathbf{x}_{1}(0)=7.5r\mathbf{e}_{x},\mathbf{x}_{2}(0)=-8.5r\mathbf{e}_{x}\).

Figure 5 shows the entire parameter space when \(\mathbf{x}_{1}(0)\) and \(\mathbf{x}_{2}(0)\) are varied along the \(x\) axis. In particular, the horizontal axis is the center-to-center distance within the cylinder pair at \(t=0\), which measures the strength of hydrodynamic interaction between the active cylinders; the vertical axis is the distance of the mid-position of the two active cylinders to the origin at \(t=0\), which measures the eccentricity of the cylinder pair. The upper right region of the parameter space is physically inaccessible as at least one active cylinder would be outside the circular confinement. The cylinder pair is in the equilibrium state by symmetry if \(\left|\mathbf{x}_{1}(0)+\mathbf{x}_{2}(0)\right|=0\). In this case, both active cylinders will keep moving in the azimuthal direction of the confining cylinder at the same orbital angular velocity. Aside from the symmetric case, there are four distinct regions over the entire parameter space, corresponding to the four hydrodynamic bound states discussed earlier in this section. Specifically, the cylinder pair is in the Waltz bound state if both the initial separation distance \(\left|\mathbf{x}_{1}(0)-\mathbf{x}_{2}(0)\right|\) and the eccentricity are small. As the separation distance increases, the cylinder pair falls into the Orbit, weak Waltz, and weak Orbit bound states in succession. For a given initial separation distance, the bound state tends to transition to the next bound state when the eccentricity increases.

## IV Mechanisms of the Multiple Hydrodynamic Bound States

The motion of each cylinder depends on two factors: (1) the _self-induced velocity_ resulting from its own active torque, and (2) the _pair-induced velocity_ resulting from the hydrodynamic interaction within the active cylinder pair. While the self-induced velocity is determined directly by the position of the active cylinder inside the confinement (Fig. 2), the pair-induced velocity depends both on the distance within the pair (\(d_{12}\)) and the positions of the active cylinders with respect to the confinement. In the limiting case where the cylinder pair eccentricity is absent (\(\mathbf{x}_{1}(0)+\mathbf{x}_{2}(0)=0\)), the system possesses rotational symmetry by construction. Both cylinders will translate in the azimuthal direction of the confinement at the same orbital angular velocity, and the pair eccentricity will remain 0 for all \(t\), leading to the _Symmetry_ state shown in Figure 5. In the following, we focus on analyzing the four examples shown in Figure 4 to deduce the mechanisms of these hydrodynamic bound states. In example 1, the small eccentricity for each active cylinder and the small center-to-center distance within the pair together imply that the self-induced velocity is dominated by the pair-induced velocity.

Figure 5: Parameter space showing distinct hydrodynamic bound states. (shaded) Green: (weak) Waltz, (shaded) Yellow: (weak) Orbit, Grey: Symmetry, Dark Blue: Physically inaccessible. Examples shown in Figure 4 are denoted by black circles.
As a result, the cylinder pair rotates about its center similar to that of the symmetry state, while the pair center slowly orbits around the center of the confinement due to the non-zero eccentricity, resulting in the _Waltz_ bound state. In example 2, cylinder 2 is at the center of the secondary recirculation zone generated by the rotation of cylinder 1, at which location both the self- and pair-induced velocities are close to zero. Additionally, the flow generated by cylinder 2 advects cylinder 1 in the same direction as its self-induced velocity, reinforcing the azimuthal movement of cylinder 1. Cylinder 2 is thus trapped close to the origin while cylinder 1 orbits in the azimuthal direction, leading to the _Orbit_ bound state. The analysis of the mechanisms is more complicated for larger center-to-center distances, as the pair-induced velocities no longer dominate the self-induced velocities and the cylinders are in the secondary recirculation zones generated by their counterparts. To facilitate our analysis, we decompose the velocities generated by the rotation of cylinder 1 into azimuthal and radial components. These components are shown in the co-rotating frame as we are mostly focused on the relative motions of the pair (Fig. 6). We note that the radial components of the self-induced and pair-induced velocities are both zero if the centers of the cylinders and the origin form a straight line. We look at the azimuthal component first. In example 3, the pair-induced azimuthal velocity on cylinder 1 is much lower than its self-induced counterpart because of the wall-screening effect (\(3.8\times 10^{-4}M_{a}/\mu r\) vs \(5.3\times 10^{-3}M_{a}/\mu r\)), whereas those on cylinder 2 are comparable to each other (\(1.8\times 10^{-3}M_{a}/\mu r\) vs \(4.0\times 10^{-3}M_{a}/\mu r\)). Therefore, even though the self-induced orbital angular velocity (\(\omega_{k}=U_{k}/d_{k}\)) of cylinder 2 is slightly higher than that of cylinder 1, as evidenced by the tapered slope of the curve in Figure 2, the pair-induced velocity in the azimuthal direction advects cylinder 2 clockwise into the second quadrant of the co-rotating frame (Fig. 6(a)). In the second quadrant, the radial component of the pair-induced velocity is positive, meaning that \(d_{2}\) will increase and catch up with \(d_{1}\) (Fig. 6(b)). On the other hand, the flow generated by cylinder 2 in the second quadrant advects cylinder 1 in the negative radial direction, which initiates the _weak Waltz_ bound state where the two cylinders will eventually exchange their relative positions.

Figure 6: Mechanisms for the weak Waltz and the weak Orbit bound states. Panels show the azimuthal (a) and radial (b) velocities generated by the active cylinder at \(7.5r\mathbf{e}_{x}\). The other cylinder’s positions leading to the weak Waltz and weak Orbit bound states are denoted by the faint circles. The self- and pair-induced velocities are denoted by black and grey arrows, respectively.

In example 4, both cylinders are close to the confining boundary with a large center-to-center distance within the pair; thus, the self-induced velocities dominate the pair-induced velocities for each cylinder. The decreasing slope in Figure 2 for large \(d_{k}\) shows that the self-induced orbital velocity for cylinder 2 (at \(d_{2}=8.5r\)) is slower than that for cylinder 1 (at \(d_{1}=7.5r\)).
Therefore, cylinder 2 will move into the second quadrant of the co-rotating frame and \(d_{2}\) will increase even though it is already greater than \(d_{1}\) at the beginning. As a result, the two cylinders will not be exchangeable and are in the _weak Orbit_ bound state.

## V Conclusions

In this letter, we systematically studied the hydrodynamic bound states of an active cylinder pair rotating inside cylindrical confinement. We focused on the case where the two active cylinders are identical both in terms of geometry and active torque. We found that the active cylinder pair can fall into four distinct non-trivial hydrodynamic bound states, termed _Waltz_, _Orbit_, _weak Waltz_, and _weak Orbit_, depending on their initial positions, where Waltz and weak Waltz are the states in which the motions of the two active cylinders are exchangeable up to a half-period shift in phase, i.e., \(d_{1}(t)=d_{2}(t+T/2)\). The distinction between Waltz and weak Waltz is based on the parameter space, as the two states are separated by the Orbit state. Similarly for Orbit and weak Orbit, where the weak Waltz state separates the two states in the parameter space. The mechanisms of these hydrodynamic bound states can be explained by the competing effects of the self-induced velocity generated by each active cylinder and the confinement and the pair-induced velocity generated within the active cylinder pair. We note that the Waltz bound state is reminiscent of the _Leapfrog_ motion observed in Delmotte [26], in which the micro-rollers are placed close to a flat wall. In fact, the Leapfrog motion is the only periodic state observed in [26] when the micro-rollers are neutrally buoyant; our work shows that having non-planar confinement can lead to various periodic states, highlighting the effects of complex geometry on active matter. Some recent work has been investigating the effect of complex geometry on run-and-tumble behaviors [31; 32; 33; 34]; our results suggest that bacteria swimming in cylindrical confinement may exhibit different modes, as run-and-tumble is essentially the result of rotating (and counter-rotating) a few semi-rigid flagella. Many extensions can be applied to this work. For example, one can introduce asymmetry within the active cylinder pair, either in terms of the shape or the driving mechanism. This asymmetry could presumably lead to more interesting motions. Furthermore, one can also alter the shape of the confinement and explore the possibility of delivering a cargo cylinder from one compartment to another by controlling the torque of the active rotating cylinder. It will also be interesting to see how different bound states affect the mixing of scalar fields. On the other hand, fast numerical methods such as the fast multipole method (FMM) [35; 36] can be readily applied to the boundary integral method we adopted here, making it possible to study the interactions of many active cylinders. Extending the work to 3D would also allow us to study more realistic cases with more interesting geometries such as helical-shaped filaments.
2305.14648
Competing Uniaxial Anisotropies in Epitaxial Fe Thin Films Grown on InAs(001)
We report on the interplay of two uniaxial magnetic anisotropies in epitaxial Fe thin films of varying thickness grown on InAs(001) as observed in ferromagnetic resonance experiments. One anisotropy originates from the Fe/InAs interface while the other originates from in-plane shear strain resulting from the anisotropic relaxation of the Fe film. X-ray diffraction was used to measure the in-plane lattice constants of the Fe films, confirming the correlation between the onset of film relaxation and the corresponding shear strain inferred from ferromagnetic resonance data. These results are relevant for ongoing efforts to develop spintronic and quantum devices utilizing large spin-orbit coupling in III-V semiconductors.
James M. Etheridge, Joseph Dill, Connor P. Dempsey, Mihir Pendharkar, Javier Garcia-Barriocanal, Guichuan Yu, Vlad S. Pribiag, Paul A. Crowell, Chris J. Palmstrøm
2023-05-24T02:34:41Z
http://arxiv.org/abs/2305.14648v1
# Competing Uniaxial Anisotropies in Epitaxial Fe Thin Films Grown on InAs(001) ###### Abstract We report on the interplay of two uniaxial magnetic anisotropies in epitaxial Fe thin films of varying thickness grown on InAs(001) as observed in ferromagnetic resonance experiments. One anisotropy originates from the Fe/InAs interface while the other originates from in-plane shear strain resulting from the anisotropic relaxation of the Fe film. X-ray diffraction was used to measure the in-plane lattice constants of the Fe films, confirming the correlation between the onset of film relaxation and the corresponding shear strain inferred from ferromagnetic resonance data. These results are relevant for ongoing efforts to develop spintronic and quantum devices utilizing large spin-orbit coupling in III-V semiconductors. Ferromagnetic thin films grown epitaxially on semiconductor substrates have been of interest in the field of spintronics for several decades. They serve as model systems for spin injection and spin detection, which would be critical components of a spin transistor.[1; 2; 3] For example, the system of Fe on GaAs(001) has been extensively studied due to the near lattice match of GaAs and Fe as well as the ability to form Schottky tunnel barriers.[3; 4; 5; 6; 7; 8] However, the introduction of semiconductors with larger spin-orbit coupling (SOC) is necessary for the realization of a fast and efficient spin field-effect transistor.[9; 10; 11] Recently, there has also been growing interest in developing quantum devices that integrate magnetic elements with large-SOC semiconductors.[12; 13; 14; 15; 16; 17] The two most commonly used III-V semiconductors with large SOC are InAs and InSb, whose lattice mismatches with Fe are 5.4% and 11.5%, respectively. This is quite large compared to the 1.4% mismatch between Fe and GaAs. Despite the possible challenges that a large lattice mismatch may present, ferromagnets (FM) on III-V semiconductors with strong SOC must be investigated further given their potential for applications. When epitaxial ferromagnets with a bulk cubic symmetry are grown on III-V surfaces, there are two primary sources of in-plane uniaxial magnetic anisotropy. The first arises from the anisotropic interfacial bonding between the substrate and FM film due to the breaking of the fourfold symmetry at the III-V surface. Independent of the III-V surface reconstruction, the easy axis of this anisotropy is along the [110] direction.[18; 19; 20; 21; 22] The second uniaxial anisotropy is due to a magnetoelastic energy originating from anisotropic in-plane strain.[4; 23] This results in a magnetoelastic energy with a minimum along the [1\(\bar{1}\)0] direction. The end result is a thickness-dependent competition between these two uniaxial terms as well as the FM bulk anisotropy. In this Letter we demonstrate the FM thickness dependence (1.4 to 39.0 nm) of the competition between the two uniaxial anisotropies as well as the cubic anisotropy of bulk Fe, whose easy axes are along the \(<\)100\(>\) directions, in Fe/InAs(001) heterostructures. To probe these anisotropies, room-temperature broadband ferromagnetic resonance (FMR) measurements were performed. In addition to the FMR experiments, reciprocal space mapping (RSM) was performed using x-ray diffraction (XRD) to determine lattice constants and subsequent lattice strain.[24] The Fe films were grown by molecular beam epitaxy (MBE).
The InAs(001) substrates were cleaned for one hour in an ultrahigh vacuum (UHV) chamber with a base pressure \(\sim 1\times 10^{-10}\) mbar using atomic hydrogen. The temperature of the substrate was 475 \({}^{\circ}\)C, as read by a thermocouple close to the sample, and the hydrogen cracker was 1700 \({}^{\circ}\)C. After cleaning, the substrates were loaded into a VG V80 UHV growth chamber with a base pressure \(\sim 5\times 10^{-11}\) mbar, where elemental Fe was deposited on the substrate from an effusion cell. During deposition the substrate was maintained at room temperature except for the 4.6 nm film, where the sample was heated to 75\({}^{\circ}\)C as measured by a thermocouple. After the deposition of Fe was completed, the samples were either capped with approximately 6 nm of Ti in the same VG V80 chamber as the Fe, or _in vacuo_ transferred to an inter-connected UHV chamber where a 10 nm Au cap was grown at room temperature. The lower growth temperatures used in this work were chosen to avoid diffusion of Fe into InAs and to minimize any interfacial reactions. Reflection high energy electron diffraction (RHEED) performed during growth confirmed the film was oriented as expected as well as suggesting a single crystalline phase. Post-growth, grazing incidence x-ray diffraction (GIXRD) was used to determine the precise Fe thickness for each sample. Vibrating sample magnetometry yielded hysteresis loops for each sample at multiple orientations, enabling the determination of the saturation magnetization and coercivity for each sample. Using the Stoner-Wolfarth model, theoretical hysteresis loops were generated with results qualitatively agreeing with the experimental results. Wide-angle XRD was performed in concert with the RSM, with both pointing to the same results, which are discussed in detail below. Broadband FMR measurements were performed at room temperature using a coplanar waveguide (CPW) in a simple transmission geometry with modulation of the applied magnetic field.[25] The applied field was swept through resonance, and the signal was fit to a Lorentzian derivative function. From these fits \(H_{FMR}\), the strength of the applied field at resonance, was extracted. This procedure was repeated as \(\phi_{H}\), the angle of the applied field in the plane of the film, was varied from 0\({}^{\circ}\) to 360\({}^{\circ}\) in increments of 5\({}^{\circ}\). The data for several samples are shown in Fig. 1. They show a combination of a two-fold and four-fold symmetry of varying strengths dependent on the sample thickness. For the thinnest sample, the two-fold component is strongest with respect to the four-fold component. The magnetic easy axis of this sample is along the [110] direction of the crystal. For the intermediate thickness of 4.6 nm, the easy axis switches to the [1\(\bar{1}\)0] direction, and there is an increase in the relative strength of the four-fold component. The 90\({}^{\circ}\) rotation of the easy axis signals the relaxation of the Fe film, and this will be discussed in depth later. In the thicker samples, the magnetic easy axis is along the [1\(\bar{1}\)0] direction, indicating that they have also relaxed. When the film thickness reaches 39.0 nm, the data show a dominant four-fold symmetry that would be expected from a bulk Fe sample. However, a small two-fold component, whose easy axis is along the [1\(\bar{1}\)0] direction, still remains, suggesting that complete relaxation has not yet occurred. 
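Referring back to the field-swept FMR measurements described above: extracting \(H_{FMR}\) amounts to fitting each sweep with a Lorentzian-derivative lineshape. The sketch below is our own minimal illustration of such a fit; the exact parameterization (and any dispersive component) used by the authors is not specified in the text, so the symmetric form here is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_derivative(H, A, H_fmr, dH, offset):
    # Field derivative of a Lorentzian absorption line: one common
    # parameterization of a field-modulated FMR signal
    return -2.0 * A * dH**2 * (H - H_fmr) / ((H - H_fmr)**2 + dH**2)**2 + offset

def extract_resonance_field(H, signal):
    """Fit a single FMR sweep and return (H_FMR, linewidth)."""
    H0_guess = H[np.argmax(np.abs(np.gradient(signal)))]  # crude initial guess
    p0 = [np.ptp(signal), H0_guess, 0.05 * np.ptp(H), np.mean(signal)]
    popt, _ = curve_fit(lorentzian_derivative, H, signal, p0=p0)
    return popt[1], abs(popt[2])
```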
Figure 1: a) The inset shows the geometry of the sample with respect to the applied magnetic field. a)-c) Azimuthal angular dependence of resonance fields with corresponding fits to Eq. 2 shown in the solid curves.

To quantitatively describe the magnetic anisotropy of this series of heterostructures, a model of the free energy density was introduced with Zeeman, two uniaxial, cubic, and shape anisotropy contributions, \[\begin{split}F=&-\mathbf{M}\cdot\mathbf{H}+K_{u}\sin^{2} \phi+K_{u2}\sin^{2}(\phi+\pi/2)\\ &+K_{1}(\alpha_{1}^{2}\alpha_{2}^{2}+\alpha_{2}^{2}\alpha_{3}^{2}+ \alpha_{3}^{2}\alpha_{1}^{2})+2\pi M_{eff}^{2}\cos^{2}\theta,\end{split} \tag{1}\] where \(\mathbf{M}\) is the magnetization vector, \(\mathbf{H}\) is the applied magnetic field vector, \(K_{u}\) is the uniaxial anisotropy constant originating from the Fe/InAs interface, \(K_{u2}\) is the uniaxial anisotropy constant originating from anisotropic strain, \(K_{1}\) is the cubic anisotropy constant, \(\alpha\)'s are the directional cosines of the magnetization, and \(M_{eff}\) is the effective magnetization. The concept of effective magnetization comes from the combination of demagnetizing energy and perpendicular magnetic anisotropy. This free energy density can be used to determine the condition for FMR, first shown by Suhl,[26] with the result being, \[\omega=\frac{\gamma}{M_{s}\sin\theta}\sqrt{F_{\theta\theta}F_{\phi\phi}-\left( F_{\theta\phi}\right)^{2}}, \tag{2}\] where \(F\) is the free energy density, \(\omega\) is the frequency, \(\gamma\) is the gyromagnetic ratio, \(M_{s}\) is the saturation magnetization, \(\phi\) is the azimuthal angle of the magnetization with respect to the [110] direction, and \(\theta\) is the polar angle of the magnetization with respect to the [001] direction. The subscripts denote derivatives with respect to \(\theta\) and \(\phi\). We made the ansatz that \(K_{u}\) is a 3D effective surface energy density, so it must be inversely proportional to film thickness.[20; 21; 23] Thus, \[K_{u}=\frac{K_{u}^{int}}{t}, \tag{3}\] where \(t\) is the thickness of the film and \(K_{u}^{int}\) is the purely surface portion of the uniaxial anisotropy. It is assumed that \(\mathbf{M}\) and \(\mathbf{H}\) are in the plane of the film. A previously reported value of 1714 emu\(/\)cm\({}^{3}\) was used for the magnetization.[27] At resonance, the applied magnetic field is large enough, greater than 800 Oe, to assume that the magnetization is in the same direction as the applied field. This assumption was verified for all films through hysteresis loops, including measurements with the applied field along the magnetic hard axis. Eq. 2 was manipulated to express \(H_{FMR}\) as a function of \(\phi_{H}\) in order to fit the experimental data of \(H_{FMR}\) versus \(\phi_{H}\). From these fits, \(K_{u}\), \(K_{u2}\), \(K_{1}\), and \(M_{eff}\) were extracted. For the thinnest sample, 1.4 nm, the film is pseudomorphic, so \(K_{u2}\) can be set to zero. This determination was made in light of the following results: for thicker samples, greater than 3.2 nm, the easy axis is rotated 90\({}^{\circ}\), and an Fe peak appears in the x-ray diffraction data. A pseudomorphic sample has an Fe peak on top of the InAs peak making it very difficult to separate the two. The remaining free parameters were determined through fitting. With \(K_{u}\) extracted and the thicknesses determined through x-ray reflectivity (XRR), \(K_{u}^{int}\) was determined to be \(2\times 10^{-2}\mathrm{erg/cm^{2}}\).
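As an illustration of how Eqs. (1) and (2) produce the angular dependence fitted in Fig. 1, the following sketch evaluates the Suhl resonance condition numerically in CGS units. It is our own minimal implementation, not the authors' fitting code; in particular, the value \(\gamma\approx 1.76\times 10^{7}\,\mathrm{rad\,s^{-1}\,Oe^{-1}}\), the placement of the cubic \(\langle 100\rangle\) axes at \(45^{\circ}\) from [110], and the evaluation at \(\theta=\pi/2\), \(\phi=\phi_{H}\) are our assumptions, chosen to be consistent with the text.

```python
import numpy as np
from scipy.optimize import brentq

Ms = 1714.0     # emu/cm^3, value used in the text
gamma = 1.76e7  # rad s^-1 Oe^-1, assuming g ~ 2

def free_energy(theta, phi, H, phi_H, Ku, Ku2, K1, Meff):
    # Eq. (1); phi is measured from [110], so the cubic <100> axes lie at phi +/- 45 deg
    a1 = np.sin(theta) * np.cos(phi + np.pi / 4)
    a2 = np.sin(theta) * np.sin(phi + np.pi / 4)
    a3 = np.cos(theta)
    return (-Ms * H * np.sin(theta) * np.cos(phi - phi_H)
            + Ku * np.sin(phi) ** 2 + Ku2 * np.cos(phi) ** 2
            + K1 * (a1**2 * a2**2 + a2**2 * a3**2 + a3**2 * a1**2)
            + 2.0 * np.pi * Meff**2 * np.cos(theta) ** 2)

def resonance_frequency(H, phi_H, params, eps=1e-4):
    # Suhl condition, Eq. (2), with numerical second derivatives of F,
    # evaluated with M parallel to the in-plane field (theta = pi/2, phi = phi_H)
    th, ph = np.pi / 2, phi_H
    f = lambda t, p: free_energy(t, p, H, phi_H, *params)
    Ftt = (f(th + eps, ph) - 2 * f(th, ph) + f(th - eps, ph)) / eps**2
    Fpp = (f(th, ph + eps) - 2 * f(th, ph) + f(th, ph - eps)) / eps**2
    Ftp = (f(th + eps, ph + eps) - f(th + eps, ph - eps)
           - f(th - eps, ph + eps) + f(th - eps, ph - eps)) / (4 * eps**2)
    return gamma / Ms * np.sqrt(max(Ftt * Fpp - Ftp**2, 0.0))

def h_fmr(freq_hz, phi_H, params):
    # Field at which Eq. (2) matches the microwave frequency; the search
    # bracket (in Oe) may need adjusting for a given parameter set
    target = lambda H: resonance_frequency(H, phi_H, params) - 2 * np.pi * freq_hz
    return brentq(target, 10.0, 2.0e4)

# Example call; the anisotropy constants (erg/cm^3) are placeholders, not fitted values
params = (1.0e5, 0.0, 5.2e5, 1700.0)  # (Ku, Ku2, K1, Meff)
print(h_fmr(10e9, np.deg2rad(30.0), params))
```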
Previous works on Fe/GaAs heterostructures have seen this surface energy term \(\sim 10^{-1}\mathrm{erg/cm^{2}}\).[20; 21] With all thicknesses known and \(K_{u}^{int}\) extracted, \(K_{u}\) was constrained for subsequent fits. The data from the 3.2 nm film are more ambiguous than those from the 1.4 nm film. The magnetic easy axis of the 3.2 nm sample is along the [110] direction of the crystal, just as it is in the 1.4 nm sample. In the x-ray diffraction data, seen in panels c) and d) of Fig. 4, there is a shoulder on the InAs peak which is most likely the emergence of an Fe peak. However, \(K_{u2}\) must still be small compared to \(K_{u}\) as there is not a 90\({}^{\circ}\) rotation of the easy axis. Thus, \(K_{u2}\) was again set to zero for its corresponding fit. None of the other samples were pseudomorphic and the x-ray data were readily analysable, so \(K_{u2}\) was not constrained for their fits. The values of \(K_{u}\), \(K_{1}\), and \(K_{u2}\) resulting from this procedure are shown in Fig. 2. By the above ansatz, the magnitude of \(K_{u}\) is inversely proportional to thickness as defined in Eq. 3. The magnitude of \(K_{1}\) increases significantly with thickness, saturating for thicknesses above 7.1 nm to a value of \(5.2\times 10^{5}\mathrm{erg/cm^{3}}\). This is slightly larger than the literature value for bulk Fe at room temperature, \(4.8\times 10^{5}\mathrm{erg/cm^{3}}\).[27] \(K_{u2}\) is zero for the two thinnest samples and then also decreases with thickness. The appearance of a non-zero \(K_{u2}\) signals that the Fe has relaxed, albeit not completely, resulting in an in-plane shear strain of the Fe lattice.

Figure 2: The inset shows the structure of the InAs(001) including the direction of the As dimer rows. Anisotropy constants are plotted as a function of Fe thickness.

From the definition of magnetoelastic energy,[28; 29] \(K_{u2}\) must take the following form, \[K_{u2}=B_{2}\epsilon_{6}, \tag{4}\] where \(B_{2}\) is the magnetoelastic coefficient of Fe associated with shear strain, and \(\epsilon_{6}\) is the in-plane shear strain defined as \(\epsilon_{[110]}-\epsilon_{[1\bar{1}0]}\). The extracted values of \(\epsilon_{6}\), shown in Fig. 3, are consistent with values from previous Fe/GaAs heterostructures, which are on the order of 0.1%.[4; 5] For the set of samples studied, the critical thickness at which shear strain is introduced to the system occurs between 3.2 and 4.6 nm. After the relaxation of the film and subsequent onset of shear strain, the shear strain decreases with thickness, but does not reach zero. This indicates that the film is not completely relaxed even at a thickness of 39.0 nm. We attempted to determine \(\epsilon_{6}\) directly through XRD and compare it to the value extracted from the FMR data. To do this, the lattice constants of each sample were determined through reciprocal space mapping, with several results shown in Fig. 4. Data were taken for two different geometrical configurations, offset by a 90\({}^{\circ}\) rotation of the sample about the [001] direction. This was done so that maps around both the (202) and (022) substrate peaks could be produced. From the RSM's along with the FMR results, it is determined that the thinnest sample is pseudomorphic to the substrate. The data from the second thinnest sample have a shoulder on the InAs peak, most likely corresponding to an Fe peak. Due to the low signal-to-noise ratio, lattice constants could not be extracted.
Despite this, the FMR data for this sample point to a small and possibly negligible shear strain.

Figure 3: The inset shows a parallelepiped and its in-plane diagonals. Shear strain is plotted as a function of Fe thickness with values determined from fitting FMR data to Eq. 2.

The appearance of a definite offset Fe peak in the thicker samples shows that the Fe layer has relaxed. To determine the amount of relaxation, the RSM's, oriented by the substrate peak, were fit to Gaussian curves and the effective Miller indices were extracted. With the effective Miller indices of the (101) and (011) Fe peaks known, the lattice constants of each orientation were calculated. The map centered on the (202) peak gives the \(a\) and \(c\) lattice constants while the one centered on the (022) peak gives \(b\) and \(c\). Lattice parameter values are shown in Table 1. The \(c\) parameter determined by wide angle XRD in a coupled geometry was consistent with the results of the RSM's, confirming the validity of the procedure.

Figure 4: a) c) e) g) Reciprocal space maps around the (202) peak of InAs(001). b) d) f) h) Reciprocal space maps around the (022) peak of InAs(001). The Au peak seen in panels e)-h) comes from the Au capping layer. This peak is not seen in a)-d) as the 1.4 and 3.2 nm samples were capped with Ti. The units are Miller indices of InAs, with a\({}_{0}\) = 6.06 Å. The red line in f) corresponds to the projection of the RSM in Fig. 5.

Like the FMR data, the RSM's show that films thicker than 3.2 nm are relaxed. The data from the 3.2 nm sample are ambiguous as to whether the film is pseudomorphic or not. However, even if the sample has relaxed, little to no shear strain is introduced to the lattice, as inferred from the FMR data. For films 4.6 nm thick and greater, the lattice develops a tetragonal distortion, with equal \(a\) and \(b\) lattice parameters larger than the out-of-plane \(c\) parameter. As the thickness of the film increases, the in-plane lattice constants relax towards the bulk value of 2.866 Å, while maintaining the tetragonal distortion. It is inferred from the FMR data that the lattice develops decreasing amounts of in-plane shear strain with increasing thickness, as expected. This strain distorts the cubic lattice into a parallelepiped. All angles defining the parallelepiped should be extremely close to but not exactly 90\({}^{\circ}\), resulting in a base that is nearly a square, but is in actuality a parallelogram whose diagonals differ in length.

Figure 5: An example of Gaussian fitting, used to extract the effective Miller indices, of one of the Fe peak projections of the 4.6 nm sample shown as the red line in panel f).

\begin{table}
\begin{tabular}{l l l l l}
Fe Thickness (nm) & a (Å) & b (Å) & c (Å) & a\({}_{\text{InAs}}\)/2 (Å) \\ \hline
4.6 & 2.908 & 2.908 & 2.868 & 3.03 \\
7.1 & 2.906 & 2.907 & 2.840 & 3.03 \\
10.0 & 2.902 & 2.901 & 2.836 & 3.03 \\
39.0 & 2.892 & 2.892 & 2.839 & 3.03 \\
\end{tabular}
\end{table}
Table 1: Summary of the lattice parameters determined from RSM’s shown in Fig. 4.

The shear strain, \(\epsilon_{6}\), which is defined as the difference of the two diagonal strains, will be nonzero. To determine the in-plane shear strain, both in-plane lattice parameters and the resulting diagonals must be determined from a single XRD scan, before rotating the sample 90\({}^{\circ}\). Unfortunately, from the XRD geometry used to acquire the data, this was not possible.
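As a small illustration of the peak-extraction step described above (Gaussian fits to RSM projections such as the one in Fig. 5), a sketch along the following lines could be used. It assumes the projection is supplied as arrays `q` (in InAs reciprocal-lattice units) and `intensity`; converting the fitted center to a real-space lattice constant then follows the indexing described in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(q, amp, center, sigma, bg):
    return amp * np.exp(-0.5 * ((q - center) / sigma) ** 2) + bg

def effective_miller_index(q, intensity):
    """Fit one RSM projection (cf. Fig. 5) and return the Fe peak position
    in InAs reciprocal-lattice units (a0 = 6.06 Angstrom)."""
    p0 = [intensity.max() - intensity.min(), q[np.argmax(intensity)],
          0.01 * np.ptp(q), intensity.min()]
    popt, _ = curve_fit(gaussian, q, intensity, p0=p0)
    return popt[1]
```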
A determination of the shear strain using a constant volume assumption along with the lattice parameters shown in Table 1 was attempted. However, the resulting in-plane shear strains are orders of magnitude larger than those expected from the FMR data and thus non-physical. It is believed that the constant volume assumption is not valid in our samples. This is not surprising as Poisson's ratio for Fe is 0.29, and a material that perfectly adheres to the constant volume assumption would have a Poisson's ratio of exactly 0.5.[30] It is interesting to note that the 39.0 nm sample is not completely relaxed, with \(a\) and \(b\) parameters larger than bulk and the \(c\) parameter less than bulk. Although this is surprising, the FMR data also support the conclusion that the 39.0 nm sample is not completely relaxed. Fig. 3 clearly shows a small nonzero shear strain. Although direct evidence of shear strain through in-plane x-ray diffraction was not investigated, the agreement between current FMR and x-ray data is strong evidence of its existence. We have shown through FMR and XRD that the magnetic anisotropy of Fe/InAs(001) heterostructures is heavily dependent on sample thickness. Our investigations show that anisotropic relaxation of the Fe thin films results in shear strain, producing an additional term of magnetoelastic origin in the free energy density. This magnetoelastic term was confirmed through a rotation of the uniaxial easy axis as observed through FMR measurements. Similar phenomena have been reported in Fe/GaAs(001) structures. This suggests that, although Fe/InAs has a significantly larger lattice mismatch than Fe/GaAs, the larger mismatch does not appear to have a detectable detrimental effect on interfacial magnetic properties. We would expect to see similar results for other Fe/III-V semiconductor heterostructures as well, which is encouraging for the development of applications relying on the interplay between ferromagnetism and high-SOC semiconductors. ###### Acknowledgements. We acknowledge Bill Peria and Brett Heischmidt for their helpful input and discussions throughout this research. This work was supported by the Department of Energy under award no. DE-SC0019274. Parts of this work were carried out in the Characterization Facility, University of Minnesota, which receives partial support from NSF through the MRSEC program.
2303.01875
Decoding and Visualising Intended Emotion in an Expressive Piano Performance
Expert musicians can mould a musical piece to convey specific emotions that they intend to communicate. In this paper, we place a mid-level features based music emotion model in this performer-to-listener communication scenario, and demonstrate via a small visualisation music emotion decoding in real time. We also extend the existing set of mid-level features using analogues of perceptual speed and perceived dynamics.
Shreyan Chowdhury, Gerhard Widmer
2023-03-03T12:10:54Z
http://arxiv.org/abs/2303.01875v1
# Decoding and Visualising Intended Emotion in an Expressive Piano Performance ###### Abstract Expert musicians can mould a musical piece to convey specific emotions that they intend to communicate. In this paper, we place a _mid-level_ features based music emotion model in this performer-to-listener communication scenario, and demonstrate via a small visualisation music emotion decoding in real time. We also extend the existing set of mid-level features using analogues of _perceptual speed_ and _perceived dynamics_. Shreyan Chowdhury\({}^{1}\) Gerhard Widmer\({}^{1,2}\)\({}^{1}\)Institute of Computational Perception, Johannes Kepler University Linz, Austria \({}^{2}\)LIT AI Lab, Linz Institute of Technology, Austria [email protected] ## 1 Introduction Music emotion recognition aims at identifying and recognising emotional content in music using computer systems and typically considers _perceived_ emotion, which is the emotion that a human listener may recognise when listening to a song (that may be different from what the composer attempted to express and what the listener feels in response to it). However, in scenarios where the _intended_ emotion (emotion that the composer or performer aims to transmit) is available (for instance in the experiments by [1]), one could view emotion recognition models in a different light. Instead of comparing model predictions with listener ratings of perceived emotion, they could be compared directly to the intended emotion. In the present paper, we are interested in placing a music emotion recognition model based on _mid-level features_ in a similar scenario - decoding music emotion in real-time vis-a-vis a performer's intended emotions. (We have shown in previous research [2] that mid-level perceptual features, such as rhythmic complexity, or perceived major/minor harmonic character, are effective in modelling music emotion.) Also, in addition to the original seven mid-level features from [3] and [2], we investigate approximations of two additional mid-level features - _perceptual speed_ and _perceived dynamics_. In Section 2, we describe the original seven mid-level features and the two additional ones. In Section 3, we compare emotion modelling using the original (7)-mid-level and the augmented (9)-mid-level feature sets and find that the two additional features improve emotion modelling substantially. Finally, in Section 4, we demonstrate how the new (9)-mid-level feature model can effectively decode, in real time, the specific emotions that a musician (multiple Grammy Award winner Jacob Collier) is explicitly trying to communicate in a spontaneous "emotion communication experiment" that he shared with the world on YouTube [4]. ## 2 The Features In previous research [2], we have shown that training an emotion model based on mid-level perceptual features improves the model's accuracy and robustness, compared to models based on traditional low-level features, or to models that predict emotion in an end-to-end fashion directly from spectrograms. In this paper, we use the mid-level features based approach, however we investigate two additional features as well, which were absent in previous mid-level feature sets - _perceptual speed_ and _perceived dynamics_. ### Mid-level Features Mid-level features are musical qualities that are supposed to be intuitively recognisable by most listeners, without requiring music-theoretic knowledge. 
In [5] and [2], we used mid-level feature predictors trained on human-annotated data from [3] so that our emotion recognition model using these would conform to how humans seem to perceive mid-level qualities, and we showed that emotion recognition based on these features results both in good recognition accuracy, and in musically interpretable models. The set of seven mid-level features used in these works are: _melodiousness_, _articulation_, _rhythm complexity_, _rhythm stability_, _dissonance_, _tonal stability_, and _minorness_ (or _mode_). ### Perceptual Speed and Dynamics While music emotion is modelled sufficiently well using the original seven mid-level features defined in [3], two important features for emotional expression are conspicuously missing from this set: _perceptual speed_ and _dynamics_[6, 7]. Previous studies [7] have indicated that musical cues such as attack rate and playing dynamics play a significant role in musical emotion expression. Our hypothesis is that augmenting the mid-level feature space with features analogous to perceptual speed and dynamics should improve emotion modelling significantly. Lacking appropriate human-annotated datasets, we model these features in a more direct way based on our musical intuition rather than on empirical listener perception data. #### 2.2.1 Perceptual Speed Perceptual speed indicates the general speed of the music disregarding any deeper analysis such as the tempo, and is easy for both musicians and non-musicians to relate to [8]. While tempo is typically computed as the occurrence rate of the most prominent metrical level (beat), perceptual speed is influenced by lower level or higher level metrical levels as well. We approximate perceptual speed using _onset density_, following observations from [9, 10]. Onset density (or event density) refers to the number of musical onsets (the beginnings of musical notes or other sonic events) per unit time. While several signal processing based onset density detectors are available in practice (for instance, the SuperFlux algorithm [11] implemented in the madmom library [12]), we take a different approach for our case. Since here we are aiming to only consider solo piano performances, we use an automatic polyphonic piano note transcription algorithm (RNNPianoNoteProcessor from madmom [12]) to transcribe the audio and extract the piano notes, from which we then infer the onsets (the starting positions of the transcribed notes are taken as the onsets). We observed that this procedure gives us a better estimate of onsets for solo piano music across different recordings. #### 2.2.2 Perceived Dynamics Perceived dynamics indicates the played dynamic level disregarding listening volume (presumably related to the estimated effort of the player) [8]. In our case of solo piano music, we find that the RMS (Root-Mean-Squared) amplitude of the audio signal is a good approximation to performed dynamics. We use Librosa's RMS function [13] to compute this feature. For an input audio signal \(x\), the RMS amplitude is given as \(\text{RMS}_{k}=\sqrt{\text{mean}(w_{\tau}(x)_{k}^{2})}\), \(k=1\dots N\), where \(w_{\tau}(\cdot)_{k}\) is a rectangular windowing function which partitions the input sequence into frames and returns the \(k\)-th frame of length \(\tau\), and \(N\) is the total number of frames. 
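To make the two approximations concrete, the sketch below shows how they could be computed for a solo piano recording. It is a minimal illustration rather than our exact pipeline: the RMS step uses librosa as described above, the file name is a placeholder, and the piano-transcription step (madmom's RNNPianoNoteProcessor followed by note decoding) is abstracted into a list of onset times supplied by the caller.

```python
import numpy as np
import librosa

def perceived_dynamics(y, frame_length=2048, hop_length=512):
    # Mean RMS amplitude as a proxy for performed dynamics (Sec. 2.2.2)
    rms = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)[0]
    return float(np.mean(rms))

def perceptual_speed(onset_times, duration):
    # Onset density: number of note onsets per second (Sec. 2.2.1).
    # onset_times is assumed to be a 1-D array of note-start times (in seconds)
    # obtained from a piano transcription front end.
    return len(onset_times) / duration

y, sr = librosa.load("performance.wav", sr=22050, mono=True)
print(perceived_dynamics(y))
```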
## 3 Predicting Emotion In a previous publication [2], we reported the predictive performance for music emotion using different feature sets (including the original (7)-mid-level features set) on a dataset comprising of recordings of Bach's Well-Tempered Clavier (WTC) Book 1. Here, again, we use the WTC dataset1 to evaluate the predictive performance of the original and augmented mid-level feature sets. We perform a regression-based analysis by fitting the different feature sets on the arousal/valence ratings. Footnote 1: The WTC dataset contains recordings of 48 pieces of Bach’s Well-Tempered Clavier Book 1 performed by 6 different pianists and multiple listener ratings of arousal and valence for each recording. First, we predict the original (7)-mid-level features for the 288 recordings using a pre-trained mid-level feature model (trained on the Mid-level Features Dataset). The architecture of this model is a modified receptive-field regularised ResNet (see Koutini et al. [14]) domain-adapted for solo piano audio as described in Chowdhury and Widmer [15]. Since in Section 4, our aim is going to be to predict emotions continuously using a sliding window, we want the input window size to be as small as possible without compromising model performance. Through experiments detailed in Chowdhury [16], we find that an input window length of 5 seconds is optimal for our purpose. We then compute the mean onset densities and mean RMS amplitudes for each of the recordings using the approximations mentioned in Section 2.2, giving us the (9)-mid-level feature set for the Bach dataset. A multiple linear regression model with nine inputs and two outputs (for arousal and valence) is then fitted on the data, and we report the adjusted \(R^{2}\)-score. This is compared to the case where only the original seven (7)-mid-level features are used, and where only the two newly added features are used. The results are tabulated in Table 1. Further, we examine the feature importances for the prediction of arousal and valence using T-statistics (or T-values), plotted in Figure 1. The T-statistic of a feature in a regression model is defined as its estimated weight scaled with its standard error. We note that for both arousal and valence, using the combined (9)-mid-level feature set gives the best result. Our findings point to a future direction of work on learning perceived speed and perceived dynamics from actual human ratings as a means to improve music emotion recognition algorithms grounded on human perception. ## 4 The Visualisation In a YouTube video [4], Jacob Collier plays the piece "Danny Boy" on a piano and modifies it according to different emotions shown to him while he is playing. Using our mid-level feature-based emotion model, we continuously (hop length = 1 second; window length = 5 seconds) predict emotions from the audio extracted from a part of the video (0:00 to 2:45 minutes, dealing with what Jacob calls "basic" or "tier-1 emotions") and map them onto Russell's circumplex [17]2. Footnote 2: Note that this is a demonstration of our model, and not a detailed analysis of model performance. The predicted emotion values are visualised together with the original video. To achieve smooth animation, exponential interpolation is utilised to obtain values between actual predicted values at 1-second intervals. Our visualisation video is accessible at [https://youtu.be/1Y8wDZfuakU](https://youtu.be/1Y8wDZfuakU). 
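The dynamic-prediction loop just described can be summarised in a few lines. The sketch below is our own simplification: the trained mid-level-to-emotion model is hidden behind a hypothetical `predict_av` callable (assumed to map a 5-second audio window to an (arousal, valence) pair), and the "exponential interpolation" used for smooth animation is read here as a simple exponential approach towards each new prediction, which is an assumption on our part.

```python
import numpy as np

def dynamic_predictions(y, sr, predict_av, win_s=5.0, hop_s=1.0):
    # Predict (arousal, valence) on 5-second windows with a 1-second hop
    win, hop = int(win_s * sr), int(hop_s * sr)
    return [predict_av(y[s:s + win]) for s in range(0, len(y) - win + 1, hop)]

def smooth_trace(av_points, frames_per_step=25, rate=0.2):
    # Exponentially approach each new prediction so the visualised point
    # moves smoothly between the 1-second predictions
    current = np.array(av_points[0], dtype=float)
    trace = []
    for target in av_points[1:]:
        for _ in range(frames_per_step):
            current = current + rate * (np.asarray(target, dtype=float) - current)
            trace.append(current.copy())
    return np.array(trace)
```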
\begin{table} \begin{tabular}{l|r|r} \hline \hline & \multicolumn{2}{c}{Adjusted R\({}^{2}\)} \\ Feature Set & Arousal & Valence \\ \hline The (7)-mid-level feature set & 0.68 & 0.63 \\ Onset density and RMS amplitude & 0.74 & 0.39 \\ The (9)-mid-level feature set & **0.79** & **0.65** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of the different feature sets on modelling the arousal valence of the Bach WTC Book 1 dataset. ## 5 Results and Discussion The complete trace of the prediction point is demonstrated in Figure 2 while Figure 4 exhibits three frames from the video overlaid with the corresponding predicted emotions. We can see that the predicted emotions match closely with the intended emotions ("Jacob's Emotions"). We also obtain static emotions - the audio sections corresponding to each of the seven emotions are extracted and used as individual audio inputs. In this case, we use our standard input length (15-second) mid-level feature model, with the input audio being looped if it is less than 15 seconds, and the predictions for successive windows with a 5-second hop averaged if it is more than 15 seconds. These static emotion predictions are shown in Figure 3, where the predicted points are annotated with the intended emotion for each. This visualisation experiment provides a proof-of-concept for further investigations into the decoding of intended emotions using computer systems. Interestingly, we observe that a simple linear regression model with only a handful of mid-level features as inputs (7 original plus 2 new), trained on a small dataset of 288 Bach WTC examples, successfully predicts intended emotions for a markedly different set of performances in a satisfactory manner. This points to the robustness of the (9)-mid-level features and of our (7)-mid-level feature model, and to the impressive capacity of these features to reflect encoded music emotion. In conclusion, this paper provides promising evidence that mid-level features are effective at capturing emotions conveyed by music performances and encourages further exploration to fully understand the potential of mid-level features as a means of modelling more complex emotions. Figure 1: Feature importance values for the augmented mid-level feature set, analysed using the T-statistic. For each feature in a linear regression model, the T-statistic is defined as its estimated weight scaled with its standard error. Figure 3: Static emotion prediction. The predicted emotions are marked with dark text, and the emotion words of Russell’s circumplex are marked with light text. Figure 2: Full trace of dynamic emotion prediction. Jacob’s intended emotions (according to the notated emotion in the original video) are depicted with different colours, and the passage of time is depicted with the shade – from lightest to darkest. ## 6 Acknowledgments This research was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreements No 670035 ("Con Espressione") and 101019375 ("Whither Music?").
2306.15006
DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome
Decoding the linguistic intricacies of the genome is a crucial problem in biology, and pre-trained foundational models such as DNABERT and Nucleotide Transformer have made significant strides in this area. Existing works have largely hinged on k-mer, fixed-length permutations of A, T, C, and G, as the token of the genome language due to its simplicity. However, we argue that the computation and sample inefficiencies introduced by k-mer tokenization are primary obstacles in developing large genome foundational models. We provide conceptual and empirical insights into genome tokenization, building on which we propose to replace k-mer tokenization with Byte Pair Encoding (BPE), a statistics-based data compression algorithm that constructs tokens by iteratively merging the most frequent co-occurring genome segment in the corpus. We demonstrate that BPE not only overcomes the limitations of k-mer tokenization but also benefits from the computational efficiency of non-overlapping tokenization. Based on these insights, we introduce DNABERT-2, a refined genome foundation model that adapts an efficient tokenizer and employs multiple strategies to overcome input length constraints, reduce time and memory expenditure, and enhance model capability. Furthermore, we identify the absence of a comprehensive and standardized benchmark for genome understanding as another significant impediment to fair comparative analysis. In response, we propose the Genome Understanding Evaluation (GUE), a comprehensive multi-species genome classification dataset that amalgamates $36$ distinct datasets across $9$ tasks, with input lengths ranging from $70$ to $10000$. Through comprehensive experiments on the GUE benchmark, we demonstrate that DNABERT-2 achieves comparable performance to the state-of-the-art model with $21 \times$ fewer parameters and approximately $92 \times$ less GPU time in pre-training.
Zhihan Zhou, Yanrong Ji, Weijian Li, Pratik Dutta, Ramana Davuluri, Han Liu
2023-06-26T18:43:46Z
http://arxiv.org/abs/2306.15006v2
# DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome ###### Abstract Decoding the linguistic intricacies of the genome is a crucial problem in biology, and pre-trained foundational models such as DNABERT and Nucleotide Transformer have made significant strides in this area. Existing works have largely hinged on _k-mer_, fixed-length permutations of A, T, C, and G, as the _token_ of the genome language due to its simplicity. However, we argue that the computation and sample inefficiencies introduced by k-mer tokenization are primary obstacles in developing large genome foundational models. We provide conceptual and empirical insights into genome tokenization, building on which we propose to replace k-mer tokenization with Byte Pair Encoding (BPE), a statistics-based data compression algorithm that constructs _tokens_ by iteratively merging the most frequent co-occurring genome segment in the corpus. We demonstrate that BPE not only overcomes the limitations of k-mer tokenization but also benefits from the computational efficiency of non-overlapping tokenization. Based on these insights, we introduce DNABERT-2, a refined genome foundation model that adapts an efficient tokenizer and employs multiple strategies to overcome input length constraints, reduce time and memory expenditure, and enhance model capability. Furthermore, we identify the absence of a comprehensive and standardized benchmark for genome understanding as another significant impediment to fair comparative analysis. In response, we propose the Genome Understanding Evaluation (GUE), a comprehensive multi-species genome classification dataset that amalgamates \(28\) distinct datasets across \(7\) tasks, with input lengths ranging from \(70\) to \(1000\). Through comprehensive experiments on the GUE benchmark, we demonstrate that DNABERT-2 achieves comparable performance to the state-of-the-art model with \(21\times\) fewer parameters and approximately \(56\times\) less GPU time 2 in pre-training. Compared to DNABERT, while being \(3\times\) more efficient, DNABERT-2 outperforms it on \(23\) out of \(28\) datasets, with an average improvement of \(6\) absolute scores on GUE. The code, data, and pre-trained model are publicly available at [https://github.com/Zhihan1996/DNABERT_2](https://github.com/Zhihan1996/DNABERT_2). Footnote 2: About 14 days on 8 NVIDIA RTX 2080Ti GPUs V.S. 17 days on 128 NVIDIA A100 GPUs. Estimated with the **Method 2: GPU Time** introduced by OpenAI in [https://openai.com/research/ai-and-compute](https://openai.com/research/ai-and-compute). ## 1 Introduction Transformer-based foundation models (Bommasani et al., 2022; Kenton and Toutanova, 2019; OpenAI, 2023) have witnessed significant progress in recent years, particularly exemplified by the advent of groundbreaking language models like ChatGPT (OpenAI, 2023; Ouyang et al., 2022). In parallel, the significance of foundation models has also been increasingly appreciated in the genomics field, as they represent the understanding of genome sequences via numerical embeddings that are directly applicable to various genome analysis tasks. These models can capture complex relationships and dependencies in DNA sequences, opening new avenues for understanding transcriptional regulation (Li et al., 2023), non-coding genetic variants associated with human diseases and traits (Rozowsky et al., 2023), and the functional effects of regulatory elements (Smith et al., 2023). 
Recent advancements in genome language modeling have demonstrated their superiority in a range of downstream applications, including promoter prediction (Le et al., 2022; Zhang et al., 2022), gene expression prediction (Avsec et al., 2021), DNA methylation prediction (Jin et al., 2022], chromatin state analysis [Lee et al., 2022], promoter-enhancer interaction prediction [Chen et al., 2022; Ni et al., 2022], TF-DNA binding prediction [Wang et al., 2022], variant effect prediction [Rozowsky et al., 2023], gene network prediction [Theodoris et al., 2023] and more. These models provide researchers with powerful tools to understand the functional importance of different genomics elements and unravel key biological processes and mechanisms. In this context, we previously developed DNABERT [Ji et al., 2021], an initial foundation model (FM), to unravel the human genome from a language perspective. Despite being widely applied in the community, several technical limitations still present at the time with the original DNABERT implementation, limiting its full potential. First, although proven to be generalizable to other organisms, the pretraining was solely done on the human reference genome, omitting the sequence conservation and diversity across species. Second, k-mer tokenization resulted in information leakage and overall poor computational efficiency during pre-training, which hampers its scalability. Lastly, the simplistic DNABERT-XL solution--intended to bypass the restriction of 512 input sequences imposed by the learned positional embedding [Kenton and Toutanova, 2019]--fell short in handling long input sequences, both in efficiency and effectiveness. These limitations underlined the need for further advancements in the domain of DNA language models. Recently, Lopez et al. [2023] introduced Nucleotide Transformers (NT), a series of genome foundation models scaling from \(500M\) to \(2500M\) parameters. NT alleviated the first two limitations of DNABERT by pre-training on a large collection of genomes from 850 species and replacing overlapping k-mer tokenization with a non-overlapping version, substantially reducing tokenized sequence length. Despite this, a hard input length limitation still exist, while, as we will discuss in Sec. 2, non-overlapping k-mer tokenization also suffered from poor sample efficiency as it complicates the model's task of aligning significantly distinct representations of near-identical inputs. In view of the aforementioned limitations, we introduce DNABERT-2, a multi-species genome foundation model that replaces k-mer tokenization with Byte Pair Encoding (BPE) [Sennrich et al., 2016], a data compression algorithm that has been widely used by large language models. We show that BPE effectively addresses the known issues of k-mer tokenization while maintaining the computational efficiency of non-overlapping tokenization. Moreover, DNABERT-2 overcomes the limitation of DNABERT by replacing learned positional embeddings with Attention with Linear Biases (ALiBi) [Press et al., 2021] to get rid of the input length limitation, incorporating Flash Attention [Dao et al., 2022] to increase computational efficiency, and adjusting model architecture to increase model capability. 
As a result of the efficient tokenizer and advanced model architecture, DNABERT-2 achieves comparable performance to the state-of-the-art model with approximately \(56\times\) less computational cost and \(21\times\) fewer parameters, identifying its computation- and sample- efficiency and enabling efficient fine-tuning on most consumer GPUs. Meanwhile, despite progress in genome foundational models, the absence of carefully curated benchmarks has posed a significant challenge. Owing to the unstandardized pre-processing pipeline of genome sequences, it is unjust to directly compare model performances with results reported in previous papers, even when the data originate from the same source. Moreover, many genome understanding evaluation datasets used in existing works [Lopez et al., 2023] are either too trivial or too challenging, leading to similar scores for most models and failing to accurately reflect different models' capabilities. The scarcity of high-quality benchmark datasets hampers evaluating and comparing different models and further hinders the development of novel techniques. To this end, we introduce Genome Understanding Evaluation (GUE), a standardized and comprehensive multi-species benchmark containing \(28\) datasets across \(7\) important genome analysis tasks on genomes of \(4\) species with input lengths ranging from \(70\) to \(1000\). All the datasets are elaborately calibrated with a series of strategies to ensure they are suitable for reflecting the capability level of existing genome foundation models. Our main contributions can be therefore summarized as follows: 1) We identify key obstacles in genome tokenization and provide deep insights, presenting a simple yet effective solution that balances the efficiency and effectiveness of genome foundation models; 2) We introduce DNABERT-2, an efficient pre-trained foundation model for multi-species genome that delivers performance on par with the state-of-the-art model while being \(21\times\) smaller and utilizes approximately \(56\times\) less GPU time; 3) We introduce Genome Understanding Evaluation (GUE), a standardized, comprehensive, and well-calibrated multi-species genome classification benchmark including \(7\) tasks and \(28\) datasets to facilitate research in genome foundation model. ## 2 Background Tokenization serves as a critical initial step in language modeling, significantly impacting the efficiency and effectiveness of the model. DNA sequences consist of \(4\) unique nucleotide bases: A, T, C, and G. A majority of genome language models [Ji et al., 2021; Lopez et al., 2023] utilize the k-mer tokenization technique, in which each contiguous \(k\)-length genome segment is considered as a token. During tokenization, a sliding window with window size \(k\) and stride \(t\) is employed to convert the original genome sequence into a series of k-mers. Here, the stride \(t\) is either set as \(1\) or \(k\), while the first one represents the overlapping version of k-mer tokenization and the other one represents the non-overlapping version. Figure 1 presents examples of overlapping (left) and non-overlapping (right) k-mer tokenizations. Despite its wide application, we argue that both versions of the k-mer tokenization are less optimal. Overlapping k-mers tokenization ensures adjacent tokens always overlap by \(k-1\) characters, resulting in significant information leakage in masked language modeling. 
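To make the two k-mer schemes concrete, here is a minimal sketch (an illustration only, not the DNABERT or Nucleotide Transformer code; the sequence and \(k\) are arbitrary):

```python
# Minimal sketch of overlapping vs. non-overlapping k-mer tokenization
# (illustration only; the sequence and k below are arbitrary).

def kmer_tokenize(seq, k, stride):
    """Slide a window of size k over seq with the given stride t."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

seq = "ATGGCGTACCT"
k = 3

print(kmer_tokenize(seq, k, stride=1))  # overlapping (t = 1):
# ['ATG', 'TGG', 'GGC', 'GCG', 'CGT', 'GTA', 'TAC', 'ACC', 'CCT']
print(kmer_tokenize(seq, k, stride=k))  # non-overlapping (t = k):
# ['ATG', 'GCG', 'TAC']

# Shifting the context window by a single nucleotide barely changes the
# overlapping tokenization but completely changes the non-overlapping one:
print(kmer_tokenize(seq[1:], k, stride=k))
# ['TGG', 'CGT', 'ACC']
```

The last call previews the sample-inefficiency problem of the non-overlapping scheme: a one-base shift of the context window produces an almost disjoint token sequence. The information-leakage problem of the overlapping scheme is analyzed next.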
As depicted in Figure 1, a masked token is entirely leaked when adjacent tokens from both sides are not masked, and it is partially leaked when adjacent tokens from only one side are present. Generally, in the overlapping \(k\)-mer tokenization setting, let \(l\) and \(r\) denote the distances between a masked token [M] and its closest unmasked adjacent token on the left and right sides, the number of possible options of [M] is \(4^{\min(l,r,k,\max(0,l+r-k))}\). In other words, to prevent the entire leakage of a masked token, at least \(k-1\) tokens on its left and right sides in total must be masked, which explains why Ji et al. (2021) opt to mask a continuous span of \(k\) tokens. Furthermore, to guarantee no leakage of a masked token, at least \(k\) tokens on both sides must be masked. Nevertheless, information leakage is still inevitable for the leftmost and rightmost \(k-1\) masked tokens. Ideally, in masked language modeling, a model is required to select the best option from the _entire_ vocabulary, enabling it to differentiate and evaluate among a large number of options. However, if the search space is undesirably reduced due to information leakage, the model only needs to differentiate between a limited number of options. Consequently, this results in poor sample efficiency, as the model may not be sufficiently challenged to learn the underlying patterns in the data. Also, the tokenized sequence for an input of length \(L\) consists of \(L-k+1\) tokens, each with a length of \(k\). This results in a tokenized sequence with considerable redundancy and a length nearly equivalent to the original sequence, leading to low computation efficiency considering the quadratic computation complexity of Transformer-based (Vaswani et al., 2017) models. This becomes particularly problematic when attempting to scale up the model. Therefore, Lopez et al. (2023) proposed the non-overlapping k-mer tokenization. Non-overlapping k-mer tokenization, despite its advantage of reducing sequence length by a factor of \(k\), is plagued by a notable issue of sample inefficiency. Figure 1 graphically illustrates this problem. Considering a scenario when the context window is reduced by \(1\), the model input is then switched from _Sequence 1_ to _Sequence 2_. In theory, this should involve a minor adjustment in tokenized output. However, with the non-overlapping k-mer tokenizer, this minor shift instigates a dramatic alteration in the tokenized output. Despite the two sequences originating from the same genomic segment, their tokenized representations bear little resemblance. This inconsistent behavior introduces unnecessary hurdles for the model during training, as it poses unnecessary difficulty for the model to align distinct representations of identical or near-identical inputs. Consequently, the inefficiency in learning from the data could impede the overall model performance. The implications of these observations advocate for a re-evaluation of tokenization strategies for the genome language, with a focus on strategies that ensure robust and efficient representation. To address the aforementioned issues, we propose to adapt SentencePiece (Kudo and Richardson, 2018), a subword tokenization framework widely used in natural language processing, to replace k-mer tokenization for genome sequences. 
We employ Byte-Pair Encoding (BPE) (Sennrich et al., 2016) to iteratively merge frequent pairs of nucleotides and genome segments, forming a vocabulary of variable-length tokens that effectively represent the entire genome dataset. Despite its conceptual simplicity, this method is well-suited for genome foundation models. First, it not only prevents information leakage but also significantly reduces the sequence length by approximately \(5\) times (detailed statistics are presented in Sec. 3.1), substantially improving computational efficiency. Moreover, its robust tokenization result is beneficial for sample efficiency since it allows the model to focus on understanding the genome language semantics without being distracted by the distinct representations of the same input. Furthermore, unlike k-mer tokenization, BPE doesn't always produce tokens of length \(k\). Consequently, when a token containing an unspecified number of nucleotides is masked, the model is challenged to predict both the number of nucleotides and the particular nucleotides themselves. Figure 1: Illustration of the drawbacks of k-mer tokenization. In the overlapping setting, information about a masked token is leaked by its adjacent tokens, while in the non-overlapping setting, adding/deleting one nucleotide base leads to a dramatic change in the tokenized sequence. This naturally transforms the masked language modeling objective into a T5-style (Raffel et al., 2020) "replace spans of text" objective, which has been demonstrated to be more effective than standard masked language modeling in various scenarios. ## 3 Method In this section, we provide empirical analysis on the BPE tokenizer for genome language (Sec. 3.1) and describe the model architecture (Sec. 3.2) and implementation details (Sec. 3.3) of DNABERT-2. ### Tokenizer DNABERT-2 adapts SentencePiece (Kudo and Richardson, 2018) with Byte Pair Encoding (BPE) (Sennrich et al., 2016) to perform tokenization for DNA sequences. SentencePiece is a language-agnostic tokenizer that considers each input as a raw stream without assuming any pre-tokenization, which fits genome sequences well, where the definitions of _word_ and _sentence_ do not exist. BPE is a compression algorithm that has been widely used in the area of natural language processing as a word segmentation strategy. It learns a fixed-sized vocabulary of variable-length tokens by iteratively merging the most frequent co-occurring segments in the corpus. Figure 2 illustrates the construction of the BPE vocabulary on genome sequences. To determine the most suitable vocabulary size, we constructed \(8\) vocabularies with target sizes ranging from \(2^{8}\) to \(2^{15}\) on the multi-species genomes (see Sec. 4.1) to empirically evaluate the impact of varying vocabulary sizes. As indicated in Figure 2(a), larger vocabularies tend to encompass more lengthy tokens, which enables the tokenizer to represent the same input sequence with fewer tokens. Shorter tokenized sequences consequently reduce the computational cost (see Figure 2(b)), as the computational complexity of Transformers is quadratic in relation to the input sequence length. Therefore, from the computation efficiency perspective, a larger vocabulary size is favorable. However, a larger vocabulary leads to more sparse updates to the embedding layer, given that each token would be used less frequently, which might compromise the model's performance.
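The merge loop at the heart of BPE can be sketched as follows (a toy illustration only; DNABERT-2 relies on SentencePiece's BPE trainer over the multi-species corpus, and the corpus and number of merges below are made up):

```python
# Toy sketch of BPE vocabulary construction on DNA (illustration only;
# DNABERT-2 uses SentencePiece's BPE trainer, not this code).
from collections import Counter

def learn_bpe(corpus, num_merges):
    seqs = [list(s) for s in corpus]   # start from single nucleotides
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for seq in seqs:
            for a, b in zip(seq, seq[1:]):
                pair_counts[(a, b)] += 1
        if not pair_counts:
            break
        best = pair_counts.most_common(1)[0][0]   # most frequent adjacent pair
        merges.append(best)
        merged = "".join(best)
        new_seqs = []
        for seq in seqs:                # replace every occurrence of the pair
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return merges

corpus = ["ATGGCGATG", "ATGATGCCG", "GCGATGATG"]
print(learn_bpe(corpus, num_merges=4))   # learned merges; tokens grow longer as merges compose
```

The remaining design choice is the size of the learned vocabulary, which the ablation described next examines.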
We empirically analyzed this issue by pre-training three different DNABERT-2 variants with vocabulary sizes of \(2^{8}\), \(2^{12}\), and \(2^{15}\) on the multi-species genome dataset with a batch size of \(2048\) for \(150000\) steps and evaluating them on the GUE benchmark (see Sec. 4.2). Figure 2(c) displays the performance of each variant, where the model performance is measured by the dataset- and task-average scores. As depicted in the figure, unlike computational efficiency, the model's performance does not consistently improve as the vocabulary size increases. Therefore, we selected a vocabulary size of \(2^{12}\) = \(4096\) for training the final DNABERT-2 model, as it best balances model performance with computational efficiency among the candidates. ### Model DNABERT-2 adapts the Transformer Encoder architecture similar to BERT (Kenton and Toutanova, 2019). To address the limitations of existing models, we incorporate a series of recent advances in deep learning to increase the model's efficiency and capability, including: 1) replacing learned positional embeddings with the Attention with Linear Biases (ALiBi) (Press et al., 2021) to overcome the input length limitation; 2) utilizing FlashAttention (Dao et al., 2022) and Low Precision Layer Normalization to increase computation and memory efficiency; 3) employing the Low-Rank Adaptation (LoRA) (Hu et al., 2021) in the fine-tuning stage (if necessary) for parameter-efficient training. Attention with Linear Biases.Due to the permutation-invariant nature of the attention mechanism, explicit positional information is required in attention-based models. Existing solutions such as Sinusoidal (Vaswani et al., 2017), learned (Kenton and Toutanova, 2019), and Rotary (Su et al., 2021) positional embedding methods either suffer from input Figure 2: Illustration of the BPE vocabulary constructions. length restriction or poor _extrapolation_ capability when applied to sequences longer than training data. Attention with Linear Biases (ALiBi) provides an efficient yet effective solution. Instead of adding position embeddings to the input, ALiBi adds a fixed set of static, non-learned biases to each attention calculation to incorporate positional information into attention scores. Specifically, let \(\mathbf{q}_{i}\) define the \(i\)-\(th\) query in the input sequence of length \(L\) and \(\mathbf{K}\) defines the key matrix, the attention score of query \(i\) is calculated as: \(\texttt{softmax}(\mathbf{q}_{i}\mathbf{K}+m*[-(i-1),...,-2,-1,0,-1,-2,...,-(L-1- i)])\), where \(m\) is a fixed head-specific constant. ALiBi used a geometric sequence (_i.e._, \(\frac{1}{2^{1}},\frac{1}{2^{2}},...,\frac{1}{2^{n}}\)) of different \(m\) to each attention head. Intuitively, ALiBi increasingly penalizes attention scores between key-query pairs as their distances increase, and \(m\) determines the penalty rate. Replacing learned position embedding with ALiBi allows DNABERT-2 to effectively handle arbitrarily long sequences during fine-tuning and inference despite being pre-trained on relatively short sequences. Flash Attention.Flash attention is an IO-aware algorithm that implements the exact standard attention calculation in a more time- and memory-efficient way. It identifies a main bottleneck of standard attention implementation as the lack of taking the number of reads and writes to fast GPU on-chip SRAM and relatively slow GPU high bandwidth memory (HBM) into account. 
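To make the ALiBi penalty above concrete, here is a small sketch (an illustration only, not the DNABERT-2 implementation; the head count, sequence length, and slope schedule are chosen for illustration):

```python
# Toy sketch of the ALiBi bias added to attention scores (illustration only).
import numpy as np

def alibi_bias(seq_len, num_heads):
    """(num_heads, seq_len, seq_len) tensor of static biases.

    Head h uses slope m = 1 / 2**(h + 1); entry (i, j) is -m * |i - j|,
    i.e. the penalty m * [-(i-1), ..., -1, 0, -1, ..., -(L-1-i)]."""
    pos = np.arange(seq_len)
    dist = np.abs(pos[None, :] - pos[:, None])                  # |i - j|
    slopes = np.array([1.0 / 2 ** (h + 1) for h in range(num_heads)])
    return -slopes[:, None, None] * dist[None, :, :]

def attention_weights(q, k, bias):
    """softmax(q K^T + bias) over the key dimension."""
    scores = q @ k.transpose(0, 2, 1) + bias
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

L, H, d = 6, 2, 4
rng = np.random.default_rng(0)
q, k = rng.normal(size=(H, L, d)), rng.normal(size=(H, L, d))
w = attention_weights(q, k, alibi_bias(L, H))
print(w.shape)  # (2, 6, 6); each row sums to 1, distant keys are penalized
```

Because the bias depends only on the distance \(|i-j|\) and involves no learned parameters, the same function can be evaluated for any sequence length at inference time. Flash Attention, continued below, addresses the complementary question of computing these attention weights with fewer reads and writes to GPU high-bandwidth memory.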
To avoid reading and writing to the slow HBM, it splits Key/Query/Value matrices into blocks and incrementally performs softmax over the entire input. It also proposes to recompute large intermediate results like attention scores in backward pass to trade extra computation for fewer IO with HBM, which empirically leads to less computational time. It accelerates DNABERT-2 without sacrificing model performance. Low-Rank Adaptation (LoRA).Fine-tuning all the parameters of a model becomes increasingly expensive as the pre-trained model becomes much larger. Thus, we adopt LoRA, a parameter-efficient fine-tuning method that significantly reduces the computation and memory costs with ignorable performance sacrifice. Let \(W_{0},W_{1}\in\mathbb{R}^{m\times n}\) define the same weight matrix before and after task-specific fine-tuning, and we have \(W_{1}=W_{0}+\Delta W\), where \(\Delta W\in\mathbb{R}^{m\times n}\) represents the change of each weight element during the fine-tuning. In ordinary fine-tuning, we independently update each weight based on its corresponding gradient, while in LoRA, we represent \(\Delta W\) with a low-rank decomposition \(\Delta W=BA\), where \(B\in\mathbb{R}^{m\times r}\), \(A\in\mathbb{R}^{r\times n}\), and \(r\ll m,r\ll n\). Modeling \(\Delta W\) with low-rank decomposition reduces the number of trainable parameters from \(m\times n\) to \(r\times(m+n)\), leading to significant improvement in training time and memory usage. Besides, we replace the Relu activation function with GEGLU (Shazeer, 2020), a variant of GLU (Dauphin et al., 2017) that has been shown to improve the performance of Transformer models. The GEGLU function is calculated as \(\texttt{GEGLU}(x,W,V,b,c)\)\(\in\)\(\texttt{GELU}(xW+b)\)\(\otimes\)\((xV+c)\), where \(x\) is the function input, \(W\) and \(V\) are learnable weights, and \(b\) and \(c\) are learnable biases. The GELU function is defined as \(\texttt{GELU}(x)=x\Phi(x)\), where \(\Phi(x)\) is the cumulative distribution function (CDF) of the standard normal distribution. Figure 3: This figure presents the average token length, average sequence length reduced after tokenization, and model performance on the GUE benchmark with different vocabulary sizes. ### Implementation We pre-train DNABERT-2 with the Masked Language Modeling (MLM) loss with a mask ratio of \(15\%\). Notably, we independently mask every token instead of masking spans of continuous tokens like Ji et al. (2021). We use a batch size of \(4096\) and a max sequence length of \(128\). We train the model for \(500000\) steps using the AdamW(Loshchilov and Hutter, 2019) optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.98\), \(\epsilon=1e{-6}\) and weight decay of \(1e{-5}\). The learning rate linearly increases from 0 to \(5e{-4}\) during the first \(30000\) steps while linearly decreasing to \(0\) in the last \(470000\) steps. The pre-training stage takes approximately 14 days using eight Nvidia RTX 2080Ti GPUs. To train the model, we used the Transformers library by HuggingFace (Wolf et al., 2020) and the Composer library by MosaicML (Team, 2021). ## 4 Data In order to facilitate further research on large-scale genome foundational models, we have collated and made available multi-species genome datasets for both pre-training of models (Sec. 4.1) and benchmarking (Sec. 4.2). 
### Pre-Train: Human and Multi-Species Genome To investigate the impact of species diversity on genome foundational models, we've compiled and made publicly available two datasets for foundational model pre-training: the human genome and the multi-species genome. The human genome dataset is borrowed from DNABERT's pre-training data (Ji et al., 2021), which comprises \(2.75\)B nucleotide bases. The multi-species genome dataset encompasses genomes from \(135\) species, spread across \(7\) categories. In total, this dataset includes \(32.49\)B nucleotide bases, nearly \(12\) times the volume of the human genome dataset. We exclude all sequences with N and retain only sequences that consist of A, T, C, and G. Detailed statistics are presented in Table 7. ### Benchmark: Genome Understanding Evaluation (GUE) We introduce the Genome Understanding Evaluation (GUE) benchmark, which includes \(7\) genome sequence classification problems with \(28\) datasets. Table 1 presents the summarization of the GUE benchmark. To evaluate models' capability in modeling sequences with different lengths, we select datasets with input lengths ranging from \(70\) to \(1000\). GUE contains tasks for \(4\) species: human, mouse, virus, and yeast, to evaluate the multi-species transferability in genome understanding of each model. We explicitly define evaluation metrics for each task and split each dataset into training, validation, and test data for a fair comparison across different models. To calibrate the GUE benchmark's difficulty level and better illuminate each model's capabilities, we carefully selected datasets that are neither too simple nor overly challenging for current models. For example, when the Nucleotide Transformer variants (Lopez et al., 2023) were tested on the _Splice Site Prediction_ dataset, all variants achieved an accuracy between \(97\%\) and \(98\%\). Similar outcomes were observed in tasks like _Promoter Prediction_ and _Enhancer Prediction_. These high scores might suggest these variants perform similarly, but as our experiments in Section 5 show, they vary significantly on more discerning datasets. The construction of GUE starts with the aggregation of various biologically important genome analysis datasets, followed by the assessment of existing models such as DNABERT (Ji et al., 2021) and Nucleotide Transformer (Lopez et al., 2023) on these datasets. Datasets where the majority of models yielded moderate (e.g., F1-scores between 0.3 and 0.8) and distinguishable performance scores were retained. On the other hand, datasets that did not meet these criteria underwent a restructuring process involving various strategies such as class balancing, adversarial sample inclusion, \begin{table} \begin{tabular}{l l r r r} \hline \hline **Species** & **Task** & **Num. Datasets** & **Num. Classes** & **Sequence Length** \\ \hline \multirow{4}{*}{**Human**} & Core Promoter Detection & 3 & 2 & 70 \\ & Transcription Factor Prediction & 5 & 2 & 100 \\ & Promoter Detection & 3 & 2 & 300 \\ & Splice Site Detection & 1 & 3 & 400 \\ \hline **Mouse** & Transcription Factor Prediction & 5 & 2 & 100 \\ \hline **Yeast** & Epigenetic Marks Prediction & 10 & 2 & 500 \\ \hline **Virus** & Covid Variant Classification & 1 & 9 & 1000 \\ \hline \hline \end{tabular} \end{table} Table 1: Summarization of the Genome Understanding Evaluation (GUE) benchmark. and reduction of training sample volume, among others. 
After several iterations of this process, we ultimately arrived at 28 representative datasets of moderate difficulty. Due to space limits, we present the detailed data processing and statistics of each dataset in Sec. B.2. ## 5 Experiments We evaluate DNABERT-2 using the proposed GUE benchmark to thoroughly investigate its versatility and robustness across a variety of tasks involving multi-species genomes. ### Baseline We compare DNABERT-2 with two state-of-the-art genome foundation models: DNABERT (Ji et al., 2021) and Nucleotide Transformer (Lopez et al., 2023). **DNABERT** was the first pre-trained foundational model for genome sequences, trained on human genome sequences. It has four variants, namely _DNABERT (3-mer)_, _DNABERT (4-mer)_, _DNABERT (5-mer)_, and _DNABERT (6-mer)_, which utilize overlapping \(3/4/5/6\)-kmer tokenization respectively. While DNABERT employs the same architecture as BERT-base, it has a different vocabulary size, which is dependent on the chosen \(k\)-mer. **Nucleotide Transformer (NT)** scales up the data and model size to achieve state-of-the-art performance in \(27\) DNA analysis tasks. It also has \(4\) variants: _NT-500M-human_, _NT-500M-1000g_, _NT-2500M-1000g_, and _NT-2500M-multi_, where _human_, _1000g_, and _multi_ respectively refers to the GRCh38/hg38 human reference genome, 3202 high-coverage human genomes from the 1000 Genome project (Byrska-Bishop et al., 2021), and genome from 850 different species. It is important to note that NT models are 6 to 29 times larger than DNABERT, which precludes standard model fine-tuning on consumer GPUs. Therefore, we perform standard fine-tuning for DNABERT and DNABERT-2, while adapting the Low-Rank Adaptation (LoRA) technique for fine-tuning the Nucleotide Transformer to enhance efficiency. For a fair comparison, we conducted preliminary experiments to confirm that our implementation of NT achieves comparable results to those reported in their original paper (Lopez et al., 2023) (see Appendix A.3 for more details). ### Setup and Metric We evaluate the models from two perspectives: computational efficiency and performance on downstream tasks. To measure each model's computational cost, we consider the number of model parameters and the relative Floating Point Operations (FLOPs)--which is the total number of multiplication and addition operations during a forward pass--compared to DNABERT-2. We evaluate FLOPs on genome sequences with a length of 500, a commonly used setup in genome analysis. To measure model performance, we utilize F1-Score and Matthews Correlation Coefficient (MCC). We use different metrics for different tasks, following conventional practices (refer to Table 8 for details). Table 2 presents the overall performance of each model on the GUE benchmark. It provides the average score of each model and the number of times it ranks in the top two among all models. The average results across all tasks are reported in Table 3, while task-specific results can be found in 4, with full details relegated to this section due to space constraints. We also include statistics on the number of tokens each model processed during its pre-training phase, providing insight into the effects of training steps on model performance. For each model, we keep most of the hyperparameters (e.g., learning rate, batch size, weight decay, etc.) constant across all datasets, adjusting only the maximum sequence length and the number of training steps according to the specific dataset. 
Hyperparameter tuning tailored to each dataset is left for future work. Throughout the training process, we validate the model every 200 steps, save the model that yields the smallest loss on the validation set, and report its evaluation results on the test set. We train each model using three different random seeds and report the average results. As shown in Table 2, with further masked language modeling pre-training on the training sets of the GUE benchmark, which requires negligible computational overhead, DNABERT-2 delivers the highest average performance and consistently ranks in the top two across the 28 tasks of the GUE benchmark. These results showcase the model's remarkable efficiency and effectiveness. Despite having 30% more parameters than DNABERT, DNABERT-2 requires only one-third the number of FLOPs. This indicates the superiority of the Byte Pair Encoding (BPE)-based tokenization method over overlapping k-mer tokenization in terms of modeling efficiency. Armed with the new tokenization method and the Attention with Linear Biases (ALiBi) module, DNABERT-2 can effectively process arbitrarily long genome sequences, demonstrating enhanced efficiency. This improvement becomes even more significant as the length of the input sequence increases. Moreover, DNABERT-2 consistently outperforms DNABERT by a large margin, indicating the effectiveness of multi-species pre-training and the new model architecture. Although DNABERT-2 is \(5\) times smaller, it surpasses NT-500M while using fewer FLOPs. This underscores the importance of providing the model with _adequate_ data, particularly when the model size is scaled up, and further highlights the inefficiency of overlapping k-mer tokenization. The comparison between DNABERT and NT-2500M-1000g exposes the sample inefficiency of non-overlapping k-mer tokenization. Despite being trained on \(2.5\) times more tokens, NT-2500M-1000g achieves a performance similar to that of DNABERT. The averaged results for each task are displayed in Table 3. DNABERT-2 and NT-2500M-multi consistently achieve top-tier performance across most tasks. Their dominance over other baselines is particularly notable in non-human genome analysis tasks, demonstrating the effectiveness of pre-training on multi-species genomes. Furthermore, models trained on multi-species genomes also show strong performance on human genome analysis tasks, proving their ability to develop a comprehensive understanding of multi-species genomes without compromising their grasp of the human genome. However, we observe that additional pre-training does not uniformly enhance performance across all tasks, indicating that task-specific further pre-training might be beneficial when addressing a certain downstream task. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Model** & **Num. Params. \(\downarrow\)** & **FLOPs \(\downarrow\)** & **Trn. Tokens** & **Num. Top-2 \(\uparrow\)** & **Ave.
Scores \(\uparrow\)** \\ \hline **DNABERT (3-mer)** & 86M & 3.27 & 122B & 2 \(\parallel\) 0 & 61.62 \\ **DNABERT (4-mer)** & 86M & 3.26 & 122B & 0 \(\parallel\) 1 & 61.14 \\ **DNABERT (5-mer)** & 87M & 3.26 & 122B & 0 \(\parallel\) 1 & 60.05 \\ **DNABERT (6-mer)** & 89M & 3.25 & 122B & 0 \(\parallel\) 1 & 60.51 \\ **NT-500M-human** & 480M & 3.19 & 50B & 0 \(\parallel\) 0 & 55.43 \\ **NT-500M-1000g** & 480M & 3.19 & 50B & 0 \(\parallel\) 1 & 58.23 \\ **NT-2500M-1000g** & 2537M & 19.44 & 300B & 0 \(\parallel\) 1 & 61.41 \\ **NT-2500M-multi** & 2537M & 19.44 & 300B & 7 \(\parallel\) 9 & 66.93 \\ \hline **DNABERT-2** & 117M & 1.00 & 262B & 8 \(\parallel\) 4 & 66.80 \\ **DNABERT-2\(\blacklozenge\)** & 117M & 1.00 & 263B & **11** & **10** & **67.77** \\ \hline \hline \end{tabular} \end{table} Table 2: The statistics and performance of each model. The five columns represent the number of model parameters, relative FLOPs compared to DNABERT-2, the number of tokens used in pre-training, and the number of being top-2 among all the models (lst \(\parallel\) 2nd) and the average evaluation scores on the 28 datasets of the GUE benchmark. \(\blacklozenge\): perform further masked language modeling pre-training on the training sets of the GUE benchmark. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & **Yeast** & **Mouse** & **Virus** & \multicolumn{4}{c}{**Human**} \\ \cline{2-9} & **EMP** & **TF-M** & **CVC** & **TF-H** & **PD** & **CPD** & **SSP** \\ \hline **DNABERT (3-mer)** & 49.54 & 57.73 & 62.23 & 64.43 & 84.63 & **72.96** & 84.14 \\ **DNABERT (4-mer)** & 48.59 & 59.58 & 59.87 & 64.41 & 82.99 & 71.10 & 84.05 \\ **DNABERT (5-mer)** & 48.62 & 54.85 & 63.64 & 50.46 & 84.04 & 72.03 & 84.02 \\ **DNABERT (6-mer)** & 49.10 & 56.43 & 55.50 & 64.17 & 81.70 & 71.81 & 84.07 \\ **NT-500M-human** & 45.35 & 45.24 & 57.13 & 50.82 & 85.51 & 66.54 & 79.71 \\ **NT-500M-1000g** & 47.68 & 49.31 & 52.06 & 58.92 & 86.58 & 69.13 & 80.97 \\ **NT-2500M-1000g** & 50.86 & 56.82 & 66.73 & 61.99 & 86.61 & 68.17 & 85.78 \\ **NT-2500M-multi** & 58.06 & 67.01 & **73.04** & 63.32 & **88.14** & 71.62 & **89.36** \\ \hline **DNABERT-2** & 55.98 & 67.99 & 71.02 & **70.10** & 84.21 & 70.52 & 84.99 \\ **DNABERT-2\(\blacklozenge\)** & **58.83** & **71.21** & 68.49 & 66.84 & 83.81 & 71.07 & 85.93 \\ \hline \hline \end{tabular} \end{table} Table 3: The models’ averaged performance on the \(8\) tasks in the GUE benchmark, including Epigenetic Marks Prediction (EMP), Transcription Factor Prediction on the Human genome and the Mouse genome (TF-H and TF-M), Covid Variants Classification (CVC), Promoter Detection (PD), Core Promoter Detection (CPD), and Splice Site Prediction (SSP). Additionally, DNABERT variants achieve optimal performance in the Core Promoter Detection task, where inputs are sequences of length \(70\). However, their performance diminishes in the similar task of Promoter Detection, where the input length increases to \(300\). These results highlight a common challenge associated with non-overlapping k-mer tokenization and BPE-based tokenization: the capacity to identify subtle signals from limited input. Although inefficient, the overlapping k-mer tokenization adopted by DNABERT retains most of the information in the original sequences. In contrast, the sequence length is significantly reduced (_i.e.,_ from \(70\) to \(15\)) with non-overlapping k-mer and BPE tokenization, which might limit the retained information and hinder informed decision-making. 
This identifies a critical area for future exploration in DNA language models. ## 6 Conclusion In this paper, we introduce DNABERT-2, an efficient foundational model pre-trained on extensive multi-species genomes. We identify the computational and sample inefficiencies of the existing k-mer tokenization method and propose the adaptation of Byte Pair Encoding (BPE) for DNA language modeling. We provide insightful and comprehensive empirical analyses, building DNABERT-2 based on these findings. Moreover, we integrate several techniques such as Attention with Linear Biases (ALiBi) and Low-Rank Adaptation (LoRA) to address the limitations of current DNA language models. From a data perspective, we compile and introduce the Genome Understanding Evaluation (GUE), a benchmark for multi-species genome analysis comprising seven tasks and 28 datasets with well-defined training, validation, and test sets, clear evaluation metrics, and elaborately calibrated difficulty. In addition, we release a multi-species genome dataset consisting of \(32.49\) billion nucleotide bases derived from the genomes of 135 species across seven categories. We believe these datasets will significantly contribute to the progression of research on DNA language models. For future work, we identify several promising directions: 1) the development of effective modeling strategies for short genome sequences; 2) scaling up the model size; and 3) the introduction of training targets and data processing/augmentation methods that leverage the unique double-strand structure of DNA. ## Acknowledgments This work is supported by NIH R01LM01372201.
2304.02303
Oscillations in three-reaction quadratic mass-action systems
It is known that rank-two bimolecular mass-action systems do not admit limit cycles. With a view to understanding which small mass-action systems admit oscillation, in this paper we study rank-two networks with bimolecular source complexes but allow target complexes with higher molecularities. As our goal is to find oscillatory networks of minimal size, we focus on networks with three reactions, the minimum number that is required for oscillation. However, some of our intermediate results are valid in greater generality. One key finding is that an isolated periodic orbit cannot occur in a three-reaction, trimolecular, mass-action system with bimolecular sources. In fact, we characterise all networks in this class that admit a periodic orbit; in every case all nearby orbits are periodic too. Apart from the well-known Lotka and Ivanova reactions, we identify another network in this class that admits a center. This new network exhibits a vertical Andronov--Hopf bifurcation. Furthermore, we characterise all two-species, three-reaction, bimolecular-sourced networks that admit an Andronov--Hopf bifurcation with mass-action kinetics. These include two families of networks that admit a supercritical Andronov--Hopf bifurcation, and hence a stable limit cycle. These networks necessarily have a target complex with a molecularity of at least four, and it turns out that there are exactly four such networks that are tetramolecular.
Murad Banaji, Balázs Boros, Josef Hofbauer
2023-04-05T08:50:25Z
http://arxiv.org/abs/2304.02303v1
# Oscillations in three-reaction quadratic mass-action systems ###### Abstract It is known that rank-two bimolecular mass-action systems do not admit limit cycles. With a view to understanding which small mass-action systems admit oscillation, in this paper we study rank-two networks with bimolecular source complexes but allow target complexes with higher molecularities. As our goal is to find oscillatory networks of minimal size, we focus on networks with three reactions, the minimum number that is required for oscillation. However, some of our intermediate results are valid in greater generality. One key finding is that an isolated periodic orbit cannot occur in a three-reaction, trimolecular, mass-action system with bimolecular sources. In fact, we characterise all networks in this class that admit a periodic orbit; in every case all nearby orbits are periodic too. Apart from the well-known Lotka and Ivanova reactions, we identify another network in this class that admits a center. This new network exhibits a vertical Andronov-Hopf bifurcation. Furthermore, we characterise all two-species, three-reaction, bimolecular-sourced networks that admit an Andronov-Hopf bifurcation with mass-action kinetics. These include two families of networks that admit a supercritical Andronov-Hopf bifurcation, and hence a stable limit cycle. These networks necessarily have a target complex with a molecularity of at least four, and it turns out that there are exactly four such networks that are tetramolecular. ## 1 Introduction There are two well-known small reaction networks that exhibit oscillations: the Lotka reactions [20] and the Ivanova reactions [28, page 630]. The networks, along with their associated mass-action differential equations, are \[\begin{array}{c}\mathsf{X}\xrightarrow{\kappa_{1}}2\mathsf{X}\\ \mathsf{X}+\mathsf{Y}\xrightarrow{\kappa_{2}}2\mathsf{Y}\\ \mathsf{Y}\xrightarrow{\kappa_{3}}0\end{array} \begin{array}{c}\dot{x}=\kappa_{1}x-\kappa_{2}xy,\\ \dot{y}=\kappa_{2}xy-\kappa_{3}y\end{array} \tag{1}\] and \[\begin{array}{c}\mathsf{Z}+\mathsf{X}\xrightarrow{\kappa_{1}}2\mathsf{X}\\ \mathsf{X}+\mathsf{Y}\xrightarrow{\kappa_{2}}2\mathsf{Y}\\ \mathsf{Y}+\mathsf{Z}\xrightarrow{\kappa_{3}}2\mathsf{Z}\end{array} \begin{array}{c}\dot{x}=\kappa_{1}xz-\kappa_{2}xy,\\ \dot{y}=\kappa_{2}xy-\kappa_{3}yz,\\ \dot{z}=\kappa_{3}yz-\kappa_{1}xz,\end{array} \tag{2}\] respectively. Both networks are _bimolecular_, i.e., the molecularity of every source and target complex is at most two. Both have rank two, i.e., the span of their reaction vectors is two-dimensional; for the Ivanova reactions this follows from the mass-conservation relation \(\dot{x}+\dot{y}+\dot{z}=0\). In both systems, all positive nonequilibrium solutions are periodic and, up to the inclusion of trivial species (to be defined below), these are the only three-reaction, bimolecular, rank-two, mass-action systems which admit a periodic solution. In fact, even with any number of reactions, there are no bimolecular rank-two systems that admit isolated periodic orbits [22], [23], [9, Theorem 4.1]. Hence, when searching for small mass-action systems with isolated periodic orbits, it is natural to study bimolecular rank-three networks or trimolecular rank-two networks. The former were studied in, for example, [29] and [10], and more systematically in [3] and [4]. Examples of the latter include Selkov's glycolytic oscillator [25], the Brusselator [18], and the Schnakenberg networks [24]. 
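As a quick numerical illustration (not part of the paper) of the oscillations in (1), the following sketch integrates the Lotka system for one arbitrary choice of rate constants and initial condition and checks the classical first integral that forces every positive nonequilibrium orbit to be closed:

```python
# Numerical illustration of the Lotka mass-action system (1); the rate
# constants and initial condition below are arbitrary choices.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 1.0, 1.0, 1.0

def lotka(t, u):
    x, y = u
    return [k1 * x - k2 * x * y, k2 * x * y - k3 * y]

def first_integral(u):
    # V(x, y) = k2*x - k3*log(x) + k2*y - k1*log(y) is constant along orbits.
    x, y = u
    return k2 * x - k3 * np.log(x) + k2 * y - k1 * np.log(y)

sol = solve_ivp(lotka, (0.0, 30.0), [0.5, 0.5],
                rtol=1e-10, atol=1e-12, dense_output=True)
values = [first_integral(sol.sol(t)) for t in np.linspace(0.0, 30.0, 200)]
print(max(values) - min(values))  # ~0: V is conserved, so the orbit is closed
```

A similar check can be run for the Ivanova system (2), whose positive nonequilibrium solutions are likewise all periodic.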
In the present paper, we study networks with bimolecular sources but allow higher target molecularity. For instances of this kind in the literature, see, for example, [15] or [12]. Bimolecular sources are chemically more realistic than sources of higher molecularity, and also easier to treat mathematically, because the corresponding mass-action differential equation is only quadratic. From here onwards, for brevity, we refer to networks with bimolecular sources as "quadratic". For example, a trimolecular, quadratic network will mean a network with source molecularities at most two and target molecularities at most three. As our goal is to find or rule out oscillation in small networks, we focus on networks with three reactions, the minimum that is necessary for oscillation. The following theorem is one of our main results. It is an immediate corollary of Theorem 10, which is proved in Section 5. **Theorem 1**.: _Three-reaction, trimolecular, quadratic, mass-action systems admit no isolated periodic orbit._ We remark that there is no assumption in Theorem 1 on the number of species involved: regardless of the number of species, whenever a three-reaction, trimolecular, quadratic, mass-action system has a periodic orbit, all nearby orbits are also periodic. In fact, we show that systems in this class admitting periodic orbits must belong to one of three families: one related to the Lotka system (1), one to the Ivanova system (2), and one to the _Lifted LVA_ \[\begin{array}{r@{\quad\quad}l}\quad 2{\mathsf{X}}\xrightarrow{\kappa_{1}}\quad 3{\mathsf{X}}\quad\quad&\dot{x}=\kappa_{1}x^{2}-\kappa_{2}xy,\\ {\mathsf{X}}+{\mathsf{Y}}\xrightarrow{\kappa_{2}}\quad 2{\mathsf{Y}}+{\mathsf{Z}}\quad\quad&\dot{y}=\kappa_{2}xy-\kappa_{3}yz,\\ {\mathsf{Y}}+{\mathsf{Z}}\xrightarrow{\kappa_{3}}\quad 0\quad\quad&\dot{z}=\kappa_{2}xy-\kappa_{3}yz,\end{array} \tag{3}\] a mass-action system that is obtained by adding a new species to the Lotka-Volterra-Autocatalator (LVA) [12, Eq. (8)], [26, Eq. (1)], in such a way that the rank of the network remains two. The Lifted LVA admits a vertical Andronov-Hopf bifurcation: it has a two-parameter family of periodic orbits when \(\kappa_{2}=\kappa_{3}>\kappa_{1}\), and no periodic orbits otherwise. This is in contrast to the Lotka and Ivanova systems, where every positive nonequilibrium solution is periodic for _all_ \(\kappa_{1}\), \(\kappa_{2}\), \(\kappa_{3}\), and no bifurcations are admitted. In light of Theorem 1, a three-reaction, quadratic, mass-action system with an isolated periodic orbit must have a target complex with molecularity at least four. We find that there are four planar, three-reaction, tetramolecular, quadratic, mass-action systems which admit a supercritical Andronov-Hopf bifurcation, and thus a linearly stable limit cycle, the simplest one being \[\begin{array}{r@{\quad\quad}l}\quad 2{\mathsf{X}}\xrightarrow{\kappa_{1}}\quad 3{\mathsf{X}}+{\mathsf{Y}}\quad\quad&\dot{x}=\kappa_{1}x^{2}-\kappa_{2}xy,\\ {\mathsf{X}}+{\mathsf{Y}}\xrightarrow{\kappa_{2}}\quad{\mathsf{Y}}\quad\quad&\dot{y}=\kappa_{1}x^{2}-\kappa_{3}y.\\ {\mathsf{Y}}\xrightarrow{\kappa_{3}}\quad 0\quad\quad&\end{array} \tag{4}\] The other three are obtained from (4) by replacing the target complex of the second reaction by \(2{\mathsf{Y}}\), \(3{\mathsf{Y}}\), or \(4{\mathsf{Y}}\). In fact, exactly two families of three-reaction, planar, quadratic, mass-action systems admit a supercritical Andronov-Hopf bifurcation.
The first family (with source complexes \(2{\mathsf{X}},{\mathsf{X}}+{\mathsf{Y}}\), \({\mathsf{Y}}\)) includes the tetramolecular examples mentioned above. In the second family (with source complexes \(2\mathsf{X},\mathsf{X}+\mathsf{Y},\mathsf{0}\)), every network has a target complex with a molecularity of at least seven. These results, along with further results on planar, quadratic, mass-action systems admitting periodic orbits, are discussed in Section 4. The rest of this paper is organised as follows. After introducing the basic notation and terminology in Section 2, we present some tools focussed on the analysis of rank-two mass-action systems in Section 3. Oscillations in two-species, three-reaction systems are studied in Section 4, and three-reaction networks with an arbitrary number of species are treated in Section 5. Finally, we close with some concluding remarks and observations in Section 6. ## 2 Preliminaries We collect some basic notation, terminology, and known results needed later. The symbols \(\mathbb{R}_{+}\), \(\mathbb{R}_{\geq 0}\), and \(\mathbb{R}_{-}\) denote the set of _positive_, _nonnegative_, and _negative real numbers_, respectively. Accordingly, \(\mathbb{R}_{+}^{n}\), \(\mathbb{R}_{\geq 0}^{n}\), and \(\mathbb{R}_{-}^{n}\) denote the _positive_, _nonnegative_, and _negative orthants_, respectively. We use similar notation for sets of positive or nonnegative integers. We refer to subsets of \(\mathbb{R}_{+}^{n}\) as _positive_. Given a row vector \(a=[a_{1},\dots,a_{n}]\) of nonnegative integers, we adopt the standard convention that \(x^{a}\) is an abbreviation for the monomial \(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\). Accordingly, if \(A\) is an \(m\times n\) matrix of nonnegative integers with \(a_{j}\). being its \(j\)th row (\(j=1,\dots,m\)) then \(x^{A}\) denotes the column vector \([x^{a_{1}},\dots,x^{a_{m}}]^{\top}\). The symbol \(\circ\) stands for the _entrywise product_ of two vectors or matrices of the same size. For \(u,v\in\mathbb{R}^{n}\), we write \(u\cdot v\) for the _scalar product_ of \(u\) and \(v\). When \(n=3\), we denote the _cross product_ of \(u\) and \(v\) by \(u\times v\). ### Chemical reaction networks We start by introducing (chemical) species, (chemical) complexes, (chemical) reactions, and (chemical reaction) networks. For a more detailed exposition, the reader may consult, for example, [30]. Given _species_\(\mathsf{X}_{1},\dots,\mathsf{X}_{n}\), a _complex_ is a formal sum \(\sum_{i=1}^{n}a_{i}\mathsf{X}_{i}\), where the coefficients \(a_{i}\) are assumed to be nonnegative integers. A _reaction_ corresponds to the conversion of a complex termed the _source complex_ (or just _source_ for short) into another termed the _target complex_ (or just _target_ for short): a reaction can thus be regarded as an ordered pair of complexes. A _network_ is a collection of reactions. To facilitate the introduction of further terminology, consider the \(m\)-reaction network \[\sum_{i=1}^{n}a_{ij}\mathsf{X}_{i}\longrightarrow\sum_{i=1}^{n}(a_{ij}+c_{ij} )\mathsf{X}_{i}\quad\text{ for }j=1,\dots,m. \tag{5}\] The matrix \(\Gamma=[c_{ij}]\in\mathbb{R}^{n\times m}\) is called the _stoichiometric matrix_ of the network, while \(\Gamma_{l}=[a_{ij}]\in\mathbb{R}^{n\times m}\) is termed its _source matrix_. Each column of \(\Gamma\) is the _reaction vector_ of the corresponding reaction. 
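For example (an illustrative computation, not from the paper), for the Lotka network (1), with species ordered \((\mathsf{X},\mathsf{Y})\) and reactions ordered as in (1), the source matrix and the stoichiometric matrix are:

```python
# Source matrix and stoichiometric matrix of the Lotka network (1),
# species ordered (X, Y), one column per reaction (illustration only).
import numpy as np

Gamma_l = np.array([[1, 1, 0],     # a_ij: X occurs in the sources X and X+Y
                    [0, 1, 1]])    #       Y occurs in the sources X+Y and Y
Gamma   = np.array([[1, -1,  0],   # c_ij: reaction vectors (target - source)
                    [0,  1, -1]])

print(np.linalg.matrix_rank(Gamma))  # 2: the span of the reaction vectors is two-dimensional
```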
The image of \(\Gamma\), denoted by \(\operatorname{im}\Gamma\), is termed the _stoichiometric subspace_ of the network, while the sets \((x_{0}+\operatorname{im}\Gamma)\cap\mathbb{R}_{\geq 0}^{n}\) for \(x_{0}\in\mathbb{R}_{\geq 0}^{n}\), and \((x_{0}+\operatorname{im}\Gamma)\cap\mathbb{R}_{+}^{n}\) for \(x_{0}\in\mathbb{R}_{+}^{n}\), are the network's _stoichiometric classes_ and _positive stoichiometric classes_, respectively. Finally, the _rank_ of the network is, by definition, \(\operatorname{rank}\Gamma\). A species \(\mathsf{X}_{i}\) is called _trivial_ if \(c_{ij}=0\) for all \(j=1,\dots,m\). In all reasonable models of a reaction network, the concentration of a trivial species remains constant along every trajectory. A reaction network can be identified with its _Euclidean embedded graph_ as defined in [11]. This is a directed graph obtained by identifying each complex with a point in \(\mathbb{Z}_{\geq 0}^{n}\), and each reaction with an arc whose tail is the source of the reaction and whose head is the target of the reaction. For example, the Euclidean embedded graph of the tetramolecular network admitting a supercritical Andronov-Hopf bifurcation in (4) is: ### Molecularity The _molecularity_ of a complex \(\sum_{i=1}^{n}a_{i}\mathsf{X}_{i}\) is the nonnegative integer \(\sum_{i=1}^{n}a_{i}\). If every complex of a network has molecularity at most two, three, four, etc., then the network is said to be _bimolecular_, _trimolecular_, _tetramolecular_, etc. For example, the network (5) is bimolecular if and only if \(\sum_{i=1}^{n}a_{ij}\leq 2\) and \(\sum_{i=1}^{n}(a_{ij}+c_{ij})\leq 2\) for \(j=1,\ldots,m\). If every source complex of a network has molecularity at most two then we refer to it as a _quadratic network_. If, for example, the network (5) satisfies \(\sum_{i=1}^{n}a_{ij}\leq 2\) and \(\sum_{i=1}^{n}(a_{ij}+c_{ij})\leq 3\) for \(j=1,\ldots,m\), we refer to it as a "trimolecular, quadratic network". In terms of the Euclidean embedded graph, such a network is one whose sources are confined to \(\{a\in\mathbb{Z}_{\geq 0}^{n}\colon\sum_{i=1}^{n}a_{i}\leq 2\}\), and whose targets are confined to \(\{a\in\mathbb{Z}_{\geq 0}^{n}\colon\sum_{i=1}^{n}a_{i}\leq 3\}\). ### Dynamically nontrivial networks We refer to a network as _dynamically nontrivial_ if \(\ker\Gamma\cap\mathbb{R}_{+}^{m}\neq\emptyset\) or equivalently, \(\operatorname{im}\Gamma^{\top}\cap\mathbb{R}_{\geq 0}^{m}=\{0\}\), and _dynamically trivial_ otherwise. The equivalence of the two definitions follows from Stiemke's Theorem [27], which is a variant of Farkas' Lemma. Under weak assumptions on the reaction rates, the existence of a nonzero, nonnegative vector in the image of \(\Gamma^{\top}\) is equivalent to the existence of a linear Lyapunov function in \(\mathbb{R}_{+}^{n}\) for the associated differential equation, which increases strictly along all orbits in \(\mathbb{R}_{+}^{n}\) (see [2, Section 3.3]). Thus, dynamically trivial networks do not admit positive limit sets. In particular, only dynamically nontrivial networks can have an equilibrium or a periodic orbit in \(\mathbb{R}_{+}^{n}\). We remark that some authors refer to dynamically nontrivial networks as _consistent_[1], and similar ideas appear already in [14, Section 5]. Notice that we can interpret the condition for a network to be dynamically nontrivial as saying that its reaction vectors must be _positively dependent_: the zero vector in \(\mathbb{R}^{n}\) can be written as a positive combination of the \(m\) reaction vectors. 
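This kernel condition can be checked mechanically; the following sketch (not from the paper) tests dynamical nontriviality of the Ivanova network (2) by searching for a positive vector in \(\ker\Gamma\) with a small linear program:

```python
# Checking dynamical nontriviality of the Ivanova network (2): is there a
# positive vector in ker(Gamma)?  (Illustration only.)
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix of (2), species ordered (X, Y, Z).
Gamma = np.array([[ 1, -1,  0],
                  [ 0,  1, -1],
                  [-1,  0,  1]])

# Feasibility LP: find v with Gamma v = 0 and v >= 1.  Any strictly positive
# kernel vector can be rescaled to satisfy v >= 1, so feasibility of this LP
# is equivalent to ker(Gamma) meeting the open positive orthant.
res = linprog(c=np.zeros(3), A_eq=Gamma, b_eq=np.zeros(3),
              bounds=[(1, None)] * 3, method="highs")
print(res.status == 0, res.x)  # True, e.g. v = (1, 1, 1): dynamically nontrivial
```

For (2), the vector \((1,1,1)\) lies in the kernel, reflecting the positive dependence of its three reaction vectors.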
In particular, a dynamically nontrivial network with \(m\) reactions has rank at most \(m-1\). Moreover, as we will see in the next subsection, a mass-action system with \(m\) reactions and rank \(m\) admits no periodic orbits, positive or otherwise. Since in this paper we are interested in networks with three reactions with the potential for periodic orbits, the networks of interest have rank at most two. On the other hand, since the differential equations we investigate are autonomous and have a unique solution for each initial condition in \(\mathbb{R}_{\geq 0}^{n}\), periodic solutions can only occur for networks of rank at least two. Thus, our main focus is on rank-two networks. In Section 3 we discuss some properties of rank-two mass-action systems. ### Mass-action systems Assuming _mass-action kinetics_, a positive number, termed the _rate constant_, is associated with each reaction. The species _concentration_\(x\in\mathbb{R}_{\geq 0}^{n}\) then evolves over time according to the autonomous ordinary differential equation \[\dot{x}=\Gamma(\kappa\circ x^{\Gamma^{\top}_{\,l}}), \tag{6}\] where \(\kappa\in\mathbb{R}_{+}^{m}\) is the vector of the rate constants. By a _mass-action system_ we mean a network with rate constants, or the differential equation (6) itself; this should cause no confusion. It can be shown that both the positive orthant \(\mathbb{R}_{+}^{n}\) and the nonnegative orthant \(\mathbb{R}_{\geq 0}^{n}\) are forward invariant under (6). In fact, solutions with a positive initial condition are confined to the positive stoichiometric class of the initial condition for all \(t\geq 0\). It is also well-known that given any (relatively open) face \(F\) of \(\mathbb{R}_{\geq 0}^{n}\), the mass-action vector field on \(F\) is either nowhere tangent to \(F\), or everywhere tangent to \(F\), in which case \(F\) is locally invariant and forward invariant. If we restrict attention to any such locally invariant face then, by removing species whose concentrations are zero on the face and reactions involving these species, we obtain either a mass-action system involving fewer species; or an "empty"system where no reactions proceed and hence \(F\) consists entirely of equilibria. The previous construction sometimes allows us to extend claims about the positive orthant to the nonnegative orthant as a whole. For example, we observe that a mass-action system with \(m\) reactions and rank \(m\) (i.e., with linearly independent reaction vectors) necessarily forbids periodic orbits. That positive periodic orbits are forbidden is immediate as the network is dynamically trivial. However, periodic orbits are also forbidden on any locally invariant face of \(\mathbb{R}_{\geq 0}^{n}\): restricting attention to such a face we obtain either a system which again has linearly independent reaction vectors and hence is dynamically trivial; or one where no reactions proceed and all points are equilibria. We can infer that a three-reaction mass-action system with a periodic orbit must have rank two. ### The reduced Jacobian determinant of a mass-action system We will refer to a network with \(n\) species, \(m\) reactions, and rank \(r\) as an \((n,m,r)\) network. The Jacobian matrix of an \((n,m,r)\) mass-action system is, at each point of \(\mathbb{R}_{\geq 0}^{n}\), an \(n\times n\) matrix of rank at most \(r\). We are interested in the dynamics of such a system restricted to stoichiometric classes, and hence in the action of its Jacobian matrices on the stoichiometric subspace. 
Fixing any \(x_{0}\in\mathbb{R}_{\geq 0}^{n}\), the _reduced Jacobian determinant_ of the system at \(x_{0}\) is the determinant of the Jacobian matrix at \(x_{0}\) regarded as a linear transformation on the stoichiometric subspace. Given any basis for the stoichiometric subspace, we can write down a matrix representation of this linear transformation. Since all such matrices are similar, we abuse notation by referring to any one of them as the _reduced Jacobian matrix_ of the system at \(x_{0}\). The reduced Jacobian determinant is then just the determinant of any reduced Jacobian matrix. Equivalently, it is the product of the \(r\) eigenvalues of the Jacobian matrix associated with the stoichiometric subspace. A number of equivalent formulations are given in a more general setting in [6, Section 2.2]. We refer to an equilibrium of a mass-action system as _nondegenerate_ if the reduced Jacobian determinant, evaluated at the equilibrium, is nonzero. ### Nondegenerate \((n,n+1,n)\) networks The class of \((n,n+1,n)\) networks is analysed in [4, Section 3]. A dynamically nontrivial \((n,n+1,n)\) network is called _nondegenerate_ if its source complexes are affinely independent, and _degenerate_ otherwise. The terminology is justified by the following result (see [4, Lemma 3.1 and Remark 3.2]). **Lemma 2**.: _Consider a dynamically nontrivial \((n,n+1,n)\) network. Then the following statements hold._ * _If the network is nondegenerate then the associated mass-action system has a unique positive equilibrium for all choices of rate constants, and this equilibrium is nondegenerate._ * _If the network is degenerate then the associated mass-action system has no isolated positive equilibria._ We remark that as a consequence of Lemma 3 below, affine independence of the sources is a necessary condition for oscillation in any \((n,3,2)\) mass-action system. Thus nondegeneracy of the network is necessary for oscillation in \((2,3,2)\) networks. Rank-two mass-action systems In this section, we derive a few properties of mass-action systems whose underlying network has rank two, focussing on necessary conditions for periodic orbits, or isolated periodic orbits. ### Periodic orbits in rank-two mass-action systems We say that a mass-action system "admits" a periodic orbit if it has a periodic orbit for some choice of rate constants. Observe that a periodic orbit in a rank-two mass-action system without any trivial species must necessarily be positive (and, consequently, the system must be dynamically nontrivial). This follows from the simple observation that the intersection of any stoichiometric class with a proper face \(F\) of \(\mathbb{R}_{\geq 0}^{n}\) must have dimension less than two unless all species whose concentrations vanish everywhere on \(F\) are trivial. We refer to a positive equilibrium of a rank-two mass-action system as a _saddle_ if the reduced Jacobian determinant, evaluated at the equilibrium, is negative. Equivalently, one of the nontrivial eigenvalues associated with the equilibrium is positive, while the other is negative. A periodic orbit in a rank-two system must contain at least one equilibrium in its interior on the stoichiometric class on which it resides, and not all of these equilibria can be saddles (see e.g. [13, Section 3.5]). Thus, positive periodic orbits are ruled out in a rank-two mass-action system where all positive equilibria are saddles. 
If, additionally, the network has no trivial species, then (by our observations in the previous paragraph) periodic orbits on the boundary of \(\mathbb{R}_{\geq 0}^{n}\) are ruled out too. We will frequently use these observations to rule out periodic orbits in mass-action systems. ### Sources on a line We make an observation about rank-two mass-action systems whose source complexes lie on a line. There is no assumption about the number of species or reactions, or about the molecularities of the complexes. **Lemma 3**.: _A rank-two mass-action system whose source complexes lie on a line admits no periodic orbit._ Proof.: Since the positive orthant \(\mathbb{R}_{+}^{n}\) is forward invariant, a periodic orbit lies either entirely in \(\mathbb{R}_{+}^{n}\) or on the boundary of \(\mathbb{R}_{+}^{n}\). In the latter case, the periodic orbit must lie entirely in some relatively open proper face of \(\mathbb{R}_{\geq 0}^{n}\) (see Section 2.4). We first show that the system admits no periodic orbit in \(\mathbb{R}_{+}^{n}\). The general form of the mass-action differential equation for a network with \(n\) species and \(m\) reactions is \[\dot{x}_{i}=\sum_{j=1}^{m}\beta_{ij}x^{\alpha_{j}}\quad(i=1,\ldots,n),\] where \(\alpha_{j}\in\mathbb{R}^{n}\) represents the \(j\)th source complex (\(j=1,\ldots,m\)). Since all source complexes lie on a line, there exist \(s_{1},\ldots,s_{m}\in\mathbb{R}\) and \(\alpha\in\mathbb{R}^{n}\) such that \(\alpha_{j}=\alpha_{1}+s_{j}\alpha\) (here \(s_{1}=0\)). Thus, after division by the positive scalar function \(x^{\alpha_{1}}\), we are left with \[\dot{x}_{i}=f_{i}(x^{\alpha})\quad(i=1,\ldots,n),\] a differential equation whose r.h.s. depends on \(x\in\mathbb{R}_{+}^{n}\) only through the scalar \(x^{\alpha}\). Since the rank of the network is two, the positive orthant is foliated by two-dimensional invariant linear manifolds, the positive stoichiometric classes. By [21, Proposition 1], none of these contains a periodic orbit. Finally, we argue that a periodic orbit on the boundary of \(\mathbb{R}_{\geq 0}^{n}\) is not possible either. Suppose, by way of contradiction, that some proper \(k\)-dimensional (relatively open) face \(F\) of \(\mathbb{R}_{\geq 0}^{n}\) includes a periodic orbit. Clearly, \(k\geq 2\) and, by remarks in Section 2.4, the face \(F\) must be locally invariant. Restricting attention to \(F\) (which we identify with \(\mathbb{R}_{+}^{k}\)), and removing species whose concentrations vanish on \(F\) and reactions involving these species, we can regard the system as a mass-action system on \(k\) species, of rank at most two, and with a periodic orbit which now lies in \(\mathbb{R}^{k}_{+}\). But this is ruled out by the arguments in Section 2.4 and in the previous paragraph. This concludes the proof. ### The reduced Jacobian determinant of a three-reaction system Three-reaction, rank-two, mass-action systems are the class of systems of main interest in this paper. In this subsection, we derive a formula for the reduced Jacobian determinant of such a system at any positive equilibrium, and demonstrate how this formula can be applied to rule out oscillation. Since positive equilibria are ruled out for dynamically trivial networks, we consider a dynamically nontrivial \((n,3,2)\) mass-action system, with no assumptions on the molecularity of complexes. W.l.o.g. 
we may assume that the first and the second rows of the stoichiometric matrix \(\Gamma\in\mathbb{R}^{n\times 3}\), say \([c_{1},c_{2},c_{3}]\) and \([d_{1},d_{2},d_{3}]\), form a basis for its row-space, and write \[\Gamma=\widetilde{\Gamma}\begin{bmatrix}c_{1}&c_{2}&c_{3}\\ d_{1}&d_{2}&d_{3}\end{bmatrix},\] where \(\widetilde{\Gamma}\in\mathbb{R}^{n\times 2}\) has rank two, and its top \(2\times 2\) block is the identity matrix. Defining \(c=[c_{1},c_{2},c_{3}]^{\top}\) and \(d=[d_{1},d_{2},d_{3}]^{\top}\), the kernel of \(\Gamma\) is spanned by \(u=c\times d\) (i.e., \(u_{1}=c_{2}d_{3}-c_{3}d_{2}\), \(u_{2}=c_{3}d_{1}-c_{1}d_{3}\), \(u_{3}=c_{1}d_{2}-c_{2}d_{1}\)). Since the network is dynamically nontrivial, \(u\in\mathbb{R}^{3}_{+}\cup\mathbb{R}^{3}_{-}\). Fix \(\kappa\in\mathbb{R}^{3}_{+}\) and let \(\bar{x}\in\mathbb{R}^{n}_{+}\) be an equilibrium, i.e., \(\kappa\circ\bar{x}^{\Gamma_{l}^{\top}}=\mu u\) for some nonzero scalar \(\mu\). We may use \(\widetilde{\Gamma}\) to define local coordinates on the stoichiometric class of \(\bar{x}\) in the natural way and obtain the reduced Jacobian matrix at \(\bar{x}\): \[J_{\text{red}}=\mu\begin{bmatrix}c^{\top}\\ d^{\top}\end{bmatrix}\Delta_{u}\Gamma_{l}^{\top}\Delta_{1/\bar{x}}\widetilde{\Gamma},\] where \(\Delta_{u}\in\mathbb{R}^{3\times 3}\) and \(\Delta_{1/\bar{x}}\in\mathbb{R}^{n\times n}\) are the diagonal matrices with \(u_{j}\) (\(j=1,2,3\)) and \(1/\bar{x}_{i}\) (\(i=1,\dots,n\)) on their diagonal, respectively (for more details, see [6, Appendix A] and [4, Sections 2 and 3]). Writing \(a_{i\cdot}\) for the \(i\)th row of \(\Gamma_{l}\), application of the Cauchy-Binet formula leads to \[\det J_{\text{red}}=\mu|\mu||u_{1}u_{2}u_{3}|\sum_{i<j}\frac{\widetilde{\Gamma}[\{i,j\},\{1,2\}]}{\bar{x}_{i}\bar{x}_{j}}(\mathbf{1}\cdot(a_{i\cdot}\times a_{j\cdot})), \tag{7}\] where \(\widetilde{\Gamma}[\{i,j\},\{1,2\}]\) is the determinant of the \(2\times 2\) submatrix of \(\widetilde{\Gamma}\) formed by its \(i\)th and \(j\)th rows, \(\mathbf{1}\) is the vector of ones in \(\mathbb{R}^{3}\), and we also used the fact that \(\mu u_{k}=|\mu u_{k}|\). Note that the sign of \(\mathbf{1}\cdot(a_{i\cdot}\times a_{j\cdot})\) tells us about the orientation of the three source complexes projected onto the \((i,j)\) coordinate plane. We remark that any choice of privileged species must lead to the same value of \(\det J_{\text{red}}\). In the special case \(n=2\), the only positive stoichiometric class is \(\mathbb{R}^{2}_{+}\) itself, \(\widetilde{\Gamma}\) is the identity matrix, and the reduced Jacobian determinant is just the Jacobian determinant. Hence, \[\det J=\frac{\mu|\mu||u_{1}u_{2}u_{3}|}{\bar{x}_{1}\bar{x}_{2}}(\mathbf{1}\cdot(a_{1\cdot}\times a_{2\cdot})). \tag{8}\] From this expression we observe that for \(J\) to have a nonzero determinant, the three source complexes must be affinely independent (i.e., they must span a triangle). Furthermore, since \(\mathbf{1}\cdot(c\times d)\) has the same sign as \(\mu\), \(\det J\) is positive if and only if the three reaction vectors \([c_{1},d_{1}]^{\top}\), \([c_{2},d_{2}]^{\top}\), \([c_{3},d_{3}]^{\top}\) span a triangle with the same orientation as the triangle spanned by the three source complexes. If the orientations of the triangles spanned by the sources and the reaction vectors are opposite to each other, then \(\det J<0\), and the equilibrium is a saddle.
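As a quick numerical check of (8), the following sketch (Python with NumPy; the network is the LVA of (19) below, and the rate constants are arbitrary choices of ours) compares the Jacobian determinant at the positive equilibrium, computed directly, with the right-hand side of (8).

```python
import numpy as np

# LVA (19):  2X -> 3X,  X + Y -> 2Y,  Y -> 0      (species order: X, Y)
Gamma   = np.array([[ 1, -1,  0],
                    [ 0,  1, -1]], dtype=float)
Gamma_l = np.array([[ 2,  1,  0],
                    [ 0,  1,  1]], dtype=float)
kappa   = np.array([1.0, 2.0, 1.0])

c, d = Gamma[0], Gamma[1]
u = np.cross(c, d)                      # spans ker(Gamma); here u = (1, 1, 1) > 0

xbar = np.array([kappa[2]/kappa[1], kappa[0]*kappa[2]/kappa[1]**2])   # positive equilibrium
mu = kappa[0]*xbar[0]**2                # kappa o xbar^{Gamma_l^T} = mu*u

# Jacobian at xbar:  Gamma diag(kappa o xbar^{Gamma_l^T}) Gamma_l^T diag(1/xbar)
monom = np.prod(xbar[None, :]**Gamma_l.T, axis=1)
J = Gamma @ np.diag(kappa*monom) @ Gamma_l.T @ np.diag(1.0/xbar)

# Right-hand side of (8)
a1, a2 = Gamma_l[0], Gamma_l[1]
det_formula = mu*abs(mu)*abs(np.prod(u))/np.prod(xbar)*np.sum(np.cross(a1, a2))
print(np.linalg.det(J), det_formula)    # both 0.5
```

Both values are positive, consistent with the fact that for this network the source complexes and the reaction vectors span triangles with the same orientation.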
We illustrate the two possibilities with the following two networks: on the left is the LVA (see (19) below); and on the right is a network with the same complexes as the LVA, but different reaction vectors. Any positive equilibrium of the latter network is necessarily a saddle. To illustrate formula (7) in the case where \(n\geq 3\), consider the \((4,3,2)\) network \[\begin{split} 2\mathsf{X}&\longrightarrow 3\mathsf{X}\\ \mathsf{X}+\mathsf{Y}&\longrightarrow\mathsf{Z}+ \mathsf{W}\\ \mathsf{Z}+\mathsf{W}&\longrightarrow\mathsf{Y}\end{split}\] which will play a role in Lemma 12 below. Here, \[\Gamma=\begin{bmatrix}1&-1&0\\ 0&-1&1\\ 0&1&-1\\ 0&1&-1\end{bmatrix}=\begin{bmatrix}1&0\\ 0&1\\ 0&-1\\ 0&-1\end{bmatrix}\begin{bmatrix}1&-1&0\\ 0&-1&1\end{bmatrix}\text{ and }\Gamma_{l}=\begin{bmatrix}2&1&0\\ 0&1&0\\ 0&0&1\\ 0&0&1\end{bmatrix}.\] Therefore, with this choice of species ordering, \(u_{1}=u_{2}=u_{3}=-1\) and \(\mu<0\). Since the minors \(\widetilde{\Gamma}[\{i,j\},\{1,2\}]\) vanish when \(2\leq i<j\leq 4\), only the choices \(i=1\) and \(j=2,3,4\) give nonzero terms in (7). We obtain \[\det J_{\text{red}}=\mu|\mu|\left(\frac{2}{\bar{x}\bar{y}}+\frac{1}{\bar{x} \bar{z}}+\frac{1}{\bar{x}\bar{w}}\right),\] which is negative since \(\mu<0\). Hence, every positive equilibrium is a saddle within its stoichiometric class and, by the observations in Section 3.1, the network admits no periodic orbits. ### Necessary reactions for isolated periodic orbits in quadratic systems In this subsection, we use a divergence argument and the Bendixson-Dulac Test to show that the presence of certain reactions is necessary for the occurrence of isolated periodic orbits in rank-two, quadratic, mass-action systems. In the first result, we make no assumption on the number of reactions in the system. **Lemma 4**.: _Assume that a rank-two, quadratic network with no trivial species has no reaction of the form_ \[2\mathsf{X}_{j}\longrightarrow(2+c_{j})\mathsf{X}_{j}+\sum_{i\neq j}c_{i} \mathsf{X}_{i}\text{ with }c_{j}>0\text{ and }c_{i}\geq 0\text{ for }i\neq j. \tag{9}\] _Suppose that its associated mass-action system admits a periodic orbit. Then the network has either two or three species, and the mass-action differential equation reads either as_ \[\begin{split}\dot{x}_{1}&=x_{1}(r_{1}+b_{12}x_{2})\\ \dot{x}_{2}&=x_{2}(r_{2}+b_{21}x_{1})\end{split} \text{or} \dot{x}_{1}=x_{1}(b_{12}x_{2}+b_{13}x_{3}) \tag{10}\] \[\begin{split}\dot{x}_{2}&=x_{2}(b_{21}x_{1}+b_{23}x_{ 3})\\ \dot{x}_{3}&=x_{3}(b_{31}x_{1}+b_{32}x_{2})\end{split}\] \[\text{with }b_{12}b_{13}<0,b_{21}b_{23}<0,b_{31}b_{32}<0\] \[\text{and }b_{12}b_{23}b_{31}+b_{13}b_{21}b_{32}=0\] _Consequently, a rank-two, quadratic, mass-action system with no trivial species and no reaction of the form (9) admits no isolated periodic orbits._ Proof.: Consider a mass-action system satisfying the hypotheses of the lemma. By remarks in Section 3.1, each periodic orbit of the system must be positive. Denote by \(f\) the vector field on \(\mathbb{R}^{n}_{+}\) obtained by multiplying the r.h.s. of the system by the Dulac function \((x_{1}\cdots x_{n})^{-1}\). We consider the divergence \(\operatorname{div}\!f=\sum_{j=1}^{n}\frac{\partial f_{j}}{\partial x_{j}}\). For a fixed \(j\in\{1,\ldots,n\}\), a reaction \(\sum_{i=1}^{n}a_{i}\mathsf{X}_{i}\stackrel{{\kappa}}{{ \longrightarrow}}\sum_{i=1}^{n}(a_{i}+c_{i})\mathsf{X}_{i}\) contributes the term \(c_{j}\kappa x_{1}^{a_{1}-1}\cdots x_{n}^{a_{n}-1}\) to \(f_{j}\). 
Hence, the contribution of such a reaction to \(\operatorname{div}\!f\) is zero if \(a_{j}=1\) or \(c_{j}=0\), while it is negative if either \(a_{j}=0\) and \(c_{j}>0\) or \(a_{j}=2\) and \(c_{j}<0\). These are the only two possibilities, as reactions of the form (9) are excluded, and consequently, no reaction makes a positive contribution to \(\operatorname{div}\!f\). Clearly, \(\operatorname{div}\!f\) is either everywhere negative on \(\mathbb{R}^{n}_{+}\), if there is some reaction which contributes a negative term, or is identically zero on \(\mathbb{R}^{n}_{+}\) if there is no such reaction. If \(\operatorname{div}\!f\) is negative everywhere in \(\mathbb{R}^{n}_{+}\) then periodic orbits in \(\mathbb{R}^{n}_{+}\) are precluded by the Bendixson-Dulac Test, see [19, Theorem 3, Remark (3)]. If, on the other hand, \(\operatorname{div}\!f\equiv 0\) on \(\mathbb{R}^{n}_{+}\), then the discussion above implies that for all \(j\), \(c_{j}\neq 0\) implies \(a_{j}=1\). Equivalently, the mass-action differential equation is a Lotka-Volterra equation with no diagonal term (i.e., the monomial \(x_{j}^{2}\) does not occur in the expression for \(\dot{x}_{j}\)): \[\dot{x}_{j}=x_{j}\left(r_{j}+\sum_{k\neq j}b_{jk}x_{k}\right)\text{ for }j=1, \ldots,n.\] **Case \(n=2\).** In order to admit a periodic orbit the system must admit a positive equilibrium which is not a saddle, from which it easily follows that \(r_{1}r_{2}<0,r_{1}b_{12}<0,r_{2}b_{21}<0\), as claimed. **Case \(n=3\).** Since the rank of the network is two, there is a nonzero \(d\in\mathbb{R}^{3}\) such that \(d_{1}\dot{x}_{1}+d_{2}\dot{x}_{2}+d_{3}\dot{x}_{3}=0\). Hence, \(d_{1}r_{1}=d_{2}r_{2}=d_{3}r_{3}=0\) and \[\begin{bmatrix}b_{12}&b_{21}&0\\ b_{13}&0&b_{31}\\ 0&b_{23}&b_{32}\end{bmatrix}\begin{bmatrix}d_{1}\\ d_{2}\\ d_{3}\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}.\] This implies \(b_{12}b_{23}b_{31}+b_{13}b_{21}b_{32}=0\). Further, all \(d_{i}\) must be nonzero, for otherwise either the system has a trivial species, or some species concentration increases strictly along positive trajectories, ruling out the existence of a periodic orbit (see [9, proof of Theorem 4.1]). We conclude that \(r_{1}=r_{2}=r_{3}=0\). The existence of a positive equilibrium then implies \(b_{12}b_{13}<0,b_{21}b_{23}<0,b_{31}b_{32}<0\). **Case \(n\geq 4\).** One finds that, in fact, there is no rank-two network without trivial species whose mass-action differential equation is a Lotka-Volterra equation with no diagonal term. See [9, proof of Theorem 4.1] for the details. The nonexistence of isolated periodic orbits for the systems in (10) follows immediately from the fact that both vector fields have (nonlinear) first integrals on \(\mathbb{R}^{2}_{+}\) and \(\mathbb{R}^{3}_{+}\) respectively and, in fact, restricted to two-dimensional invariant sets, are Hamiltonian. The conserved quantities are \(r_{1}\log y-r_{2}\log x+b_{12}y-b_{21}x\) (in the two-species case) and \(d_{1}d_{2}b_{23}\log x+d_{2}d_{3}b_{31}\log y+d_{1}d_{3}b_{12}\log z\) (in the three-species case). Notice that the only trimolecular reaction of the form (9) is \(2\mathsf{X}_{j}\longrightarrow 3\mathsf{X}_{j}\). Hence, we obtain the following result, which is a major step towards proving one of our main results, namely, Theorem 1. 
**Corollary 5**.: _If a rank-two, trimolecular, quadratic, mass-action network with no trivial species admits an isolated periodic orbit, then there exists \(j\in\{1,\ldots,n\}\) such that \(2\mathsf{X}_{j}\longrightarrow 3\mathsf{X}_{j}\) is a reaction in the network._ If we restrict the scope of Lemma 4 to the three-reaction case, we can characterize those networks that lead to the differential equations in (10). The Lotka reactions (1) and the Ivanova reactions (2) give such instances. However, in addition, we find a two-parameter family of two-species networks generalising the Lotka reactions, and a new two-parameter family of three-species networks, all of which which give rise to differential equations of the form in (10). **Lemma 6**.: _Assume that a three-reaction, quadratic network with no trivial species has no reaction of the form (9). Then its associated mass-action system admits a periodic orbit if and only if, up to a permutation of the species, the network is either the Ivanova reactions or belongs to one of the following two families with parameters \(c,d\geq 1\):_ \[\begin{split}\mathsf{X}&\longrightarrow(1+c) \mathsf{X}\\ \mathsf{X}+\mathsf{Y}&\longrightarrow(1+d)\mathsf{Y} \\ \mathsf{Y}&\longrightarrow 0\end{split} \tag{11}\] _and_ \[\begin{split}\mathsf{Z}+\mathsf{X}& \longrightarrow(1+c)\mathsf{X}\\ \mathsf{X}+\mathsf{Y}&\longrightarrow 0\\ \mathsf{Y}+\mathsf{Z}& \longrightarrow(1+cd)\mathsf{Y}+(1+d)\mathsf{Z}\end{split} \tag{12}\] _In the case of the Ivanova reactions and the networks in (11), for all rate constants, each stoichiometric class contains a unique positive equilibrium, which is a global center in its stoichiometric class. Depending on the rate constants, the networks in (12) either have no periodic orbits; or they have, on some stoichiometric classes, a unique positive equilibrium which is a global center._ Proof.: Consider a reaction network satisfying the hypotheses of the lemma. Recall from Section 2.4 that the rank of a three-reaction system that admits a periodic orbit is necessarily two. By Lemma 4, the network has either two or three species, and the mass-action differential equation takes one of the forms in (10). The two-species case is straightforward: the reader may confirm that any three-reaction network without trivial species giving rise to the differential equation on the left of (10) must be of the form in (11). Any such network admits a first integral \(d\kappa_{2}x+\kappa_{2}y-\kappa_{3}\log x-c\kappa_{1}\log y\) on \(\mathbb{R}^{2}_{+}\), which has bounded level sets and a global minimum at the unique positive equilibrium. It follows immediately that the equilibrium is a global center. We focus on the slightly harder three-species case. It is easily seen that if a three-reaction, three-species network leads to the differential equation on the right in (10), it is necessarily of the form \[\begin{split}\mathsf{Z}+\mathsf{X}& \xrightarrow{\kappa_{1}}(1+c_{31})\mathsf{Z}+(1+c_{11})\mathsf{X} &\dot{x}=x(\kappa_{1}c_{11}z+\kappa_{2}c_{12}y),\\ \mathsf{X}+\mathsf{Y}&\xrightarrow{\kappa_{2}}(1+c _{12})\mathsf{X}+(1+c_{22})\mathsf{Y}&\dot{y}=y(\kappa_{2}c_{22} x+\kappa_{3}c_{23}z),\\ \mathsf{Y}+\mathsf{Z}&\xrightarrow{\kappa_{3}}(1+c _{23})\mathsf{Y}+(1+c_{33})\mathsf{Z}&\dot{z}=z(\kappa_{3}c_{33}y+ \kappa_{1}c_{31}x)\end{split}\] with \(c_{ij}\geq-1\) and \(\operatorname{sgn}c_{ii}=-\operatorname{sgn}c_{i,i+1}\neq 0\). Hence, three of the six \(c_{ij}\)'s are equal to \(-1\), and the other three are positive integers. 
By a short calculation, the rank of the network is two if and only if \(c_{11}c_{22}c_{33}=-c_{12}c_{23}c_{31}\). If none of the \(c_{ii}\)'s is negative then \(c_{12}=c_{23}=c_{31}=-1\) and \(c_{11}=c_{22}=c_{33}=1\), leading to the Ivanova reactions. If exactly one of the \(c_{ii}\)'s is negative then (because of the cyclic symmetry of the species) w.l.o.g. we may assume \(c_{22}=-1\), and \(c_{11}>0\) and \(c_{33}>0\). Hence, \(c_{12}=c_{31}=-1\) and, because of the rank condition, \(c_{23}=c_{11}c_{33}\), leading to the family of networks in (12). The cases when two or three of the \(c_{ii}\)'s are negative can be reduced to the cases already discussed by swapping two species. The behaviour of the Ivanova mass-action system is widely known [28, page 630]. Some straightforward calculations demonstrate that the networks in (12) indeed give rise to a differential equation with centers in some stoichiometric classes for certain rate constants. The set of positive equilibria is the ray \(t(\kappa_{3}cd,\kappa_{1}c,\kappa_{2})\) for \(t>0\), the positive stoichiometric classes are given by \(\mathcal{P}_{D}=\{(x,y,z)\in\mathbb{R}_{+}^{3}\colon x-y+cz=D\}\) for \(D\in\mathbb{R}\), and the system has a conserved quantity \(d\kappa_{3}\log x-\kappa_{1}\log y+\kappa_{2}\log z\). The analysis of the system reveals the following.

* When \(\kappa_{1}>\kappa_{2}+\kappa_{3}d\), there is a positive equilibrium in \(\mathcal{P}_{D}\) if and only if \(D<0\). In this case, the equilibrium is unique, and it is a global center, since the conserved quantity has compact level sets on \(\mathcal{P}_{D}\) when \(D<0\).
* When \(\kappa_{1}=\kappa_{2}+\kappa_{3}d\), the whole ray of positive equilibria lies in \(\mathcal{P}_{0}\). In fact, every ray in \(\mathcal{P}_{0}\) through the origin is invariant, and hence, the system has no periodic orbit.
* When \(\kappa_{1}<\kappa_{2}+\kappa_{3}d\), there is a positive equilibrium in \(\mathcal{P}_{D}\) if and only if \(D>0\). In this case, the equilibrium is unique, and it is a saddle. Hence, by the discussion in Section 3.1, the system has no periodic orbit.

We remark that the family of networks in (11) is exactly [12, Eq. (5)].

## 4 The analysis of quadratic \((2,3,2)\) systems

In this section, we are interested in two-species, three-reaction, quadratic, mass-action systems. Our first main result on these systems, Theorem 7, is that they admit no isolated periodic orbits when target molecularities do not exceed three. We will later, in Theorem 10, generalise this result to mass-action systems with arbitrary numbers of species; however, the simpler planar case is of interest in itself, and its proof contributes towards the proof of the more general result.

**Theorem 7**.: _Three-reaction, two-species, trimolecular, quadratic mass-action systems admit no isolated periodic orbit._

The proof of Theorem 7 is completed in Section 4.3. In Section 4.5, we arrive at our second main result about planar systems, Theorem 9, where we find _all_ three-reaction, planar, quadratic mass-action systems (with arbitrary target molecularities) which admit an Andronov-Hopf bifurcation. It turns out that there are two families of networks that admit a supercritical Andronov-Hopf bifurcation, while another family admits a vertical Andronov-Hopf bifurcation. Exactly four of these networks are tetramolecular, while all other networks with an Andronov-Hopf bifurcation have a target complex with a molecularity of at least five.
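Before fixing notation, here is a minimal numerical illustration (Python with NumPy/SciPy; rate constants and the initial condition are arbitrary choices of ours) of the kind of non-isolated oscillation that does occur in this class: for the Lotka reactions (1), i.e. the member \(c=d=1\) of the family (11), the first integral from the proof of Lemma 6 is constant along trajectories, so every positive non-equilibrium orbit is a closed curve around the equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka reactions (1), i.e. family (11) with c = d = 1:
#   X -> 2X,  X + Y -> 2Y,  Y -> 0
k1, k2, k3 = 1.0, 1.0, 1.0

def rhs(t, v):
    x, y = v
    return [k1*x - k2*x*y, k2*x*y - k3*y]

def first_integral(x, y):
    # d*k2*x + k2*y - k3*log(x) - c*k1*log(y) with c = d = 1 (see the proof of Lemma 6)
    return k2*x + k2*y - k3*np.log(x) - k1*np.log(y)

sol = solve_ivp(rhs, (0.0, 50.0), [0.3, 0.4], rtol=1e-10, atol=1e-12)
V = first_integral(sol.y[0], sol.y[1])
print(V.max() - V.min())   # close to zero: V is conserved, so the orbit is a closed curve around (1, 1)
```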
### Setting For readability, we use a slightly different notation in the two-species case than in the general case, following [8, Section 5]. Throughout this section, we are analysing the three-reaction mass-action system \[\begin{split}& a_{1}\mathsf{X}+b_{1}\mathsf{Y}\xrightarrow{ \kappa_{1}}(a_{1}+c_{1})\mathsf{X}+(b_{1}+d_{1})\mathsf{Y}\\ & a_{2}\mathsf{X}+b_{2}\mathsf{Y}\xrightarrow{\kappa_{2}}(a_{2}+ c_{2})\mathsf{X}+(b_{2}+d_{2})\mathsf{Y}\\ & a_{3}\mathsf{X}+b_{3}\mathsf{Y}\xrightarrow{\kappa_{3}}(a_{3}+ c_{3})\mathsf{X}+(b_{3}+d_{3})\mathsf{Y}\end{split} \tag{13}\] and its associated differential equation \[\begin{split}&\dot{x}=c_{1}\kappa_{1}x^{a_{1}}y^{b_{1}}+c_{2} \kappa_{2}x^{a_{2}}y^{b_{2}}+c_{3}\kappa_{3}x^{a_{3}}y^{b_{3}},\\ &\dot{y}=d_{1}\kappa_{1}x^{a_{1}}y^{b_{1}}+d_{2}\kappa_{2}x^{a_{2 }}y^{b_{2}}+d_{3}\kappa_{3}x^{a_{3}}y^{b_{3}}\end{split} \tag{14}\] with \(a_{i}\), \(b_{i}\), \(a_{i}+c_{i}\), \(b_{i}+d_{i}\) (\(i=1,2,3\)) being nonnegative integers. By Lemma 3, equation (14) can have no periodic orbit if the three sources \((a_{1},b_{1})\), \((a_{2},b_{2})\), \((a_{3},b_{3})\) lie on a line. Hence, from here on, we assume that \[(a_{1},b_{1}),\,(a_{2},b_{2})\text{ and }(a_{3},b_{3})\text{ span a triangle, which is positively oriented.} \tag{15}\] Clearly, the assumption on the orientation does not restrict generality. The stoichiometric matrix of the network is \[\Gamma=\begin{bmatrix}c_{1}&c_{2}&c_{3}\\ d_{1}&d_{2}&d_{3}\end{bmatrix}\,.\] Since we are interested in finding periodic orbits, we assume that \(\operatorname{rank}\Gamma=2\) and the network is dynamically nontrivial. Thus, with \(c=[c_{1},c_{2},c_{3}]^{\top}\) and \(d=[d_{1},d_{2},d_{3}]^{\top}\), the kernel of \(\Gamma\) is spanned by \(u=c\times d\in\mathbb{R}_{+}^{3}\cup\mathbb{R}_{-}^{3}\). Here, \[\begin{split} u_{1}&=c_{2}d_{3}-c_{3}d_{2},\\ u_{2}&=c_{3}d_{1}-c_{1}d_{3},\\ u_{3}&=c_{1}d_{2}-c_{2}d_{1}.\end{split} \tag{16}\] By Lemma 2, under the stated assumptions, the mass-action system (14) has a unique positive equilibrium \((\bar{x},\bar{y})\). As in Section 3.3, there exists a nonzero \(\mu\in\mathbb{R}\) such that \(\mu u_{i}=\kappa_{i}\bar{x}^{a_{i}}\bar{y}^{b_{i}}\) (\(i=1,2,3\)), and the Jacobian matrix, denoted by \(J\), at the equilibrium is given by \[J=\mu\begin{bmatrix}\sum_{i=1}^{3}a_{i}c_{i}u_{i}&\sum_{i=1}^{3}b_{i}c_{i}u_{ i}\\ \sum_{i=1}^{3}a_{i}d_{i}u_{i}&\sum_{i=1}^{3}b_{i}d_{i}u_{i}\end{bmatrix} \begin{bmatrix}1/\bar{x}&0\\ 0&1/\bar{y}\end{bmatrix}.\] By Lemma 2, \(\det J\neq 0\). In fact, since the source complexes are positively oriented by assumption, formula (8) implies that \[\operatorname{sgn}\det J=\operatorname{sgn}\mu=\operatorname{sgn}u_{1}= \operatorname{sgn}u_{2}=\operatorname{sgn}u_{3}. \tag{17}\] When \(\det J<0\), or equivalently \(u\in\mathbb{R}_{-}^{3}\), the equilibrium is a saddle and, by the discussion in Section 3.1, the mass-action system (13) admits no periodic orbits. For later use, the trace of the Jacobian matrix at the positive equilibrium is given by \[\operatorname{tr}J=\mu\left(\frac{1}{\bar{x}}\sum_{i=1}^{3}a_{i}c_{i}u_{i}+ \frac{1}{\bar{y}}\sum_{i=1}^{3}b_{i}d_{i}u_{i}\right). \tag{18}\] ### Lotka-Volterra-Autocatalator In light of Corollary 5, a trimolecular network satisfying the assumptions of Theorem 7 with no reaction \(2\mathsf{X}_{j}\longrightarrow 3\mathsf{X}_{j}\) cannot have an isolated periodic orbit. This motivates the investigation of networks with \(2\mathsf{X}_{j}\longrightarrow 3\mathsf{X}_{j}\). 
Beginning with the Lotka reactions (1) and replacing the linear autocatalytic step \(\mathsf{X}\longrightarrow 2\mathsf{X}\) by the quadratic autocatalytic step \(2\mathsf{X}\longrightarrow 3\mathsf{X}\), we obtain the system

\[\begin{array}{c}2\mathsf{X}\xlongrightarrow{\kappa_{1}}3\mathsf{X}\\ \mathsf{X}+\mathsf{Y}\xlongrightarrow{\kappa_{2}}2\mathsf{Y}\\ \mathsf{Y}\xlongrightarrow{\kappa_{3}}0\end{array}\qquad\qquad\begin{array}{c}\dot{x}=\kappa_{1}x^{2}-\kappa_{2}xy,\\ \dot{y}=\kappa_{2}xy-\kappa_{3}y,\end{array} \tag{19}\]

which is called the Lotka-Volterra-Autocatalator (LVA) [12, Eq. (8)], [26, Eq. (1)]. As shown in [12] using a Lyapunov function, the positive equilibrium is globally repelling, and the system can have no periodic orbit. The LVA belongs to a family of networks we refer to as the "generalised LVA", namely,

\[\begin{array}{rcl}2\mathsf{X}&\xrightarrow{\kappa_{1}}&3\mathsf{X}\\ \mathsf{X}+\mathsf{Y}&\xrightarrow{\kappa_{2}}&(d+1)\mathsf{Y}\\ \mathsf{Y}&\xrightarrow{\kappa_{3}}&0\end{array}\qquad\qquad\begin{array}{rcl}\dot{x}&=&\kappa_{1}x^{2}-\kappa_{2}xy,\\ \dot{y}&=&d\kappa_{2}xy-\kappa_{3}y,\end{array} \tag{20}\]

where \(d\geq 1\). From (17) and (18), we find that \(\det J>0\) and \(\operatorname{tr}J>0\) at the unique positive equilibrium of any system in the generalised LVA family, and thus it is a repellor. In fact, by the use of the Dulac function \(\frac{1}{xy}\) (as in the proof of Lemma 4), one sees that networks in (20) cannot have a periodic orbit. What makes the generalised LVA unique becomes apparent in Lemma 8 below.

### Proof of Theorem 7

To prove Theorem 7, consider a two-species, three-reaction, trimolecular, quadratic network. Assume also that the network has rank two, is dynamically nontrivial, and the sources span a triangle (as discussed in Section 4.1, these are all necessary for the associated mass-action differential equation to have a periodic orbit). We now distinguish between two cases. On the one hand, if neither \(2\mathsf{X}\longrightarrow 3\mathsf{X}\) nor \(2\mathsf{Y}\longrightarrow 3\mathsf{Y}\) is present then, noting that a rank-two network on two species clearly has no trivial species, the system admits no isolated periodic orbit by Corollary 5. On the other hand, if one of these reactions is present then, by Lemma 8 below, the system admits no periodic orbit. Hence, the proof of Theorem 7 will be complete after we prove Lemma 8.

**Lemma 8**.: _Suppose a \((2,3,2)\) network (13) includes the reaction \(2\mathsf{X}\longrightarrow 3\mathsf{X}\) and satisfies the conditions above (i.e., it is quadratic, trimolecular, dynamically nontrivial, and has sources spanning a positively oriented triangle). Let \(J\) be the Jacobian matrix of the associated mass-action system at the unique positive equilibrium. Then, either_

1. \(\det J<0\) _(the equilibrium is a saddle); or_
2. \(\det J>0\)_, and the network belongs to the generalised LVA family (_20_)._

_Consequently, the system admits no periodic orbit._

Proof.: The assumptions of the lemma imply that \(c_{1}=1\) and \(d_{1}=0\) and so the differential equation (14) reads

\[\begin{array}{rcl}\dot{x}&=&\kappa_{1}x^{2}+c_{2}\kappa_{2}x^{a_{2}}y^{b_{2}}+c_{3}\kappa_{3}x^{a_{3}}y^{b_{3}},\\ \dot{y}&=&d_{2}\kappa_{2}x^{a_{2}}y^{b_{2}}+d_{3}\kappa_{3}x^{a_{3}}y^{b_{3}}.\end{array}\]

If \(\det J<0\), then the system admits no periodic orbit (see Section 3.1). So now suppose \(\det J>0\). Then \(u\in\mathbb{R}^{3}_{+}\), see (17).
Since \(c_{1}=1\) and \(d_{1}=0\), we get, from (16), that \(u_{2}=-d_{3}>0\) and \(u_{3}=d_{2}>0\). By (15), the source \((a_{3},b_{3})\) is one of \((0,0)\), \((1,0)\) or \((0,1)\), but only in the latter case could \(d_{3}\) be negative. Hence, \((a_{3},b_{3})=(0,1)\) and \(d_{3}=-1\). Further, \(c_{3}\geq 0\) also follows. Then \(u_{1}=c_{2}d_{3}-c_{3}d_{2}>0\) implies \(c_{2}<0\), hence \((a_{2},b_{2})=(1,1)\) and \(c_{2}=-1\) (where we once more used (15)). Then \(u_{1}=1-c_{3}d_{2}>0\) and hence \(c_{3}=0\). Consequently, the network belongs to the generalised LVA family (20) and, by the discussion in Section 4.2, the system admits no periodic orbit. ### Discussion on Theorem 7 By Theorem 10 below, the conclusion of Theorem 7 holds true for systems with any number of species. However, as we now illustrate by examples, the restrictions on the number of reactions and on the (source and target) molecularities cannot be dropped. Each of the following three planar networks admits a supercritical Andronov-Hopf bifurcation, and thus a stable limit cycle. On the left is a four-reaction, quadratic, trimolecular, mass-action system obtained by adding a reaction to the LVA (19); in the middle is a three-reaction, cubic, mass-action system known as the Selkov oscillator [25]; and on the right is a three-reaction, quadratic, tetramolecular, mass-action system which appeared as (4), and is the simplest member of the family (22) below. The bifurcations in these networks occur at \(\kappa_{3}=\kappa_{4}\), \(\kappa_{2}=\frac{\kappa_{2}^{3}}{\kappa_{1}^{2}}\), and \(\kappa_{1}=\kappa_{2}\), respectively. The analysis of all three systems is performed in [7]. ### Andronov-Hopf bifurcations This subsection is devoted to finding all three-reaction, planar, quadratic, mass-action systems that admit an _Andronov-Hopf bifurcation_, where a pair of complex conjugate eigenvalues crosses the imaginary axis as a parameter varies, see e.g. [13, Section 7.2] or [17, Section 3.4]. In light of Theorem 7, networks allowing such a bifurcation necessarily have a target complex with molecularity at least four. If the bifurcation is supercritical, a stable limit cycle is born. In degenerate cases (e.g. in linear systems [17, Fig. 3.9]), there is a one-parameter family of periodic solutions (a center) at the critical bifurcation parameter, an event we refer to as a _vertical_ Andronov-Hopf bifurcation. We proceed first by enumerating all possible configurations of (at most) bimolecular source complexes on two species. For each configuration we then identify which networks satisfy the necessary conditions for oscillation described above, including that they must be dynamically nontrivial, and conditions on the determinant and trace of the Jacobian matrix at positive equilibria. Where the set of networks with given source complexes and meeting these necessary conditions is nonempty, we identify those which admit an Andronov-Hopf bifurcation, and confirm whether the bifurcation is nondegenerate. Up to exchange of X and Y, there are ten ways to choose three bimolecular source complexes that do not lie on a line. These are illustrated in the following diagram: Theorem 9 below analyses these ten possibilities, and can be regarded as the second of our main results on three-reaction, planar, quadratic networks. We find that in Cases 1 to 6 the mass-action systems admit no periodic orbits. In Cases 7 to 10, we find systems with periodic orbits, but only in Cases 9 and 10 are these periodic orbits isolated. 
Notice that in the statement of the theorem in Cases 7 to 10 we list the source complexes such that they span a positively oriented triangle. **Theorem 9**.: _For a three-reaction, planar, quadratic, mass-action system, the following hold._ * _Cases 1 to 6. The system admits no periodic orbit._ * _Case 7_, \(X\), \(Y\). The system has a positive equilibrium which is a center for all_ _if_ _Otherwise, the system admits no periodic orbit._ * _Case 8_, \(X\), \(Y\), \(X\). The system has a periodic orbit if and only if_ _In this case, the system has a positive equilibrium which is a center, and by varying the ratio_ _, one obtains a vertical Andronov-Hopf bifurcation._ * _Case 9_, \(X\), \(Y\). The system admits an Andronov-Hopf bifurcation if and only if_ _Furthermore, the Andronov-Hopf bifurcation is supercritical._ * _Case 10_, \(X\), \(Y\), \(Y\). The system admits an Andronov-Hopf bifurcation if and only if_ _Furthermore, the Andronov-Hopf bifurcation is supercritical._ Proof.: We note first that whenever the networks in cases 1 to 10 are dynamically nontrivial, they are nondegenerate and, by Lemma 2, each corresponding mass-action system has a unique positive equilibrium for any choice of rate constants. On the other hand, if a network is dynamically trivial, then the corresponding mass-action system can have no periodic orbits. * **Cases 1 to 6** Assume, by way of contradiction, that the system admits a periodic orbit. First, we show that no reaction of the form (9) is present. Then we argue that the differential equation is not of the form (10), and thus arrive at a contradiction. In cases 1 and 2, the network obviously cannot have a reaction of the form (9). In cases 3 to 6 let the first source be 2X and the second source be (cases 3 and 4) or 2Y (cases 5 and 6). In all cases,, and hence. Since there is a periodic orbit by assumption, the network must be dynamically nontrivial, and follows. Additionally, the Jacobian determinant evaluated at the unique positive equilibrium must be positive, and therefore \(u\in\mathbb{R}_{+}^{3}\). Hence, \(0<u_{3}=c_{1}d_{2}-c_{2}d_{1}\) (see (16)), which implies \(c_{1}<0\). Since \(c_{1}\) and \(d_{2}\) are both negative, indeed there is no reaction of the form (9). Then, by Lemma 4, the associated mass-action system is of the Lotka-Volterra type with no diagonal term. In particular, only reactions with a source \(\mathsf{Y}\) or \(\mathsf{X}+\mathsf{Y}\) can contribute to \(\dot{y}\). However, since in all six cases there is at most one such source, \(\dot{y}\geq 0\) or \(\dot{y}\leq 0\). This contradicts the existence of a periodic orbit. * **Case \(\boxed{7}\times\), \(\mathsf{X}+\mathsf{Y}\), \(\mathsf{Y}\).** The differential equation reads \[\dot{x} =c_{1}\kappa_{1}x+c_{2}\kappa_{2}xy+c_{3}\kappa_{3}y,\] \[\dot{y} =d_{1}\kappa_{1}x+d_{2}\kappa_{2}xy+d_{3}\kappa_{3}y\] with \(c_{3},d_{1}\geq 0\). After multiplying by the Dulac function \((xy)^{-1}\), the divergence is \(-c_{3}\kappa_{3}x^{-2}-d_{1}\kappa_{1}y^{-2}\), which is nonpositive, and zero only if \(c_{3}=d_{1}=0\). Hence, if \(c_{3}=d_{1}=0\) is violated then, by the Bendixson-Dulac Test, the system admits no periodic orbit. The analysis of the case \(c_{3}=d_{1}=0\) is standard: see the proofs of Lemma 4 and Lemma 6. 
* **Case \(\boxed{8}\)**\(2\mathsf{X}\), \(\mathsf{X}+\mathsf{Y}\), \(\mathsf{X}\).** After division by \(x\), the differential equation reads \[\dot{x} =c_{1}\kappa_{1}x+c_{2}\kappa_{2}y+c_{3}\kappa_{3},\] \[\dot{y} =d_{1}\kappa_{1}x+d_{2}\kappa_{2}y+d_{3}\kappa_{3}\] with \(d_{1}\geq 0\) and \(d_{3}\geq 0\) (since \(b_{1}=0\) and \(b_{3}=0\)). This linear system has a periodic orbit (in fact, a center) if and only if there is a unique positive equilibrium at which \(\det J>0\), and \(\operatorname{tr}J=0\). Assume the system admits a periodic orbit. Then \(d_{2}<0\) (otherwise \(\dot{y}\geq 0\) in \(\mathbb{R}_{+}^{2}\)). Since \(\operatorname{tr}J=c_{1}\kappa_{1}+d_{2}\kappa_{2}\), we find that \(c_{1}>0\). Hence, from \(u_{3}>0\) (see (17)) we obtain \(c_{2}d_{1}<0\), which implies \(d_{1}>0\) and \(c_{2}<0\). From \(u_{2}>0\) it follows that \(c_{3}>0\). In fact, since additionally \(c_{2},d_{2}\geq-1\) (because \(a_{2}=b_{2}=1\)), both \(c_{2}\) and \(d_{2}\) equal to \(-1\). This gives all the sign conditions on \(c_{i}\), \(d_{i}\) in the statement of the theorem. The inequalities \(\frac{d_{3}}{c_{3}}<1<\frac{d_{1}}{c_{1}}\) are equivalent to \(u_{1}\), \(u_{2}\), \(u_{3}>0\). Therefore the conditions in the statement are necessary and sufficient for the existence of a periodic solution. Clearly, by varying the ratio \(\frac{\kappa_{1}}{\kappa_{2}}\), one obtains a vertical Andronov-Hopf bifurcation at \(\frac{\kappa_{1}}{\kappa_{2}}=-\frac{d_{2}}{c_{1}}\). * **Case \(\boxed{9}\)**\(2\mathsf{X}\), \(\mathsf{X}+\mathsf{Y}\), \(\mathsf{Y}\).** For an Andronov-Hopf bifurcation in (14), there must exist a unique positive equilibrium \((\bar{x},\bar{y})\) at which \(\det J>0\) and \(\operatorname{tr}J=0\) hold. By (18), \(\operatorname{tr}J=\mu\left(\frac{1}{\bar{x}}(c_{1}u_{1}-c_{3}u_{3})-\frac{1}{ \bar{y}}d_{1}u_{1}\right)\), where we used the general observations that \(\sum_{i=1}^{3}c_{i}u_{i}=0\) and \(\sum_{i=1}^{3}d_{i}u_{i}=0\) (recall that \(u=c\times d\)). Case \(d_{1}=0\). In this case, \(\operatorname{tr}J=0\) if and only if \(c_{1}u_{1}=c_{3}u_{3}\). Thus, \(c_{1}>0\) and \(c_{3}>0\) (\(c_{3}\geq 0\) is given, since \(a_{3}=0\)). Thus, \(c_{2}=-1\) (because \(c_{2}\geq-1\) and \(c_{2}<0\)). Also, \(d_{3}=-1\), because \(u_{2}>0\) implies \(d_{3}<0\). Also, \(d_{2}>0\), because \(u_{3}>0\). Hence, \(0<c_{3}d_{2}<1\), because \(u_{1}>0\). However, this has no integer solution. Case \(d_{1}>0\). Then \(c_{1}u_{1}>c_{3}u_{3}\), and hence, \(c_{1}>0\) and \(c_{2}<0\) (in fact, \(c_{2}=-1\)). If \(c_{3}=0\), we find that \(u_{i}>0\) is equivalent to \(d_{1}>0\), \(d_{3}<0\), and \(\frac{d_{2}}{c_{2}}<\frac{d_{1}}{c_{1}}\). If \(c_{3}>0\), we find that \(u_{i}>0\) and \(c_{1}u_{1}>c_{3}u_{3}\) are equivalent to \(\frac{1}{2}\left(\frac{d_{3}}{c_{3}}+\frac{d_{1}}{c_{1}}\right)<\frac{d_{2}}{c_{ 2}}<\frac{d_{1}}{c_{1}}\). To see that the Andronov-Hopf bifurcation is nondegenerate and, in fact, supercritical, one computes the first focal value and finds it is negative. See the Mathematica Notebook [7] for the calculations. * **Case \(\boxed{10}\)**\(2\mathsf{X}\), \(\mathsf{X}+\mathsf{Y}\), \(\mathsf{0}\).** As in the proof of case \(\boxed{9}\), for an Andronov-Hopf bifurcation to occur there must exist a unique positive equilibrium \((\bar{x},\bar{y})\) at which \(\det J>0\) and \(\operatorname{tr}J=0\) hold. 
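To see the supercritical bifurcation of case 9 numerically, the following sketch (Python with NumPy/SciPy) integrates the \(d=0\) member of the family (22) below, which we take here, as an assumption for the sake of the sketch, to be the network (4); for \(\kappa_{1}\) slightly above the bifurcation value \(\kappa_{1}=\kappa_{2}\), the trajectory spirals away from the unstable equilibrium and settles onto a small stable limit cycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

# d = 0 member of family (22):  2X -> 3X + Y,  X + Y -> Y,  Y -> 0
#   xdot = k1*x^2 - k2*x*y,   ydot = k1*x^2 - k3*y
k1, k2, k3 = 1.05, 1.0, 1.0           # k1 slightly above the Andronov-Hopf value k1 = k2

def rhs(t, v):
    x, y = v
    return [k1*x*x - k2*x*y, k1*x*x - k3*y]

xbar, ybar = k3/k2, k1*k3/k2**2       # unique positive equilibrium; unstable focus for k1 > k2
sol = solve_ivp(rhs, (0.0, 400.0), [xbar + 1e-3, ybar], rtol=1e-9, atol=1e-12)

late = sol.y[:, sol.t > 300.0]        # discard the transient
print(late[0].min(), late[0].max())   # x settles into a bounded oscillation around xbar
```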
By (18), \(\operatorname{tr}J=\mu\left(\frac{1}{\bar{x}}(c_{1}u_{1}-c_{3}u_{3})+\frac{1}{ \bar{y}}d_{2}u_{2}\right)\), where we used \(\sum_{i=1}^{3}c_{i}u_{i}=0\) (recall that \(u=c\times d\)). Since at least one of \(d_{1}\), \(d_{2}\) or \(d_{3}\) must be negative (as the network must be dynamically nontrivial), we find \(d_{2}=-1\). Then \(c_{1}u_{1}>c_{3}u_{3}\) implies \(c_{1}>0\). Hence, \(c_{2}=-1\). Since \(u_{1}>0\), we also find \(c_{3}>0\). Then, as in the proof of case 9, we find that the system admits an Andronov-Hopf bifurcation if and only if \(\frac{1}{2}\left(\frac{d_{3}}{c_{3}}+\frac{d_{1}}{c_{1}}\right)<\frac{d_{2}}{c_ {2}}<\frac{d_{1}}{c_{1}}\) holds. That the first focal value is negative is shown again in [7]. Hence, the Andronov-Hopf bifurcation is supercritical. ### Realisations of the oscillating systems in Theorem 9 In case 7, up to exchange of \(\mathsf{X}\) and \(\mathsf{Y}\), the mass-action systems with a center for all \(\kappa\) are given by \[\begin{array}{ccc}\mathsf{X}&\xrightarrow{\kappa_{1}}(1+c)\mathsf{X}&\qquad \dot{x}=\kappa_{1}cx-\kappa_{2}xy,\\ \mathsf{X}+\mathsf{Y}&\xrightarrow{\kappa_{2}}(1+d)\mathsf{Y}&\qquad\dot{y}= \kappa_{2}dxy-\kappa_{3}y\end{array} \tag{21}\] with \(c,d\geq 1\). Note that this is one of the families of networks, namely (11), obtained in Lemma 6, and this family also appears in [12, Eq. (5)]. For \(c=d=1\) the network is bimolecular and is, in fact, the Lotka reactions (1). Since the qualitative picture is the same for all \(\kappa\), these systems admit no bifurcation as \(\kappa\) is varied. In case 8, no tetramolecular network admits a vertical Andronov-Hopf bifurcation. The pentamolecular examples are \[\begin{array}{ccc}2\mathsf{X}&\xrightarrow{\kappa_{1}}3\mathsf{X}+2\mathsf{Y }&\\ \mathsf{X}+\mathsf{Y}&\xrightarrow{\kappa_{2}}0&\qquad\dot{x}=\kappa_{1}x^{2}- \kappa_{2}xy+c\kappa_{3}x,\\ \mathsf{X}&\xrightarrow{\kappa_{3}}(1+c)\mathsf{X}+d\mathsf{Y}&\qquad\dot{y}= 2\kappa_{1}x^{2}-\kappa_{2}xy+d\kappa_{3}x,\end{array}\] where \(0\leq d<c\) and \(c+d\leq 4\). The tetramolecular networks in case 9 that admit an Andronov-Hopf bifurcation are \[\begin{array}{ccc}2\mathsf{X}&\xrightarrow{\kappa_{1}}3\mathsf{X}+\mathsf{Y }&\\ \mathsf{X}+\mathsf{Y}&\xrightarrow{\kappa_{2}}(1+d)\mathsf{Y}&\qquad\dot{x}= \kappa_{1}x^{2}-\kappa_{2}xy,\\ \mathsf{Y}&\xrightarrow{\kappa_{3}}0&\qquad\dot{y}=\kappa_{1}x^{2}+d\kappa_{ 2}xy-\kappa_{3}y\end{array} \tag{22}\] with \(d=0,1,2,3\). In case 10 no hexamolecular network admits an Andronov-Hopf bifurcation. The heptamolecular examples are \[\begin{array}{ccc}2\mathsf{X}&\xrightarrow{\kappa_{1}}4\mathsf{X}+3\mathsf{Y }&\\ \mathsf{X}+\mathsf{Y}&\xrightarrow{\kappa_{2}}0&\qquad\dot{x}=2\kappa_{1}x^{2} -\kappa_{2}xy+c\kappa_{3},\\ 0&\xrightarrow{\kappa_{3}}c\mathsf{X}+d\mathsf{Y}&\qquad\dot{y}=3\kappa_{1}x^{ 2}-\kappa_{2}xy-d\kappa_{3},\end{array} \tag{23}\] where \(c>0\), \(d\geq 0\), \(c+d\leq 7\) and \(\frac{d}{c}<\frac{1}{2}\). The analysis of three-reaction, quadratic, trimolecular systems Our main goal in this section is to extend Theorem 7 to an arbitrary number of species, thereby obtaining Theorem 1, namely, no three-reaction, quadratic, trimolecular, mass-action system has an isolated periodic orbit. In fact, we prove more in Theorem 10, where mass-action systems in this class which admit a periodic orbit are fully characterized. Recall from Section 2.4 that a three-reaction mass-action system which admits a periodic orbit must have rank two. We thus focus on \((n,3,2)\) systems. 
In Section 5.1, we demonstrate by an example that \((n,3,2)\) mass-action systems with \(n\geq 3\) can have multiple isolated positive equilibria in a stoichiometric class, a phenomenon that does not occur when \(n=2\), and which makes the general case slightly more complicated. In Section 5.2, we study a family of networks that is related to the generalised LVA (20), and which plays an important role in Theorem 10. In Section 5.3, we state and prove our main result, Theorem 10. The proof uses three lemmas that are stated and proved in Section 5.4.

### Number of equilibria

For \((2,3,2)\) networks, by Lemma 2, whenever there is an isolated positive equilibrium, there is exactly one positive equilibrium for each choice of \(\kappa\). For \((n,3,2)\) networks with \(n\geq 3\), the number of isolated positive equilibria may depend on the positive stoichiometric class in question. In fact, even if an equilibrium is unique on its stoichiometric class, it can be degenerate. To illustrate these statements, consider the bimolecular \((3,3,2)\) mass-action system

\[\begin{array}{ccc}\mathsf{X}+\mathsf{Y}&\xrightarrow{\kappa_{1}}2\mathsf{Z}&\qquad\dot{x}=2\kappa_{2}z^{2}-\kappa_{1}xy,\\ 2\mathsf{Z}&\xrightarrow{\kappa_{2}}2\mathsf{X}&\qquad\dot{y}=\kappa_{3}z-\kappa_{1}xy,\\ \mathsf{Z}&\xrightarrow{\kappa_{3}}\mathsf{Y}&\qquad\dot{z}=2\kappa_{1}xy-2\kappa_{2}z^{2}-\kappa_{3}z.\end{array}\]

The set of positive equilibria is the hyperbola \(\{(x,y,z)\in\mathbb{R}_{+}^{3}\colon xy=\frac{\kappa_{3}^{2}}{2\kappa_{1}\kappa_{2}},z=\frac{\kappa_{3}}{2\kappa_{2}}\}\), and the positive stoichiometric classes are \(\mathcal{P}_{C}=\{(x,y,z)\in\mathbb{R}_{+}^{3}\colon x+y+z=C\}\) for \(C>0\). For any fixed rate constants, the number of positive equilibria in \(\mathcal{P}_{C}\) is \(0\), \(1\), or \(2\), depending on \(C\). In fact, the system admits fold bifurcations of equilibria, a phenomenon that is ruled out for \((2,3,2)\) mass-action systems.

### The Lifted LVA

The networks we study in this subsection play a special role in the main result, Theorem 10. We consider the family of mass-action systems

\[\begin{array}{ccc}2\mathsf{X}&\xrightarrow{\kappa_{1}}3\mathsf{X}&\qquad\dot{x}=\kappa_{1}x^{2}-\kappa_{2}xy,\\ \mathsf{X}+\mathsf{Y}&\xrightarrow{\kappa_{2}}(d+1)\mathsf{Y}+d\mathsf{Z}&\qquad\dot{y}=d\kappa_{2}xy-\kappa_{3}yz,\\ \mathsf{Y}+\mathsf{Z}&\xrightarrow{\kappa_{3}}0&\qquad\dot{z}=d\kappa_{2}xy-\kappa_{3}yz\end{array} \tag{24}\]

with \(d\geq 1\). Each member of this family is obtained by adding a new species, \(\mathsf{Z}\), to networks in the generalised LVA family (20) while preserving the rank of the network. The case \(d=1\) corresponds to the Lifted LVA (3), and consequently we refer to the networks in (24) as the "Lifted LVA family". Note that the Lifted LVA is the only trimolecular network in this family. Observe that as the networks in the Lifted LVA family clearly have no trivial species, any periodic orbits of the corresponding mass-action systems must be positive (see Section 3.1), and we can restrict attention to the positive orthant. The set of positive equilibria of (24) is the ray \(\{(t,\frac{\kappa_{1}}{\kappa_{2}}t,\frac{\kappa_{2}d}{\kappa_{3}}t)\colon t>0\}\). Since \(\dot{y}=\dot{z}\), the stoichiometric classes are given by the planes \(z=y+C\).
Hence, in local coordinates on a positive stoichiometric class, the differential equation (24) reduces to the 2d Lotka-Volterra system

\[\begin{aligned}\dot{x}&=x(\kappa_{1}x-\kappa_{2}y),\\ \dot{y}&=y(d\kappa_{2}x-\kappa_{3}(y+C)).\end{aligned} \tag{25}\]

To analyse (25), we apply the coordinate transformation \(v=\frac{y}{x}\), \(w=\frac{1}{x}\), which takes \(\mathbb{R}_{+}^{2}\) to \(\mathbb{R}_{+}^{2}\) and, after a multiplication by \(w\), transforms (25) into the Lotka-Volterra system

\[\begin{aligned}\dot{v}&=v((d\kappa_{2}-\kappa_{1})+(\kappa_{2}-\kappa_{3})v-\kappa_{3}Cw),\\ \dot{w}&=w(-\kappa_{1}+\kappa_{2}v).\end{aligned} \tag{26}\]

Equation (26) has a positive equilibrium if and only if \(C>0\) and \(d\kappa_{2}^{2}>\kappa_{1}\kappa_{3}\), and in this case the positive equilibrium, \((\bar{v},\bar{w})\), is unique, and the Jacobian determinant of the system evaluated at this equilibrium is positive. For \(C>0\), the predator-prey system (26) has a Lyapunov function \(V(v,w)=\kappa_{2}(v-\bar{v}\log v)+\kappa_{3}C(w-\bar{w}\log w)\), which is convex, attains its unique minimum at \((\bar{v},\bar{w})\), and satisfies \(\dot{V}=\kappa_{2}(\kappa_{2}-\kappa_{3})(v-\bar{v})^{2}\), see e.g. [16, Section 2.7]. Therefore, for \(\kappa_{2}<\kappa_{3}\), the positive equilibrium is globally asymptotically stable (on \(\mathbb{R}_{+}^{2}\)), for \(\kappa_{2}=\kappa_{3}\) it is a global center, and for \(\kappa_{2}>\kappa_{3}\), it is a global repellor. Therefore systems in the Lifted LVA family (24) undergo a vertical Andronov-Hopf bifurcation, as \(\frac{\kappa_{2}}{\kappa_{3}}\) increases through the value \(1\), provided \(\kappa_{1}<d\kappa_{2}\). This happens simultaneously in all stoichiometric classes with \(C>0\). In particular, isolated periodic orbits cannot occur in this family.

### Main result

We are now in the position to state our main result. Apart from the results already proved, the proof also relies on three lemmas which are stated and proved in Section 5.4.

**Theorem 10**.: _Assume that a three-reaction, quadratic, trimolecular, mass-action system with no trivial species has a periodic orbit. Then, up to a permutation of the species, the differential equation is one of the following._

_(I)_
\[\begin{aligned}\dot{x}&=x(\kappa_{1}c-\kappa_{2}y)\\ \dot{y}&=y(\kappa_{2}dx-\kappa_{3})\end{aligned}\qquad\kappa_{1},\kappa_{2},\kappa_{3}>0\]

_(II)_
\[\begin{aligned}\dot{x}&=x(\kappa_{1}z-\kappa_{2}y)\\ \dot{y}&=y(\kappa_{2}x-\kappa_{3}z)\\ \dot{z}&=z(\kappa_{3}y-\kappa_{1}x)\end{aligned}\qquad\kappa_{1},\kappa_{2},\kappa_{3}>0\]

_(III)_
\[\begin{aligned}\dot{x}&=x(\kappa_{1}x-\kappa_{2}y)\\ \dot{y}&=y(\kappa_{2}x-\kappa_{2}z)\\ \dot{z}&=y(\kappa_{2}x-\kappa_{2}z)\end{aligned}\qquad 0<\kappa_{1}<\kappa_{2}\]

_Note that_

* _equation (I) is the generalised Lotka ODE (_21_), and the unique positive equilibrium is a global center,_
* _equation (II) is the Ivanova ODE (_2_), and in each positive stoichiometric class_ \(x+y+z=C>0\) _the unique positive equilibrium is a global center,_
* _equation (III) is the Lifted LVA ODE (_3_) with_ \(\kappa_{2}=\kappa_{3}>\kappa_{1}\)_, and the unique positive equilibrium in each positive stoichiometric class_ \(z-y=C>0\) _is a global center, while there is no positive equilibrium in the positive stoichiometric classes_ \(z-y=C\leq 0\)_._

_In particular, mass-action systems satisfying the conditions of the theorem admit no isolated periodic orbits._

Proof.: Recall from Section 2.4 that a three-reaction network, whose mass-action system admits a periodic orbit, has rank two.
Hence, by remarks in Section 3.1, the network must be dynamically nontrivial, and any periodic orbit of a system satisfying the hypotheses of the theorem must be positive. We distinguish between two cases. * In case no reaction in the network is of the form \(2\mathsf{X}_{i}\to 3\mathsf{X}_{i}\), by Lemma 6 we find that the network is either the generalised Lotka reactions (11) with \(c,d\in\{1,2\}\), leading to (I), or the Ivanova reactions, leading to (II). Note that none of the networks in (12) is trimolecular. * Assume now that the reaction \(2\mathsf{X}\to 3\mathsf{X}\) is present. In case there are three species, the differential equation is (III) by Lemma 11 below and the discussion in Section 5.2. In case there are two, four, or at least five species then the system admits no periodic orbit by Lemmas 8, 12 and 13, respectively. Notice that Theorem 1 is a corollary of Theorem 10. Furthermore, Theorem 10 is an extension of Theorem 7 to an arbitrary number of species. In fact, even though all networks in Theorem 10 are assumed to have no trivial species, the conclusion that three-reaction, quadratic, trimolecular, mass-action systems admit no isolated periodic orbits clearly holds true without this assumption. Based on Theorem 10 we can list all three-reaction, quadratic, trimolecular \((n,3,2)\) networks (including those with some trivial species), whose mass-action system admits a periodic orbit: there are \(16\) such networks, see Figure 1. Figure 1: The list of all three-reaction, quadratic, trimolecular \((n,3,2)\) networks whose mass-action system has a periodic orbit for some rate constants. There are sixteen such networks. Four are members of the family (21), eight are derived from these by adding a trivial species, and two are obtained by adding two trivial species. The latter two are the only ones with four species. The Ivanova reactions and the Lifted LVA complete the list. Notice that the only ones that are bimolecular are the Lotka reactions (1) and the Ivanova reactions (2). ### Networks with the reaction \(2\mathsf{X}\to 3\mathsf{X}\) In this subsection, we prove three lemmas about quadratic \((n,3,2)\) networks without trivial species that include the reaction \(2\mathsf{X}\to 3\mathsf{X}\); these results are used in the proof of Theorem 10 above. In Lemma 11 we discuss the case \(n=3\) and show that the only networks which lead to a periodic orbit are those in the Lifted LVA family (24). In Lemma 12 we prove that when \(n=4\) any positive equilibrium must be a saddle. Finally, in Lemma 13 we find that no network admits a positive equilibrium when \(n\geq 5\). **Lemma 11**.: _Suppose a quadratic \((3,3,2)\) mass-action system with no trivial species includes the reaction \(2\mathsf{X}\to 3\mathsf{X}\) and has a periodic orbit for some rate constants. Then the network is a member of the Lifted LVA family (24)._ Proof.: Consider a network satisfying the assumptions of the lemma. Any periodic orbit must be positive and the network must be dynamically nontrivial (see Section 3.1); consequently each species is gained in at least one reaction and is lost in at least one reaction. In \(2\mathsf{X}\to 3\mathsf{X}\), an \(\mathsf{X}\) is gained. W.l.o.g. assume that \(\mathsf{X}\) is lost in the second reaction. Then the source of that reaction is either \(\mathsf{X}\) or \(\mathsf{X}+\mathsf{Y}\) (note that the second source cannot be \(2\mathsf{X}\) by Lemma 3). Moreover, the target of the reaction does not include \(\mathsf{X}\). 
Suppose that the source of the second reaction is \(\mathsf{X}\). As the network is dynamically nontrivial, the third reaction must be \(\mathsf{Y}+\mathsf{Z}\to a\mathsf{X}\) (for some \(a\geq 0\)). To guarantee that the rank of the network is two, the second reaction must then be \(\mathsf{X}\to b\mathsf{Y}+b\mathsf{Z}\) (for some \(b\geq 1\)). As \(\ker\Gamma\) is spanned by \([1-ab,1,1]^{\top}\), the network is dynamically nontrivial if and only if \(a=0\). A short computation shows that the reduced Jacobian determinant (7) at a positive equilibrium \((x,y,z)\) equals \(-\frac{b}{xy}-\frac{b}{xz}\), which is negative. Thus, any positive equilibrium is a saddle, contradicting the occurrence of a periodic orbit (see Section 3.1). Suppose now that the source of the second reaction is \(\mathsf{X}+\mathsf{Y}\) and additionally suppose that \(\mathsf{Y}\) is lost in that reaction. Then \(\mathsf{Y}\) is gained in the third reaction. Further, \(\mathsf{Z}\) must be gained in the second reaction and lost in the third one. The general form of the network is \[2\mathsf{X} \longrightarrow 3\mathsf{X}\] \[\mathsf{X}+\mathsf{Y} \longrightarrow c\mathsf{Z}\] \[\alpha\mathsf{X}+\beta\mathsf{Y}+\gamma\mathsf{Z} \longrightarrow(\alpha+a)\mathsf{X}+(\beta+b)\mathsf{Y}+( \gamma-bc)\mathsf{Z}\] where the stoichiometric coefficient \(\gamma-bc\) is taken to ensure \(\operatorname{rank}\Gamma=2\). The parameters satisfy \[b\geq 1,c\geq 1,\gamma\geq 1,\alpha+\beta\leq 1.\] Examining the stoichiometric matrix, we find that the network is dynamically nontrivial if and only if \(b-a>0\) (recall that \(b\geq 1\)). A short calculation shows that the reduced Jacobian determinant (7) at a positive equilibrium \((x,y,z)\) equals \[-(b-a)b\left(\frac{2-\alpha-\beta}{xy}+\frac{c\gamma}{xz}\right),\] which is negative, i.e., the equilibrium is a saddle. This contradicts the occurrence of a periodic orbit (see Section 3.1). Finally, suppose that the source of the second reaction is \(\mathsf{X}+\mathsf{Y}\) and additionally suppose that \(\mathsf{Y}\) is gained in that reaction. Taking also into account that the network is dynamically nontrivial and has rank two, the network must belong to the Lifted LVA family (24). **Lemma 12**.: _Suppose a quadratic \((4,3,2)\) mass-action network with no trivial species contains the reaction \(2\mathsf{X}\to 3\mathsf{X}\) and has a positive equilibrium. Then the network is_ \[2\mathsf{X} \longrightarrow 3\mathsf{X}\] \[\mathsf{X}+\mathsf{Y} \longrightarrow\mathsf{Z}+\mathsf{W} \tag{27}\] \[\mathsf{Z}+\mathsf{W} \longrightarrow\mathsf{Y}\] _For any choice of rate constants, the corresponding mass-action system has exactly one positive equilibrium in every positive stoichiometric class, and this equilibrium is a saddle. Consequently, the system admits no periodic orbit._ Proof.: Consider a network satisfying the hypotheses of the lemma. In \(2\mathsf{X}\to 3\mathsf{X}\), species \(\mathsf{X}\) is gained and therefore, as the network admits positive equilibria and hence must be dynamically nontrivial, there has to be a reaction where \(\mathsf{X}\) is lost. The source of that reaction must be either \(\mathsf{X}\), \(2\mathsf{X}\), or \(\mathsf{X}+\mathsf{Y}\). In case the source of the second reaction is \(\mathsf{X}\) or \(2\mathsf{X}\), the species \(\mathsf{Y}\), \(\mathsf{Z}\), \(\mathsf{W}\) can only be gained there. However, then each of \(\mathsf{Y}\), \(\mathsf{Z}\), \(\mathsf{W}\) must be lost in the third reaction, which is impossible in a quadratic network. 
In case the source of the second reaction is \(\mathsf{X}+\mathsf{Y}\), the species \(\mathsf{Z}\) and \(\mathsf{W}\) can be lost only in the third reaction, the source of that reaction must be \(\mathsf{Z}+\mathsf{W}\). The general form of the network is then \[2\mathsf{X} \longrightarrow 3\mathsf{X}\] \[\mathsf{X}+\mathsf{Y} \longrightarrow c\mathsf{Z}+d\mathsf{W}\] \[\mathsf{Z}+\mathsf{W} \longrightarrow a\mathsf{X}+b\mathsf{Y}\] with \(a,b,c,d\geq 0\). We easily find that the network is dynamically nontrivial if and only if \(a=0\) and \(b=c=d=1\), which gives the network (27). By a short calculation, this network has a unique positive equilibrium in every positive stoichiometric class. Notice that network (27) was used in Section 3.3 as an illustrative example, where we concluded that any positive equilibrium is a saddle. Hence, by the observations in Section 3.1, the system admits no periodic orbit. **Lemma 13**.: _For \(n\geq 5\), any \((n,3,2)\) network without trivial species, and including the reaction \(2\mathsf{X}\to 3\mathsf{X}\), must be dynamically trivial. Consequently, the corresponding mass-action system admits no periodic orbit._ Proof.: As at the beginning of the proofs of Lemmas 11 and 12, the sources of the first and second reactions can involve only two species in total. Also, because of the bimolecularity of the sources, the third reaction can also have at most two different species in its source. Hence at least one of the species does not appear in any of the sources, and its concentration is nondecreasing. Since it is, by assumption, not a trivial species, the network is dynamically trivial and admits no periodic orbit (see Section 3.1). ## 6 Conclusions In this paper, we have studied quadratic networks with three reactions, and identified all trimolecular mass-action systems in this class which admit a periodic orbit. In all of these systems, any nearby orbits of a periodic orbit are periodic, too. Thus, the existence of an isolated periodic orbit in a quadratic mass-action system with three reactions implies that at least one target complex has a molecularity of four or more. In fact, in the two-species case (with no assumption on the target molecularities), we classified all networks that admit a nondegenerate Andronov-Hopf bifurcation, and thus, a limit cycle, see cases 9 and 10 in Theorem 9. However, we leave it open whether the networks in cases 9 and 10 admitting no Andronov-Hopf bifurcations can have a limit cycle. For example, we do not know whether the system \[2\mathsf{X}\xrightarrow{\kappa_{1}}3\mathsf{X}+2\mathsf{Y}\] \[\mathsf{X}+\mathsf{Y}\xrightarrow{\kappa_{2}}0\] \[\mathsf{Y}\xrightarrow{\kappa_{3}}\mathsf{X}+\mathsf{Y} \qquad\qquad\qquad\dot{x}=\kappa_{1}x^{2}-\kappa_{2}xy+\kappa_{3}y,\] admits a limit cycle. The networks in cases 9 and 10 in Theorem 9 that admit a nondegenerate Andronov-Hopf bifurcation can be enlarged by adding a new (nontrivial) species in a way that the rank of the network is preserved [5]. The resulting quadratic \((3,3,2)\) networks admit a supercritical Andronov-Hopf bifurcation by [5, Remark 6]. 
For example, the networks \[\begin{CD}2\mathsf{X}@>{\kappa_{1}}>{}>3\mathsf{X}+\mathsf{Y}\\ \mathsf{X}+\mathsf{Y}@>{\kappa_{2}}>{}>\mathsf{Y}+\mathsf{Z}\\ \mathsf{Y}+\mathsf{Z}@>{\kappa_{3}}>{}>\mathsf{0}\end{CD}\qquad\text{and}\qquad \begin{CD}2\mathsf{X}@>{\kappa_{1}}>{}>4\mathsf{X}+3\mathsf{Y}+\mathsf{Z}\\ \mathsf{X}+\mathsf{Y}@>{\kappa_{2}}>{}>\mathsf{0}\\ \mathsf{Z}@>{\kappa_{3}}>{}>\mathsf{X}\end{CD} \tag{28}\] obtained from (22) with \(d=0\) and (23) with \((c,d)=(1,0)\), respectively, both admit a supercritical Andronov-Hopf bifurcation, and thus, a stable limit cycle. We leave it open whether there are quadratic \((3,3,2)\) networks (with no assumption on the target molecularities) that admit a nondegenerate Andronov-Hopf bifurcation which is not inherited from a smaller network as in these examples. Interestingly, the octomolecular network on the right of (28) admits not only a supercritical Andronov-Hopf bifurcation, but also a Bogdanov-Takens bifurcation [17, Section 8.4], and hence, a homoclinic bifurcation. The stoichiometric classes are given by \(x-y+z=C\) for \(C\in\mathbb{R}\), and the set of positive equilibria is the curve \(\left\{\left(t,\frac{3\kappa_{1}}{\kappa_{2}}t,\frac{\kappa_{1}}{\kappa_{3}}t^{2}\right):t>0\right\}\), which intersects the stoichiometric classes in \(0\), \(1\), or \(2\) points. One may verify that for fixed \(\kappa_{2}>0\) and \(C<0\), a supercritical Bogdanov-Takens bifurcation occurs at \[(\kappa_{1},\kappa_{3})=\kappa_{2}\left(\frac{3+\sqrt{6}}{3},\frac{-2C}{3+\sqrt{6}}\right).\] We conclude this paragraph with the observation that the mass-action differential equation of the octomolecular network in question is identical to that of \[\begin{CD}2\mathsf{X}@>{2\kappa_{1}}>{}>3\mathsf{X}\\ 2\mathsf{X}@>{3\kappa_{1}}>{}>2\mathsf{X}+\mathsf{Y}\\ 2\mathsf{X}@>{\kappa_{1}}>{}>2\mathsf{X}+\mathsf{Z}\\ \mathsf{X}+\mathsf{Y}@>{\kappa_{2}}>{}>\mathsf{0}\\ \mathsf{Z}@>{\kappa_{3}}>{}>\mathsf{X}\end{CD}\] which is a five-reaction, trimolecular system with restrictions on its rate constants. The construction at the end of the previous paragraph works in general: the mass-action differential equation of any quadratic network (with no assumption on the target molecularities) is identical to that of some trimolecular, quadratic network with some restrictions on its rate constants. Thus, claims about systems with high target molecularity can often be reduced to claims about trimolecular systems, at the cost of increasing the total number of reactions.
Indeed, given any mass-action system, to obtain an equivalent system with trimolecular targets we may replace each reaction of the form \[\sum_{i=1}^{n}a_{i}\mathsf{X}_{i}\stackrel{{\kappa}}{{ \longrightarrow}}\sum_{i=1}^{n}(a_{i}+c_{i})\mathsf{X}_{i}\quad\text{ with }\sum_{i=1}^{n}(a_{i}+c_{i})\geq 4\] by the (at most) \(n\) reactions \[\sum_{i=1}^{n}a_{i}\mathsf{X}_{i}\stackrel{{\kappa|c_{j}|}}{{ \longrightarrow}}(a_{j}+\operatorname{sgn}c_{j})\mathsf{X}_{j}+\sum_{i\neq j }a_{i}\mathsf{X}_{i}\quad\text{ for }j=1,\ldots,n\text{ with }c_{j}\neq 0.\] For example, the three-reaction, tetramolecular network (4) gives rise to the same mass-action differential equation as \[\begin{array}{c}2\mathsf{X}\stackrel{{\kappa_{1}}}{{\longrightarrow }}3\mathsf{X}\\ 2\mathsf{X}\stackrel{{\kappa_{1}}}{{\longrightarrow}}2\mathsf{X}+ \mathsf{Y}\\ \mathsf{X}+\mathsf{Y}\stackrel{{\kappa_{2}}}{{\longrightarrow}} \mathsf{Y}\\ \mathsf{Y}\stackrel{{\kappa_{3}}}{{\longrightarrow}}0\end{array}\] which is trimolecular, has one more reaction, and has a restriction on its rate constants. As this example and the first network in Section 4.4 show, there exist quadratic, trimolecular \((2,4,2)\) networks admitting a nondegenerate Andronov-Hopf bifurcation with mass-action kinetics. In future work, we plan to find all such networks, and identify which of the corresponding mass-action systems admit Bogdanov-Takens bifurcation.
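The reaction-splitting construction described above is purely mechanical and easy to automate. The following sketch is our own illustration (the dictionary-based encoding of complexes and the function names are not from the paper); it performs the splitting for a single reaction and reproduces the decomposition of the tetramolecular reaction of network (4):

```python
from typing import Dict, List, Tuple

Complex = Dict[str, int]
Reaction = Tuple[Complex, Complex, float]          # (source, target, rate constant)

def split_to_trimolecular(reaction: Reaction) -> List[Reaction]:
    """Replace one reaction by reactions with the same source, each changing a
    single species by +-1, with rate constant kappa * |c_j|, where c = target - source."""
    source, target, kappa = reaction
    species = set(source) | set(target)
    net = {s: target.get(s, 0) - source.get(s, 0) for s in species}
    replacements = []
    for j, c_j in net.items():
        if c_j == 0:
            continue
        sgn = 1 if c_j > 0 else -1
        new_target = dict(source)
        new_target[j] = source.get(j, 0) + sgn      # target is (a_j + sgn c_j) X_j + sum_{i != j} a_i X_i
        replacements.append((source, new_target, kappa * abs(c_j)))
    return replacements

def show(cplx: Complex) -> str:
    terms = [f"{n if n > 1 else ''}{s}" for s, n in sorted(cplx.items()) if n > 0]
    return " + ".join(terms) if terms else "0"

# The tetramolecular reaction 2X -> 3X + Y of network (4) splits into
# 2X -> 3X and 2X -> 2X + Y, both keeping the original rate constant kappa_1 = 1:
for src, tgt, k in split_to_trimolecular(({"X": 2}, {"X": 3, "Y": 1}, 1.0)):
    print(f"{show(src)} -> {show(tgt)}    rate constant {k}")
```

Summing the contributions \(\kappa|c_{j}|\,x^{a}\operatorname{sgn}(c_{j})\,e_{j}\) of the replacement reactions over \(j\) recovers \(\kappa x^{a}c\), which is why the replacement set generates the same mass-action differential equation as the original reaction.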
2305.15556
Optimal Generators for Quantum Sensing
We propose a computationally efficient method to derive the unitary evolution that a quantum state is most sensitive to. This allows one to determine the optimal use of an entangled state for quantum sensing, even in complex systems where intuition from canonical squeezing examples breaks down. In this paper we show that the maximal obtainable sensitivity using a given quantum state is determined by the largest eigenvalue of the quantum Fisher information matrix (QFIM) and, importantly, the corresponding evolution is uniquely determined by the coinciding eigenvector. Since we optimize the process of parameter encoding rather than focusing on state preparation protocols, our scheme is relevant for any quantum sensor. This procedure naturally optimizes multiparameter estimation by determining, through the eigenvectors of the QFIM, the maximal set of commuting observables with optimal sensitivity.
Jarrod T. Reilly, John Drew Wilson, Simon B. Jäger, Christopher Wilson, Murray J. Holland
2023-05-24T20:42:38Z
http://arxiv.org/abs/2305.15556v3
# Optimal Generators for Quantum Sensing ###### Abstract We propose a computationally efficient method to derive the unitary evolution that a quantum state is most sensitive to. This allows one to determine the optimal use of an entangled state for quantum sensing, even in complex systems where intuition from canonical squeezing examples breaks down. In this paper we show that the maximal obtainable sensitivity using a given quantum state is determined by the largest eigenvalue of the quantum Fisher information matrix (QFIM) and, importantly, the corresponding evolution is uniquely determined by the coinciding eigenvector. Since we optimize the process of parameter encoding rather than focusing on state preparation protocols, our scheme is relevant for _any_ quantum sensor. This procedure naturally optimizes multiparameter estimation by determining, through the eigenvectors of the QFIM, the maximal set of commuting observables with optimal sensitivity. _Introduction.--_ Quantum sensing is revolutionizing the way we understand and interact with the world around us. This is enabled by recent major advances in quantum technologies such as atomic clocks [1; 2], inertial sensors [3; 4; 5], gravitational wave detectors [6; 7; 8; 9], and biosensors and tissue imaging devices [10]. The development of sensing devices by way of quantum parameter estimation is at the core of the ever-growing field of quantum metrology [11]. Moreover, quantum sensors offer a true quantum advantage over classical counterparts as quantum entanglement may be used to surpass the standard quantum limit (SQL), which is the fundamental limit that arises from shot noise in measurements of uncorrelated quantum states. Entangled states have also been shown to have increased robustness against fluctuations of parameters during the measurement process [12; 13; 14]. One of the greatest challenges to the development of sensing devices with a quantum advantage is the generation of metrologically useful quantum entanglement. Consequently, current schemes rely on dynamics in which the quantum state evolution can be intuitively understood. This provides insight about the final state so that it may then be manipulated to utilize its entanglement for a given sensing purpose. For example, analytic solutions have been developed for the well-known one-axis twisting (OAT) Hamiltonian [11; 15; 16; 17] to track the anti-squeezed axis, as the resultant state is most sensitive to rotations about this axis. This anti-squeezed axis can then be rotated to sense a particular parameter. Many formal theoretical techniques have been developed to determine the metrological usefulness of a state for a given sensing purpose [11; 18]. In particular, the quantum Fisher information (QFI) represents the maximum achievable precision of measuring a specific encoded parameter [19] and serves as a sufficient entanglement witness [20; 21]. However, this assumes a particular form of the evolution and thus fails to shed light on what evolution is optimal. For example, in higher dimensional systems where dynamics are more complicated than OAT, the intuition from canonical squeezing examples breaks down. This is the case in systems where the dynamics cannot be represented on a single collective Bloch sphere [22; 23; 24; 25; 26; 27] and, as a result, the metrological gain from entanglement cannot be readily determined from previous techniques in an efficient manner. 
In this Letter, we develop a procedure that finds the physical evolution that a prepared quantum state \(\hat{\rho}\) is most sensitive to. To do this, we utilize the quantum Fisher information matrix (QFIM) in which the diagonal elements are simply the QFI for each single parameter [28; 29], while the off-diagonal elements are the degree of correlation between two parameters [30]. More fundamentally, the QFIM has a deep connection to distances between quantum states in a Hilbert space in the language of quantum state geometry [31; 32], where it provides a metric. We use this geometric formalization to show that one can find the evolution a given state is most sensitive to by simply diagonalizing the QFIM. The largest eigenvalue of the QFIM is the maximum achievable QFI for single parameter estimation with the state and the corresponding eigenvector is related to the evolution that achieves this maximum sensitivity. We benchmark our procedure by first applying it to OAT where we show that well-known squeezing quantities pop out naturally from this formalism. We then analyze a higher dimensional SU(4) system that cannot be represented on a single Bloch sphere. We further use the SU(4) structure to demonstrate the potential of beyond SQL multiparameter estimation utilizing the QFIM eigenvectors. This procedure may then be used to sense, for example, vector or tensorial quantities [33; 34; 35; 36; 37; 5; 38]. To be clear, the purpose of our work is not to propose protocols for the creation of entangled states for quantum sensing. Instead, we consider the state fixed and seek to quantify its sensitivity to all possible evolutions, which allows us to determine its full potential for quantum sensing. This makes our proposed method useful for _any_ preparation scheme of metrologically useful entangled states, assuming the subsequent metrological application is a continuous process. For practical purposes, this method means one could measure a corresponding covariance matrix to a pure state, diagonalize it, and then rotate the state until the optimal generator determined here matches the Hamiltonian for a given sensing purpose. This is a natural consideration because highly entangled states are difficult to engineer while rotations of entangled states are more easily controlled [39; 40]. The utility of optimization via QFIM diagonalization is immediately clear when one considers the dimensionality of well-known unitary group structures that are often used as the basis for quantum metrological interactions [41; 42; 43]. For the case of SU(\(n\)) systems, one has that the dimensionality of the operator space is \(\dim[\mathfrak{su}(n)]=n^{2}-1\). Here, \(\mathfrak{su}(n)\) is the algebra that generates the group SU(\(n\)) under exponentiation. To find the optimal generator of evolution, one would have to optimize over the span of the \(n^{2}-1\) operators. Subject to normalization, this is equivalent to searching an entire \(\mathcal{S}^{n^{2}-2}\) hypersphere. Instead, the QFIM procedure only requires one to find the eigenvector with the largest eigenvalue of an \((n^{2}-1)\times(n^{2}-1)\) matrix, which is drastically simpler. _Formalism.--_ We start now by introducing the general formalism. To work out a procedure to find the optimal generator, we adopt the language of quantum state geometry [44; 45; 46; 47] which we provide a description of in the Supplemental Material (SM) [48]. Consider a Hilbert space \(\mathcal{H}\) of dimension \(d\), with a set of quantum states \(\rho(\mathbf{x})\). 
Here, the states are parameterized by some ordered list of \(n\) coordinates, \(\mathbf{x}=(x^{1},\ldots,x^{n})\), that are associated with physical parameters. The set of \(\rho(\mathbf{x})\) forms a state-manifold which may be equipped with a unique Riemannian metric in the form of the QFIM [49], \[ds^{2}=\mathcal{F}_{\mu\nu}\ dx^{\mu}dx^{\nu}, \tag{1}\] with the definitions \[\mathcal{F}_{\mu\nu}=\frac{1}{2}\operatorname{Tr}\Bigl{[}\rho\{\hat{L}_{\mu}, \hat{L}_{\nu}\}\Bigr{]},\quad\partial_{\mu}\rho=\frac{1}{2}\left(\rho\hat{L} _{\mu}+\hat{L}_{\mu}\rho\right). \tag{2}\] Here, \(\{\hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A}\) is the anti-commutator, \(\partial_{\mu}=\partial/\partial x^{\mu}\), and \(\hat{L}_{\mu}\) is the Symmetric Logarithmic Derivative [46] with respect to the coordinate \(x^{\mu}\). The set of tangent vector fields on the state-manifold represent all potential quantum operations under which the state may evolve. This is physically equivalent to a set of derivatives, such that any tangent vector may be expanded as \(\vec{V}=V^{\mu}\partial_{\mu}\). From Eq. (1), we can then understand the QFI metric \(\mathcal{F}_{\mu\nu}\) as the inner product between vectors at the point \(\mathbf{x}\)[48]: \((\vec{V},\vec{W})_{\mathbf{x}}=\mathcal{F}_{\mu\nu}V^{\mu}W^{\nu}\). In other words, when the QFIM is used to define the interval \(ds^{2}\), it can be intuitively understood as a differential path length across the quantum state space. A natural consequence of this interpretation of the QFIM is that the vector whose magnitude is maximized under the QFIM's inner product, labeled \(\vec{\mathcal{O}}\), uniquely determines the infinitesimal rotation which changes the state most rapidly. The magnitude of \(\vec{\mathcal{O}}\) is then inversely proportional to the quantum Cramer-Rao bound (QCRB), and thus determines the evolution in which the quantum state is most sensitive to. Importantly, calculating \(\vec{\mathcal{O}}\) and its magnitude is equivalent to finding the eigenvector with the largest eigenvalue of \(\mathcal{F}_{\mu\nu}\) when treated as a matrix [48], \[\mathcal{F}\vec{\mathcal{O}}^{\mu}=\lambda_{\max}\vec{\mathcal{O}}^{\mu}, \tag{3}\] where \(\vec{\mathcal{O}}^{\mu}\) is the column vector representation of \(\mathcal{O}^{\mu}\). In the case that the parameterization of a state may be described unitarily, \(\rho(\mathbf{x})\equiv U(\mathbf{x})\rho(0)U^{\dagger}(\mathbf{x})\), we may further simplify this process since the geometric structure is inherited from the unitary group U(\(\mathcal{H}\)) [50; 51; 48]. If \(\rho(0)=|\Psi\rangle\!\langle\Psi|\) for some prepared state \(|\Psi\rangle\), we can study pure states \(U(\mathbf{x})\,|\Psi\rangle\) belonging to the state-manifold. Here, the expression for \(\mathcal{F}_{\mu\nu}\) simplifies to \[\begin{split}\mathcal{F}_{\mu\nu}&=4\operatorname{ cov}_{\Psi}(\hat{G}_{\mu},\hat{G}_{\nu})\\ &=2\langle\{\hat{G}_{\mu},\hat{G}_{\nu}\}\rangle_{\Psi}-4\langle \hat{G}_{\mu}\rangle_{\Psi}\langle\hat{G}_{\nu}\rangle_{\Psi},\end{split} \tag{4}\] which matches the definition of the Fubini-Study metric [52]. We can further understand derivatives at \(\mathbf{x}\) as \[\partial_{\mu}U(\mathbf{x})\,|\Psi\rangle=-i\hat{G}_{\mu}U(\mathbf{x})\,|\Psi \rangle\,,\quad\partial_{\mu}\equiv-i\hat{G}_{\mu}, \tag{5}\] where \(-i\hat{G}_{\mu}\in\mathfrak{u}(\mathcal{H})\) belongs to the Lie algebra. Therefore, any vector \(\vec{V}\) naturally defines a generator on the Hilbert space according to Eq. 
(5), where \(V^{\mu}\) is a set of coefficients associated with the observables \(\hat{G}_{\mu}\) in a Hamiltonian. The determination of \(\vec{\mathcal{O}}=-i\mathcal{O}^{\mu}\hat{G}_{\mu}\) is thus equivalent to finding the optimal generator \(\vec{\mathcal{G}}=\mathcal{O}^{\mu}\hat{G}_{\mu}\). Here, the QCRB may be artificially lowered by choosing larger coefficients \(\vec{\mathcal{O}}^{\mu}\) and claiming this leads to a metrological advantage. As a result, we fix the condition that \(\vec{\mathcal{O}}\) is normalized with respect to the operator basis, \(\sum_{\mu}\left(\mathcal{O}^{\mu}\right)^{2}=1\). By further defining a suitable norm \(\mathcal{C}\) such that \(\operatorname{Tr}\Bigl{[}\hat{G}_{\mu}\hat{G}_{\nu}\Bigr{]}=\mathcal{C}\delta_ {\mu\nu}\)[48], the SQL is formally defined for SU(\(n\)) systems at the particle number \(N\). This also defines the Heisenberg limit (HL), which is the fundamental sensitivity bound originating from the Heisenberg uncertainty principle [53; 18], at \(N^{2}\). _Squeezing in a SU(2) system.--_ To demonstrate the validity of our QFIM diagonalization procedure, we first consider states created by nonlinear interactions between \(N\) two-level particles with an underlying SU(2) structure. Each particle's states are labeled with ground state \(|d\rangle\) and excited state \(|u\rangle\). We use the Schwinger boson representation [54] for two modes with creation operators \(\hat{d}^{\dagger}\) and \(\hat{u}^{\dagger}\) representing the "creation" of a particle in the states \(|d\rangle\) and \(|u\rangle\), respectively. As shown in Ref. [15], squeezing a coherent spin state (CSS), \[|\theta,\phi\rangle=\frac{1}{\sqrt{N!}}\left[\cos\left(\frac{\theta}{2}\right) \hat{u}^{\dagger}+\sin\left(\frac{\theta}{2}\right)e^{i\phi}\hat{d}^{\dagger} \right]^{N}|0\rangle\,, \tag{6}\] about a single axis may be accomplished with a nonlinear interaction. In particular, the OAT Hamiltonian \[\hat{H}_{\rm OAT}=\hbar\chi\hat{J}_{z}^{2}=\frac{\hbar\chi}{4}\left(\hat{u}^{ \dagger}\hat{u}-\hat{d}^{\dagger}\hat{d}\right)^{2}, \tag{7}\] correlates quantum fluctuations by twisting the northern and southern hemispheres of the collective Bloch sphere in opposite directions, leading to a squeezed state with particle-particle entanglement. We demonstrate squeezing of a CSS initially oriented along \(\hat{J}_{x}=(\hat{u}^{\dagger}\hat{d}+\hat{d}^{\dagger}\hat{u})/2\), shown in Fig. 1(a), which reaches an optimally squeezed state at time \(t=1/(\chi N^{\frac{2}{3}})\), shown in Fig. 1(b). We now examine this well-known squeezing example through the lens of QFIM diagonalization. The operator basis of the SU(2) group is simply the collective operators \(\hat{G}_{\mu}\in\{\hat{J}_{x},\hat{J}_{y},\hat{J}_{z}\}\), where \(\hat{J}_{y}=i(\hat{d}^{\dagger}\hat{u}-\hat{u}^{\dagger}\hat{d})/2\). Therefore, Eq. (3) only requires the diagonalization of a \(3\times 3\) matrix \(\mathbf{\mathcal{F}}\). Figure 1(c) shows the three eigenvalues of \(\mathbf{\mathcal{F}}\) during the squeezing process. At \(t=0\), the eigenvectors \(\hat{Y}^{\mu}=(0,1,0)^{T}\) and \(\hat{Z}^{\mu}=(0,0,1)^{T}\) have degenerate eigenvalues at the SQL, \(\lambda_{\rm max}=N\). The third linearly independent eigenvector \(\hat{X}^{\mu}=(1,0,0)^{T}\) has a zero eigenvalue, showing the underlying symmetry of the initial CSS. The degenerate eigenvalues then split as squeezing begins. As shown in Fig. 
1(c), we find perfect agreement between the largest eigenvalue of \(\mathbf{\mathcal{F}}\) and the analytical solution [11; 16] during the initial squeezing \(t\lesssim 1/(\chi\sqrt{N})\), \[\mathcal{F}_{\rm OAT}=N+\frac{N(N-1)}{4}\left(A+\sqrt{A^{2}+B^{2}}\right), \tag{8}\] with \(A=1-\cos^{N-2}(2\chi t)\) and \(B=4\sin(\chi t)\cos^{N-2}(\chi t)\). We emphasize that this analytical result is found using the exact solution of the squeezing dynamics, which allows one to extract the maximum QFI. Instead, with the help of the QFIM, we do not require any such insight into the state and yet can still efficiently find the maximum QFI numerically, obtaining the eigenvector shown in Fig. 1(d), which specifies the optimal generator \(\hat{\mathcal{G}}\). However, we will see that the QFIM eigendecomposition offers its own insights into symmetries at different points of a given system's dynamics. After \(t=0\), the symmetry of the CSS is broken, and the optimal generator jumps to \(\hat{\mathcal{G}}=\sin(\delta)\hat{J}_{z}+\cos(\delta)\hat{J}_{y}\), where we find perfect agreement with the expression \(\delta=\arctan(B/A)/2\) given in Ref. [15]. As squeezing progresses, the optimal generator then rotates towards the equator. At \(t\sim 2/(\chi\sqrt{N})\), the first two eigenvalues become degenerate with the associated eigenvectors \(\hat{X}^{\mu}\) and \(\hat{Y}^{\mu}\), once again showing an underlying symmetry of the state [17]. This symmetry is then broken at \(t\sim\pi/(2\chi)-2/(\chi\sqrt{N})\), causing the two largest eigenvalues to split and a discontinuous jump of the optimal rotation axis from \(\hat{Y}^{\mu}\) to \(\hat{X}^{\mu}\) [purple arrow in Fig. 1(d)] [55]. Therefore, Eq. (8) no longer calculates the maximum QFI because it corresponds only to rotations about \(\hat{J}_{y}\) when \(t\gtrsim 2/(\chi\sqrt{N})\). We find that \(\mathcal{F}_{\rm OAT}\) follows the second eigenvalue down to the SQL while the largest eigenvalue grows to the HL, \(\lambda_{\rm max}=N^{2}\). The final three eigenvalues, one at HL and two at SQL, are only possible in SU(2) systems with a NOON state, which matches the analysis of Ref. [17]. Having demonstrated that the well-known results of OAT follow naturally from the diagonalization of the QFIM, we now turn to a higher dimensional system in which analytical results for the maximum QFI and optimal generator cannot readily be obtained. Figure 1: One-axis twisting with \(N=20\). (a) and (b) Collective Bloch sphere at \(t=0\) and \(t=1/(\chi N^{\frac{2}{3}})\), respectively. The color represents the overlap of the state with a CSS at each point, \(|\langle\theta,\phi|\psi(t)\rangle|^{2}\). (c) The three eigenvalues \(\lambda_{i}\) of \(\mathbf{\mathcal{F}}\). Also plotted as a black dotted-dashed line is \(\mathcal{F}_{\rm OAT}\) from Eq. (8). (d) Location of the optimal generator during the squeezing process. The color represents the QFI for the given generator. The time axis for \(t\lesssim\pi/(2\chi)-2/(\chi\sqrt{N})\) is shown with a black arrow, while the discontinuous jump at \(t\sim\pi/(2\chi)-2/(\chi\sqrt{N})\) is shown with a purple arrow. The eigenvalue then grows at this final axis for the remainder of the process. _Squeezing in higher dimensional systems.--_ We consider an \(N\)-body system in which the constituent particles now have four states \(|u\rangle\), \(|d\rangle\), \(|s\rangle\), and \(|c\rangle\). We again utilize Schwinger bosons with corresponding creation operators \(\hat{u}^{\dagger}\), \(\hat{d}^{\dagger}\), \(\hat{s}^{\dagger}\), and \(\hat{c}^{\dagger}\). Here, the linear dynamics are described by the SU(4) group with six \(\mathfrak{su}(2)\) sub-algebras. Each sub-algebra has the associated raising operators [56] \(\hat{\mathcal{Q}}^{+}=\hat{u}^{\dagger}\hat{d}\), \(\hat{\mathcal{L}}^{+}=\hat{s}^{\dagger}\hat{c}\), \(\hat{\mathcal{M}}^{+}=\hat{u}^{\dagger}\hat{c}\), \(\hat{\mathcal{N}}^{+}=\hat{s}^{\dagger}\hat{d}\), \(\hat{\mathcal{U}}^{+}=\hat{u}^{\dagger}\hat{s}\), and \(\hat{\mathcal{V}}^{+}=\hat{c}^{\dagger}\hat{d}\). These operators can then define the Hermitian components of each algebra according to \(\hat{O}_{x}=(\hat{O}^{+}+\hat{O}^{-})/2\), \(\hat{O}_{y}=-i(\hat{O}^{+}-\hat{O}^{-})/2\), and \(\hat{O}_{z}=[\hat{O}^{+},\hat{O}^{-}]/2\). We can then define an operator basis that spans \(\mathfrak{su}(4)\) with 15 linearly independent operators that satisfy the orthonormality property [48]: \[\begin{split}\hat{G}_{\mu}\in\{\hat{\mathcal{Q}}_{x},\hat{\mathcal{Q}}_{y},\hat{\mathcal{Q}}_{z},\hat{\mathcal{L}}_{x},\hat{\mathcal{L}}_{y},\hat{\mathcal{L}}_{z},\hat{\mathcal{M}}_{x},\hat{\mathcal{M}}_{y},\\ \hat{\mathcal{N}}_{x},\hat{\mathcal{N}}_{y},\hat{\mathcal{P}}_{z},\hat{\mathcal{U}}_{x},\hat{\mathcal{U}}_{y},\hat{\mathcal{V}}_{x},\hat{\mathcal{V}}_{y}\},\end{split} \tag{9}\] where \(\hat{\mathcal{P}}_{z}=(\hat{\mathcal{M}}_{z}-\hat{\mathcal{N}}_{z})/\sqrt{2}\). We prepare the state via the nonlinear interaction \[\hat{H}_{TAT}=\hbar\chi\left(\hat{\mathcal{Q}}^{+}+\hat{\mathcal{L}}^{+}\right)\left(\hat{\mathcal{Q}}^{-}+\hat{\mathcal{L}}^{-}\right)=2\hbar\chi\hat{E}^{+}\hat{E}^{-} \tag{10}\] which causes twisting about three of the axes of a 15-dimensional collective SU(4) hypersphere [27]. Here, we have introduced three SU(2) subgroups \(\mathfrak{J}\), \(\mathfrak{K}\), and \(\mathfrak{E}\) generated by algebras with raising operators \(\hat{J}^{+}=(\hat{\mathcal{M}}^{+}+\hat{\mathcal{N}}^{+})/\sqrt{2}\), \(\hat{K}^{+}=(\hat{\mathcal{U}}^{+}+\hat{\mathcal{V}}^{+})/\sqrt{2}\), and \(\hat{E}^{+}=(\hat{\mathcal{Q}}^{+}+\hat{\mathcal{L}}^{+})/\sqrt{2}\), respectively. The \(\mathfrak{J}\) and \(\mathfrak{K}\) algebras might represent the dynamics of the internal and external degrees of freedom of atoms in a dispersive Kapitza-Dirac cavity, while the \(\mathfrak{E}\) algebra represents the entanglement-generating processes [27; 48]. When we begin in a simultaneous eigenstate of \(\hat{J}_{x}\) and \(\hat{K}_{y}\), \(|\psi_{0}\rangle=(N!)^{-\frac{1}{2}}\exp\!\left[-i\hat{J}_{y}\pi/\sqrt{2}\right]\!(\hat{n}^{\dagger})^{N}\,|0\rangle\), the Hamiltonian Eq. (10) causes squeezing as well as non-trivial entanglement between \(\mathfrak{J}\) and \(\mathfrak{K}\). We display the dynamics of the QFIM eigenvalues for \(N=20\) in Fig. 2(a) and the eigenvector of the QFIM with the largest eigenvalue in Fig. 2(b). Figure 2(a) also displays the maximum QFI from operators in the \(\mathfrak{J}\), \(\mathfrak{K}\), and \(\mathfrak{E}\) subgroups, which were the generators considered in Ref. [27]. At \(t=0\), the largest six eigenvalues are degenerate at the SQL, \(\lambda_{i}=N\), indicating that the starting state is a generalized CSS [57]. This mirrors the symmetry between \(\vec{Y}^{\mu}\) and \(\vec{Z}^{\mu}\) initially in OAT, but now over three SU(2) subgroups [58]. As the squeezing begins, the largest two eigenvalues grow together until \(t\sim 1/(\chi\sqrt{N})\) where they reach a maximum value of \(\lambda_{\max}\sim 146\approx 0.366N^{2}\). This degeneracy can be seen in Fig.
2(b) as the optimal generator jumps back and forth between two operators during the beginning of the squeezing process. These two degenerate eigenvalues subsequently fall until they cross the third largest eigenvalue at \(t\sim 5/(3\chi\sqrt{N})\), corresponding to a discontinuous jump in Fig. 2(b). The eigenvalue corresponding to a rotation axis close to \(\hat{\mathcal{M}}_{x}\) then grows rapidly, eventually becoming the largest eigenvalue at \(t\sim\pi/(2\chi)-1/(\chi\sqrt{N})\). This analysis highlights that the QFIM diagonalization unravels the complicated nonlinear dynamics of the high dimensional quantum system. In fact, with the help of this unraveling, we find that at all times the state is more sensitive than what was shown in Ref. [27]. _Multiparameter estimation.--_ So far, we have focused on optimizing single parameter estimation. However, our QFIM diagonalization scheme inherently optimizes multiparameter estimation as well by finding multiple eigenvectors of the QFIM whose complementary generators commute with one another. This, in turn, could be used in quantum sensors that aim to infer multiple parameters beyond the SQL simultaneously. As an example, at \(t=\pi/(4\chi)\) in Fig. 2(a), the generators associated with the eigenvalues \(\lambda_{1}=0.307\,N^{2}\), \(\lambda_{3}=0.189\,N^{2}\), and \(\lambda_{8}=0.117\,N^{2}\) all commute with one another, meaning one could carry out simultaneous estimation beyond the SQL for all three of the corresponding parameters. The associated generators go as [59] \(\hat{\mathcal{G}}_{1}=c_{1}\sqrt{2}\hat{K}_{z}+c_{2}(\hat{\mathcal{M}}_{x}+\hat{\mathcal{M}}_{y})\), \(\hat{\mathcal{G}}_{3}=c_{2}\sqrt{2}\hat{K}_{z}-c_{1}(\hat{\mathcal{M}}_{x}+\hat{\mathcal{M}}_{y})\), and \(\hat{\mathcal{G}}_{8}=c_{3}\hat{\mathcal{N}}_{x}+c_{4}\hat{\mathcal{N}}_{y}\), with real coefficients \(c_{i}\) that satisfy the normalization condition. For the case of the spin-momentum SU(4) system considered in Ref. [27], a portion of these generators may be found to correspond to interactions which are more physically accessible than the whole generator is. We also show additional details relating to this possible physical scheme in the SM [48]. In this physical example, \(\hat{K}_{z}\) could correspond to a linear acceleration while \(\hat{\mathcal{M}}_{x}+\hat{\mathcal{M}}_{y}\) and \(c_{3}\hat{\mathcal{N}}_{x}+c_{4}\hat{\mathcal{N}}_{y}\) may correspond to spatially dependent rotations, thereby creating the opportunity for many combinations of useful interferometry [60; 33; 61]. We thus consider \((\hat{\mathcal{M}}_{x}+\hat{\mathcal{M}}_{y})/\sqrt{2}\), \(\hat{K}_{z}\), and \(\hat{\mathcal{G}}_{8}\), which still have QFIs of \(0.300\,N^{2}\), \(0.195\,N^{2}\), and \(0.117\,N^{2}\), respectively. Since these operators are in three commuting sub-algebras, they can be independently rotated to any arbitrary axis in the respective sub-algebra's Bloch sphere in order to be made relevant for sensing vector quantities or network node interferometry [62; 61]. More generally, within SU(\(n\)), one is guaranteed sets of \(n-1\) commuting generators [63; 48; 64], thereby guaranteeing sets of \(n-1\) eigenvectors of the QFIM which correspond to simultaneously commuting generators. One could thus select the eigenvector with the largest eigenvalue and search the remaining eigenvectors to find the set of \(n-1\) generators which mutually commute and have suitable eigenvalues that scale beyond the SQL.
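To make the diagonalization step at the core of this procedure concrete, the following minimal numerical sketch (not part of the Letter; the helper names and the symmetric spin-\(J\) construction are our own assumptions) builds the QFIM of Eq. (4) for a state evolved under the OAT Hamiltonian of Eq. (7) and extracts \(\lambda_{\max}\) and the optimal generator of Eq. (3); the same `qfim_pure` routine applies unchanged to, e.g., the 15 SU(4) generators of Eq. (9).

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(N):
    """Collective spin operators J_x, J_y, J_z in the symmetric (spin J = N/2) subspace."""
    J = N / 2
    m = np.arange(J, -J - 1, -1)                                   # m = J, J-1, ..., -J
    Jz = np.diag(m)
    # <J, m+1 | J_+ | J, m> = sqrt(J(J+1) - m(m+1))
    Jp = np.diag(np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1)), k=1)
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / 2j
    return [Jx, Jy, Jz]

def qfim_pure(psi, gens):
    """QFIM of a pure state, Eq. (4): F_{mu nu} = 2<{G_mu, G_nu}> - 4<G_mu><G_nu>."""
    n = len(gens)
    means = [np.vdot(psi, G @ psi).real for G in gens]
    F = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            anti = gens[a] @ gens[b] + gens[b] @ gens[a]
            F[a, b] = 2 * np.vdot(psi, anti @ psi).real - 4 * means[a] * means[b]
    return F

N, chi_t = 20, 0.05                       # atom number and dimensionless OAT time chi*t (hbar = 1)
Jx, Jy, Jz = spin_ops(N)
psi0 = np.linalg.eigh(Jx)[1][:, -1]       # CSS along +x: maximal-eigenvalue eigenvector of J_x
psi_t = expm(-1j * chi_t * (Jz @ Jz)) @ psi0   # one-axis twisting, Eq. (7)

F = qfim_pure(psi_t, [Jx, Jy, Jz])
evals, evecs = np.linalg.eigh(F)
lam_max, O = evals[-1], evecs[:, -1]      # Eq. (3): largest eigenvalue and optimal-generator coefficients
print(f"lambda_max = {lam_max:.1f}   (SQL = {N}, HL = {N**2})")
print("optimal generator = %.3f Jx + %.3f Jy + %.3f Jz" % tuple(O))
```

At \(\chi t=0\) this returns the degenerate SQL eigenvalues discussed above, while for small \(\chi t>0\) the optimal generator lies in the plane spanned by \(\hat{J}_{y}\) and \(\hat{J}_{z}\), consistent with the discussion around Eq. (8).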
Furthermore, the associated Symmetric Logarithmic Derivatives \(\hat{L}_{\mu}\) are guaranteed to commute such that the optimal measurement basis is the same for each parameter. This ensures that the QCRB is always simultaneously attainable for all \(n-1\) parameters as the elements of the Uhlmann curvature matrix \(\mathbb{U}_{\mu\nu}=-i\operatorname{Tr}\bigl{[}\rho[\hat{L}_{\mu},\hat{L}_{\nu}] \bigr{]}/2\) will vanish [65, 66, 67, 28]. _Conclusion and outlook.--_ We have demonstrated that the optimal generator for quantum sensing can be found by finding the eigenvector associated with the largest eigenvalue of the QFIM. This arises as a natural consequence of maximizing differential path lengths through quantum state space when the QFIM is viewed as a Riemannian metric, generalizing the work of Ref. [68] to any metrological process with an underlying Lie group structure. For the examples we considered, unitary parameterization was assumed, but future steps include examining a channel or hybrid parameterization scheme [69, 28, 70] using QFIM diagonalization. Furthermore, the examples in this Letter have focused on diagonalization schemes with pure states, but the procedure is equally valid with mixed states and the properly defined tangent vectors. Here, one must utilize the more general definition of the QFIM given in Ref. [28]. The use of mixed states is then relevant to experiments where a small amount of entanglement entropy between the system and a bath can be generated through either known or unknown dissipative processes. The examples we considered in this work had underlying SU(2) and SU(4) group dynamics. Already in the case of SU(4), one finds that more care must be taken compared to the SU(2) case when considering larger group structures. For one, unitarily rotating the optimal generator to an arbitrary operator is not always possible in larger group structures as this requires the operators to have the same spectrum [48]. We also outline in the SM [48] how to extend our work to general SU(\(n\)) systems by presenting an algorithm to generate an orthogonal operator basis that spans the quantum state space. Moreover, the underlying formalism of this Letter, which we present in full in the SM [48], extends to any dynamical group structure. This makes our procedure relevant to systems described by \(\operatorname{Sp}(n,\mathbb{R})\)[71, 72], SU(\(m,n\)) [73], or translational groups [74], which are the underlying group structures of nonlinear optics, optical lattices, and optomechanical systems, for example. Interestingly, there have been recent efforts to experimentally infer the quantum geometric tensor [75, 76, 77, 78], which is related to the QFI metric through its real component [48, 52]. This leads to the prospect of allowing one to find the optimal generator for quantum sensing without the need for a full theoretical model, only an understanding of the underlying symmetries. This is necessary for complex systems where such models are difficult to derive or fully simulate. Our QFIM diagonalization procedure thus opens an exciting avenue for experiments with complex systems [79, 80, 81, 82, 24, 25, 26, 83, 84], whose current interest is not parameter estimation, to naturally test if the experiment can be useful as a quantum sensor and how to use any generated entanglement in an efficient manner. 
In addition, we can combine numerical approaches with the exact diagonalization procedure for these complex systems, so this work will have relevance for quantum optical control and machine learning methods that have been used effectively for quantum design tasks [85, 86, 87, 88, 89]. We thank John Cooper, Klaus Molmer, and Joshua Combes for useful discussions. This research was supported by NSF PHY 1734006; NSF OMA 2016244; NSF PHY Grant No. 2207963; and NSF 2231377. S.B.J. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG): Projects A4 and A5 in SFB/Transregio 185: "OSCAR". J.T.R. and J.D.W. contributed equally to this work.
2305.14890
HARD: Hard Augmentations for Robust Distillation
Knowledge distillation (KD) is a simple and successful method to transfer knowledge from a teacher to a student model solely based on functional activity. However, current KD has a few shortcomings: it has recently been shown that this method is unsuitable to transfer simple inductive biases like shift equivariance, struggles to transfer out of domain generalization, and optimization time is magnitudes longer compared to default non-KD model training. To improve these aspects of KD, we propose Hard Augmentations for Robust Distillation (HARD), a generally applicable data augmentation framework, that generates synthetic data points for which the teacher and the student disagree. We show in a simple toy example that our augmentation framework solves the problem of transferring simple equivariances with KD. We then apply our framework in real-world tasks for a variety of augmentation models, ranging from simple spatial transformations to unconstrained image manipulations with a pretrained variational autoencoder. We find that our learned augmentations significantly improve KD performance on in-domain and out-of-domain evaluation. Moreover, our method outperforms even state-of-the-art data augmentations and since the augmented training inputs can be visualized, they offer a qualitative insight into the properties that are transferred from the teacher to the student. Thus HARD represents a generally applicable, dynamically optimized data augmentation technique tailored to improve the generalization and convergence speed of models trained with KD.
Arne F. Nix, Max F. Burg, Fabian H. Sinz
2023-05-24T08:38:44Z
http://arxiv.org/abs/2305.14890v2
# HARD: Hard Augmentations for Robust Distillation ###### Abstract Knowledge distillation (KD) is a simple and successful method to transfer knowledge from a teacher to a student model solely based on functional activity. However, current KD has a few shortcomings: it has recently been shown that this method is unsuitable to transfer simple inductive biases like shift equivariance, struggles to transfer out of domain generalization, and optimization time is magnitudes longer compared to default non-KD model training. To improve these aspects of KD, we propose Hard Augmentations for Robust Distillation (HARD), a generally applicable data augmentation framework, that generates synthetic data points for which the teacher and the student disagree. We show in a simple toy example that our augmentation framework solves the problem of transferring simple equivariances with KD. We then apply our framework in real-world tasks for a variety of augmentation models, ranging from simple spatial transformations to unconstrained image manipulations with a pretrained variational autoencoder. We find that our learned augmentations significantly improve KD performance on in-domain and out-of-domain evaluation. Moreover, our method outperforms even state-of-the-art data augmentations and since the augmented training inputs can be visualized, they offer a qualitative insight into the properties that are transferred from the teacher to the student. Thus HARD represents a generally applicable, dynamically optimized data augmentation technique tailored to improve the generalization and convergence speed of models trained with KD.1 Footnote 1: Code available at [https://github.com/sinzlab/HARD](https://github.com/sinzlab/HARD) ## 1 Introduction Knowledge distillation (KD) methods [27; 37; 60] are powerful and flexible tools to transfer the knowledge of a given _teacher_ model to the transfer target, the _student_ model, without copying the weights. Instead, these methods match the student's functional activity (e.g. the softmax output) to that of the teacher for the presented inputs. Hence, those methods are independent of architectural details and allow knowledge distillation to be applied in scenarios like model compression [7; 27], continual learning [4; 42; 52], or even neuroscience [35], where traditional transfer learning would be impossible to use. KD methods also appear to be key to training new models that trade off inductive biases for more flexibility and more parameters [17; 53; 55] on smaller data [9; 40; 54]. However, Nix et al. [40] recently showed that current KD methods fail to transfer even simple equivariances between teacher and student. Additionally, previous work showed that KD leads to a larger gap between student and teacher on out-of-domain evaluation performance compared to within domain performance [6; 41], even in cases where the student almost perfectly matches the teacher [6] (see Table 5). This phenomenon is especially pronounced for particularly robust teachers [41]. Thus we expect that transferring robustness properties is a difficult problem for KD in general. We hypothesize that KD methods are in principle capable of transferring most knowledge from a teacher to a student if the training data is chosen adequately. We confirm this hypothesis on a small toy example (Section 3), showing the importance of input data for KD. 
Motivated by this demonstration, we propose our _Hard Augmentations for Robust Distillation (HARD)_ method, a general framework (Section 4) to generate augmented training inputs which improve knowledge transfer by maximizing the distance between teacher and student while leaving the teacher's output unchanged. Consequently, our framework moves the input in directions that the teacher is invariant to but which are most challenging for the student. Our experiments (Section 5) show that our task-agnostic framework improves transfer effectiveness and thereby solves the problem of KD not being able to transfer shift equivariance [40]. Additionally, as part of our framework, we propose several parameterized augmentations (Section 4.1) that can be integrated with most existing KD methods and are applicable to a variety of different computer vision tasks. Finally, we demonstrate across multiple different models on the tasks of CIFAR10 and ImageNet that our framework learns interpretable augmentations that improve KD to the same level and in many cases even beyond established data augmentation methods, even when evaluated in an out-of-domain setting. ## 2 Related Work There is a long tradition of using data augmentations to artificially extend training data for deep learning models, particularly in computer vision, be it through adding Gaussian noise, random crops, shifts, flips, or rotations [18; 33]. In recent years, data augmentations have become more complex [12; 24; 28; 39; 59; 61], employing a multitude of different heuristics with the aim to improve generalization and in some cases also out-of-domain performance [24]. A particularly popular augmentation method is _Mixup_ [61], which randomly interpolates two input samples and their labels, respectively. Similarly, _Cutmix_ [59] combines two input images by pasting a random crop of one image on top of the other. Also, many studies use parameterized augmentations optimized to improve a given objective [11; 25; 48; 67], and some even optimize the augmentations to improve on an adversarial objective [2; 3; 20; 56; 63; 64; 65], however, without applying them for knowledge transfer. In KD, applying data augmentations is a very effective tool to improve matching between student and teacher [6; 58], and optimizing on a meta level can be useful to aid the teaching [43]. Similar to our work, Haidar et al. [21], Rashid et al. [45], and Zhang et al. [62] utilized adversarial objectives to optimize data augmentations for KD; however, they focused solely on natural language processing tasks and did not optimize the augmentations towards invariance. Inspired by this large body of work we formulate a task-agnostic framework containing only one building block that is specific to the data domain - the instantiation of the augmentor model generating the augmented data samples - for which we offer a variety of reasonable model choices based on spatial transformer modules [29], Mixup [61], and variational autoencoders [10; 31; 34]. Figure 1: Our task-agnostic HARD framework switches between training the student to match the teacher and training the augmentor to generate new samples on which the student underperforms while maintaining high teacher performance. We optimize the augmentor and student in interchanging phases through a student-teacher loss \(\mathcal{L}_{\tilde{\mathrm{s}}\leftrightarrow\tilde{\mathrm{t}}}\) and teacher-teacher loss \(\mathcal{L}_{\tilde{\mathrm{t}}\leftrightarrow\tilde{\mathrm{t}}}\).
We switch between the two phases by comparing the default loss \(\mathcal{L}_{\tilde{\mathrm{s}}}\) on augmented data to pre-defined thresholds. ## 3 Input Data Matters for Functional Transfer We hypothesize that the choice of input data is crucial to successful knowledge distillation, and we illustrate the impact of training data by a simple toy example. To demonstrate this, consider a simple KD task in which we instantiate the teacher model by the true function \(f_{\mathrm{t}}(x)=\cos(x)\) and the student \(f_{\mathrm{s}}(x)\) by a three-layer Multilayer Perceptron (MLP) with ReLU activation [1]. We use input data \(x\) chosen such that it does not capture the teacher's \(\cos(x)\) periodicity (orange points in Figure 2A). Simple KD neither interpolates between the given training points nor extrapolates beyond them (Figure 2E). Hence the student neural network does not learn the teacher's periodicity and fails to interpolate and extrapolate beyond the training data (Figure 2A). Augmenting the training data with more helpful inputs \(\tilde{x}\) and teacher labels \(f_{\mathrm{t}}(\tilde{x})=\cos(\tilde{x})\) could mitigate this problem. One method successfully applied to KD [6] is to extend the input data through Mixup [61]. When applying this to our illustrative example, we create new training inputs \(\tilde{x}\) through linear interpolation between pairs of input points \(\tilde{x}=(1-\alpha)x_{1}+\alpha x_{2}\) (Figure 2F) and record the corresponding teacher responses \(f_{\mathrm{t}}(\tilde{x})=\cos(\tilde{x})\). Thus, the student learns to interpolate between training points, but Mixup does not enhance extrapolation (Figure 2B). To generate datapoints that would interpolate and extrapolate beyond already available training points, we could simply augment by adding Gaussian noise \(\epsilon\) to the available data points, \(\tilde{x}=x+\epsilon\), hence interpolating and extrapolating beyond the training data (Figure 2G). This strategy helps our student to match the teacher also outside the original training regime (Figure 2C). However, the student only improves within a fixed margin that is determined by the noise distribution's mean and variance. We could obviously improve interpolation and extrapolation by increasing the noise distribution's variance or shifting its mean; however, as we move to a high dimensional image input space (\(x\in\mathbb{R}\rightarrow\vec{x}\in\mathbb{R}^{N}\)), it becomes unclear how to heuristically select helpful new samples, and at the same time random exploration strategies become computationally infeasible. Instead, we propose to optimize a parameterized augmentation to efficiently generate new, hard training samples on which the student lacks performance, as here the student could improve the most. In our toy example, we illustrate this by optimizing the Gaussian's parameters (mean and variance) according to our augmentation framework _HARD_, which we will present in the next section. This provides us with a noise distribution which we use to draw new helpful training examples \(\tilde{x}\) that transfer inter- and extrapolation to the student network (Figure 2D,H). Overall, this toy example shows that learning hard augmentations to select new helpful data points is crucial to efficiently improve extrapolation beyond the training distribution. Figure 2: Fitting the student, a three-layer ReLU MLP, to the teacher function, \(\cos(x)\), for \(10,000\) iterations.
We show results for 10 random seeds (**A-D**) and the distribution of (augmented) training inputs as a normalized histogram (**E-H**). We compare baseline (no augmentations) with Mixup, Gaussian noise and a HARD-optimized noise distribution. We report mean-squared-error (MSE) on 100 test inputs sampled from \(\mathcal{U}_{[-10,10]}\). ## 4 Learning Hard Augmentations for Robust Distillation (HARD) Our task-agnostic HARD framework learns to augment training images so as to most efficiently help knowledge transfer from a teacher to a student model. Our method requires three main components: a teacher model with frozen parameters, a student model that should learn knowledge from the teacher, and a parameterized augmentation model that learns to augment images such that most of the teacher's knowledge is transferred to the student. In classical KD methods [27], the objective is to minimize a distance \(\mathcal{D}\left[f_{\text{s}}(x),f_{\text{t}}(x)\right]\) between the student's activation \(f_{\text{s}}(x)\) and the teacher's activation \(f_{\text{t}}(x)\) on given inputs \(x\in\mathbb{R}^{n}\). Usually, this would be the Kullback-Leibler divergence between the softmax distributions of teacher and student. Unfortunately, only considering training data could miss properties of the teacher (e.g. shift invariance) that might be crucial for generalization (see Section 3 for an illustrative example). To resolve this issue, we learn a parametrized augmentation model \(g_{\text{a}}\) to generate new input data points \(\tilde{x}=g_{\text{a}}(x)\) transferring such invariance properties from the teacher to the student. Hence, we define a _teacher-student loss_ considering the more general case of matching student and teacher on augmented inputs \(\tilde{x}\in\mathbb{R}^{n}\): \[\mathcal{L}_{\tilde{\text{s}}\leftrightarrow\tilde{\text{t}}}=\mathcal{D}\left[f_{\text{s}}(\tilde{x}),f_{\text{t}}(\tilde{x})\right]\;. \tag{1}\] To specifically transfer the teacher's invariance properties to the student, we propose a _teacher-teacher loss_ pushing the augmentor towards generating data points on which the teacher is invariant, \[\mathcal{L}_{\tilde{\text{t}}\leftrightarrow\tilde{\text{t}}}=\mathcal{D}\left[f_{\text{t}}(\tilde{x}),f_{\text{t}}(x)\right]\;, \tag{2}\] as these are often useful augmentations for generalization. Using both of these losses, we optimize the augmentor's parameters \(\theta_{\text{a}}\) to generate augmented samples on which the teacher produces similar activations but the student differs from them (Figure 1 top), and simultaneously we optimize the student's parameters \(\theta_{\text{s}}\) to perform well on those augmentations (Figure 1 bottom): \[\max_{\theta_{\text{a}}}\;\lambda_{\text{s}}\mathcal{L}_{\tilde{\text{s}}\leftrightarrow\tilde{\text{t}}}-\lambda_{\text{t}}\mathcal{L}_{\tilde{\text{t}}\leftrightarrow\tilde{\text{t}}}\qquad\text{and}\qquad\min_{\theta_{\text{s}}}\;\mathcal{L}_{\tilde{\text{s}}\leftrightarrow\tilde{\text{t}}}\;. \tag{3}\] Here, \(\lambda_{\text{s}}\) and \(\lambda_{\text{t}}\) trade off the loss terms and are treated as hyper-parameters.
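The two objectives in Eqs. (1)-(3) are straightforward to implement. Below is a minimal PyTorch-style sketch, not taken from the released repository; the helper names, the use of a softened KL divergence as \(\mathcal{D}\), and the temperature `T` are our own illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_distance(out_a, out_b, T=1.0):
    # D[., .]: KL divergence between softened softmax outputs, the usual KD choice.
    return F.kl_div(F.log_softmax(out_a / T, dim=1),
                    F.softmax(out_b / T, dim=1),
                    reduction="batchmean")

def hard_losses(x, student, teacher, augmentor, lambda_s=1.0, lambda_t=1.0):
    """One batch of the objectives in Eqs. (1)-(3).

    The teacher's parameters are frozen; gradients still flow through its
    forward pass so the augmentor can be trained on the teacher-teacher loss."""
    x_aug = augmentor(x)                        # tilde{x} = g_a(x)
    with torch.no_grad():
        t_clean = teacher(x)                    # f_t(x), no gradient needed
    t_aug = teacher(x_aug)                      # f_t(tilde{x})
    s_aug = student(x_aug)                      # f_s(tilde{x})

    loss_st = kd_distance(s_aug, t_aug)         # Eq. (1): teacher-student loss
    loss_tt = kd_distance(t_aug, t_clean)       # Eq. (2): teacher-teacher loss

    student_obj = loss_st                                        # minimized over theta_s
    augmentor_obj = -(lambda_s * loss_st - lambda_t * loss_tt)   # Eq. (3), written as a minimization
    return student_obj, augmentor_obj
```

In practice, only the augmentor's optimizer would step on `augmentor_obj` and only the student's optimizer on `student_obj`, matching the alternating schedule described next.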
We train both components separately, switching from training the augmentor to training the student when the student's performance on augmented data gets worse than a pre-defined threshold (\(\mathcal{L}_{\tilde{\text{s}}}>\ell_{\text{max}}\)), and we switch back from student to augmentor training when the student's performance on augmented data surpasses a pre-defined threshold (\(\mathcal{L}_{\tilde{\text{s}}}<\ell_{\text{min}}\); Figure 1). To prevent catastrophic forgetting, we save augmentors at every switch and employ an augmentor randomly chosen out of the set of previously saved augmentors in each iteration when training the student. ### The augmentor models To generate new input data points it is important to choose an augmentor that suits the desired application and is powerful enough to generate useful augmentations. Usually, we do not know a priori what useful augmentations are and thus should try to allow as much flexibility as possible. Additionally, some variance over augmentations could benefit the transfer. Thus, all augmentors in our study introduce randomness in the model by adding Gaussian noise into the computation of the augmentation through the reparametrization trick [31]. While our framework is universally applicable across domains, choosing an effective augmentation model likely needs to be addressed for each task individually. In our experiments, we use the following augmentor models: **HARD-Affine.** In the simplest model, we limit the augmentations to affine transformations of the coordinate grid of pixel locations, i.e. shifts, rotations, scalings, and shears of images. Models implementing such transformations are known as _spatial transformers_ [29]. We leverage this model for our augmentor by learning a distribution over the entries of an affine transformation matrix \(\vartheta\in\mathbb{R}^{2\times 3}\) that defines the transformation of the sampling grid, i.e. a transformation that maps the pixel positions from the original image to the augmented image (Figure 3A). **HARD-Mix.** Additionally, we consider a slightly more complex augmentor model, which is an adaptive variant of the commonly used Mixup [61] and Cutmix [59] augmentations. However, instead of randomly sampling the ratio and cutout position that are used to combine images, we learn how to combine the images dependent on the input images. We achieve this by performing a patch-wise projection of the input image, followed by comparing each patch with the same query vector sampled from a learned distribution (Figure 3B). We normalize similarities for each patch over each group of images and use the resulting weights to combine the original image patches, giving a combined image. This mechanism allows our augmentor to decide which features of which image are shown to the student, enabling it to explore the interpolated space between images systematically, instead of randomly. As it would not make sense for the teacher to be invariant to an interpolation as it is generated by HARD-Mix, we do not consider the teacher-teacher loss \(\mathcal{L}_{\tilde{\text{t}}\leftrightarrow\tilde{\text{t}}}\) in this case and optimize student and augmentor jointly instead. **HARD-VAE.** To lift constraints further, we wanted to use a more powerful augmentor that could generate a large variety of images across the entire image-space. As the augmentor has to generate new samples on-the-fly during the student training, the generation process needs to be very fast, limiting the choice of useful generative models.
For this reason, we focus on variants of the variational autoencoder architecture [31], allowing for good image reconstructions which can be achieved reasonably fast in a single forward pass (Figure 3D). For CIFAR, we choose the _very deep VAE_ [10] model, which we finetune by solely optimizing parameters of the posterior network from layer 10 onward in the decoder. For the experiments on ImageNet, we use a Residual-Quantized VAE (RQ-VAE) [34] pretrained on ImageNet, which we finetune in its entirety and add a noise vector on the latent state. Hence, as training progresses, the model changes from generating plain reconstructions of a given image to input-conditioned generations that serve as our augmentations. Figure 3: Illustration of the augmentor models used in our experiments. **(A)** HARD-Mix: Image-dependent patch-wise interpolation of multiple images. **(B)** HARD-Affine: Learned distribution of affine transformations in the pixel coordinates. **(C)** HARD-VAE: Finetuning (parts of) a pretrained VAE. ## 5 Experiments ### Transferring equivariance For our initial experiment, we reproduce the setup from Nix et al. [40] to test whether we can transfer the inductive bias from a shift equivariant teacher, CNN and ResNet18 [22], to a student that does not have this inductive bias built into its architecture: a Multi-Layer Perceptron (MLP) and a Vision Transformer (ViT) [17]. When training the students and teachers by themselves on standard MNIST [15] training data, we observe a small drop of generalization performance (-0.6% and -1.2%) between teacher and student on the MNIST test set and a large gap (-56.1% and -52.4%) when we evaluate on a version of the test set in which digits were randomly shifted [38]. As another baseline, we applied plain KD to transfer shift equivariance from teacher to student. Consistent with the findings of Nix et al. [40], we only observe a small improvement on the centered (+0.2% and +0.3%) and the shifted (+5.1% and +4.3%) test sets, which likely results from the centered training data we use for transfer. We then test if combining KD with our augmentations produced by HARD-Affine would outperform these baselines. The resulting student model improves significantly on shifted inputs (+28.6% and +39.4%) compared to plain KD, and the generated images clearly show that the augmentor learns to shift the digits within the image. Compared to Nix et al. [40], our approach outperforms their results on the ViT task but, while improving the out-of-domain generalization by 28.6% over baseline, stays behind the Orbit performance on the MLP task. This demonstrates that our method, while acting on fewer parts of the network compared to Orbit and while being a more general method, can improve or reach better performance when it comes to transferring invariances, and can be generalized to bigger datasets, as we show below. We verify that the student's performance improvement is specifically due to our data generation framework in two control experiments. The first experiment (Random Affine) augments the training inputs of a stand-alone student model with a random affine transformation akin to our augmentor model, but using transformation parameters sampled uniformly from a pre-defined, reasonably constrained range (i.e. ensuring the digit is always fully visible). This student performs well on the shifted test set; however, performance significantly degrades on the centered test set.
In comparison, our HARD-Affine model is unconstrained and learns more useful augmentations, leading to better performance on the centered test sets. In our second control (Shifts) we asked how much data augmentation could improve the performance in the best case (without KD). For this, we augment the inputs by the same random shifts that were applied to obtain the shifted test data, leading to great improvements on the shifted test set. However, our learned augmentations achieve scores in a similar range on the shifted evaluation and outperform this control on the centered test set. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{CNN \(\rightarrow\) MLP} & \multicolumn{2}{c}{ResNet18 \(\rightarrow\) ViT} \\ \cline{2-5} Method & Centered & Shifted & Centered & Shifted \\ \hline Teacher only & 99.0 \(\pm\) 0.0 & 91.3 \(\pm\) 0.5 & 99.5 \(\pm\) 0.0 & 92.8 \(\pm\) 0.5 \\ Student only & 98.4 \(\pm\) 0.0 & 35.2 \(\pm\) 0.7 & 98.3 \(\pm\) 0.0 & 40.4 \(\pm\) 0.8 \\ + _Random Affine_ & _92.1 \(\pm\) 0.6_ & _81.0 \(\pm\) 2.0_ & _95.4 \(\pm\) 0.3_ & _90.4 \(\pm\) 1.0_ \\ + _MNIST-C Shifts_ & _98.1 \(\pm\) 0.1_ & _86.5 \(\pm\) 0.3_ & _98.5 \(\pm\) 0.0_ & _93.7 \(\pm\) 0.2_ \\ \hline Orbit [40] & **98.8** & **95.2** & 98.4 & **84.0** \\ KD & 98.6 \(\pm\) 0.0 & 40.3 \(\pm\) 0.6 & 98.6 \(\pm\) 0.1 & 44.7 \(\pm\) 1.9 \\ \hline + HARD-Affine & 98.6 \(\pm\) 0.1 & 68.9 \(\pm\) 2.5 & **99.2 \(\pm\) 0.0** & **84.1 \(\pm\) 2.3** \\ \hline \hline \end{tabular} \end{table} Table 1: MNIST (columns “Centered”) and MNIST-C (columns “Shifted”) test accuracies (mean and standard error of the mean across 4 random seeds) comparing KD without augmentation and our HARD-Affine method to Orbit transfer [40], which also learns and transfers equivariances. The left two columns show the transfer results from a small CNN teacher to a MLP student. The right columns show analogous experiments between a ResNet18 teacher and a small ViT student. The best performing transfer is shown in bold for each column. Examples of our HARD-Affine learned data augmentations are shown on the right. We include the controls _Random Affine_ and _MNIST-C Shifts_ (marked by italics). ### Transfer on natural images After demonstrating that our method successfully captures the difference between teacher and student and bridges a gap in inductive bias, we now want to test whether this effect holds up in more realistic scenarios. **CIFAR experiments.** We begin by applying our framework to CIFAR10 [32] on three different KD scenarios (see Table 2). Specifically, we test scenarios where the student lacks an inductive bias (ResNet18\(\rightarrow\)ViT), where the teacher has more capacity and access to data than the student (ResNet101\({}^{*}\rightarrow\)ResNet18), and scenarios combining both properties (ResNet101\({}^{*}\rightarrow\)ViT). For all experiments, we keep the experimental setup as close to our previous MNIST experiments as possible (see Appendix A for details). We start by establishing baselines, training only the teacher and only the student models on the data and evaluating default KD. We observe that on this small data set a small ResNet18 performs better (78.5% accuracy) than a larger ViT (68.5%), likely because of the ResNet's superior inductive bias on this task and small data set.
Next, we find that adding default data augmentations (random rotations, cropping, horizontal flips) to the student baselines significantly boosts performance to 92.6% and 78.3% for the ResNet18 and ViT, respectively. Adding these default augmentations to typical KD leads to a great performance boost, too (see Table 2). Given that adding default data augmentation to KD already leads to a substantial performance boost, it is particularly noteworthy that the data augmentations learned by HARD-Affine outperform this baseline for the ViT. Qualitatively, the augmented images exhibit a large variety of spatial transformations, suggesting that a difference in these examples lead to the observed performance boost (Table 2, right). We then investigated performance of our HARD-VAE augmentation strategy and found performance improvement over the KD + standard augmentations baseline for transfer to the ViT (+1.0% and +1.9%) student. However, inspecting the augmented images indicates that our augmentor lacks the expected shifts of object positions, but rather learns stylistic changes in the image (Table 2, right). This motivated us to combine HARD-Affine and HARD-VAE augmentation resulting in best performance (up to +7.8%) for all teacher-student pairings (HARD-VAE-Affine in Table 2) and the resulting images demonstrate variability in both style and spatial alignment (Table 2, right). ImageNet experimentsHaving established our methods' performance for CIFAR10, we extend our results to classification on ImageNet [14]. Here we aim to distill a ResNet50 [22] teacher, trained with Deep-augment and AugMix data augmentations [25], into a smaller ResNet18 and ViT-S (small vision transformer variant) [17] that we want to be particularly robust to natural image corruptions. The distillation into ResNet18 allows us to investigate the capability for model compression, because ResNet18 is a smaller network compared to ResNet50, but with a similar architecture. Distillation into a ViT-S architecture with a patch-size of 14 tests additionally if KD transfers the ResNet50's inductive bias of shift equivariance on a larger dataset. We evaluate on common test sets for both in-domain (ID) [5, 46] and out-of-domain (OOD) [19, 23, 25, 26, 57] generalization performance (Tables 3 and 4, respectively). To properly investigate the extrapolation abilities of KD training, we trained a strong KD baseline by applying several \begin{table} \begin{tabular}{l c c c} \hline \hline & ResNet18 & ResNet101\({}^{*}\) & ResNet101\({}^{*}\) \\ & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) \\ & ViT & ViT & ResNet18 \\ \hline Teacher only & 92.5 \(\pm\) 0.0 & 95.5 & 95.5 \\ Student only & 68.5 \(\pm\) 0.5 & 68.5 \(\pm\) 0.5 & 78.5 \\ + Standard Aug. & 78.3 \(\pm\) 0.4 & 78.3 \(\pm\) 0.4 & 92.6 \\ + Random Affine Aug. & 58.9 \(\pm\) 0.4 & 58.9 \(\pm\) 0.4 & 79.3 \\ \hline KD & 67.9 \(\pm\) 0.1 & 68.5 & 84.4 \\ + Standard Aug. & 80.9 \(\pm\) 0.1 & 79.3 & 93.3 \\ \hline + HARD\(\overrightarrow{\text{A}}\)-Affine & **87.8 \(\pm\) 0.8** & 84.4 & 93.5 \\ + HARD\(\overrightarrow{\text{A}}\)-VAE & 81.9 \(\pm\) 0.4 & 81.2 & 91.0 \\ + HARD\(\overrightarrow{\text{A}}\)-VAE-Affine & **87.6 \(\pm\) 0.6** & **87.1** & **94.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Test accuracies on the CIFAR10 test set. Standard error of the mean is reported where available across three different seeds. Best transfer is highlighted in bold. The ResNet101\({}^{*}\) models were pretrained on ImageNet. 
Examples of augmented test images from ResNet18\(\rightarrow\)ViT experiments with samples across different iterations are shown to the right. data augmentations: we randomly switch between Cutmix [59] and Mixup [61], each drawing their interpolation weight from a \(\beta\)-distribution with \(\alpha=1\), as well as AugMix [24] augmentations. For the standalone student training, we additionally apply various lighter data augmentations (Cutmix with \(\alpha=1\), Mixup with \(\alpha=0.1\), and Trivialaugment [39]). Since we ask how KD can be improved in a setting of limited resources, we run our experiments an order of magnitude shorter than proposed for the state-of-the-art in KD [6] (200 epochs for all ResNet18 and 150 epochs for all ViT-S experiments). For student and KD models, we perform a small grid search over learning-rate and weight-decay hyperparameters. We then train the models with our HARD framework based on the hyperparameters of our best performing KD setting. The augmentor-specific settings are selected through a small grid-search in the ResNet18 setting (for details see Appendix A). We first evaluate the ID performance of our methods (Table 3) beginning with the standalone teacher and student baselines, which reveal a larger performance gap between the ResNet18 student and the ResNet50 teacher compared to the ViT-S student (5.1% and 2.6% on the ImageNet validation set, respectively). Plain KD significantly reduces this gap for the ViT-S (+2.1% performance improvement compared to standalone). For the ResNet18 student KD achieves only small (0.7% V2) improvements or no improvements (0.0% Val), even though the initial gap between teacher and student is larger. Applying HARD-Affine, HARD-Mix and HARD-VAE augmentation on this task improves over plain KD across most augmentation models and test sets with student performance gains of up to 0.9% for ResNet18 (HARD-Affine) and 0.6% for ViT-S (HARD-VAE). For ViT-S, our best-performing HARD-VAE method even matches the teacher's performance on 2 out of 3 test sets. For the OOD setting (Table 4), we observe that the initial gap between student and teacher is larger than on ID data across all data sets (up to 35.1% difference), except for Im-A in the ViT-S setting. The aggressive data augmentations we apply for the plain KD baseline favor OOD performance, hence it is expected that plain KD results in good performance improvement over the standalone baseline (up to 21.3% improvement on Im-C). All three HARD approaches transfer some of the teacher's generalization abilities leading to improvements on a number of students and data sets, however, HARD-Affine fails to reach the KD performance in both settings and HARD-VAE underperforms for the ResNet18 student in these OOD scenarios. However, HARD-Mix and HARD-VAE (for ViT-S) outperform plain KD on several test sets and are roughly on par on all others, across the board. 
Given that we chose a very strong baseline by applying aggressive state-of-the-art data augmentations, we find these results especially encouraging. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{ResNet50 \(\rightarrow\) ResNet18} & \multicolumn{4}{c}{ResNet50 \(\rightarrow\) ViT-S} \\ \cline{2-10} & Im-A & Im-R & Im-C \(\downarrow\) & Sketch & Style & Im-A & Im-R & Im-C \(\downarrow\) & Sketch & Style \\ \hline Teacher & 3.8 & 46.8 & 53.0 & 32.6 & 21.2 & 3.8 & 46.8 & 53.0 & 32.6 & 21.2 \\ Student & 1.6 & 30.0 & 88.1 & 18.4 & 4.4 & **8.0** & 26.3 & 78.1 & 13.8 & 6.6 \\ \hline KD & 1.6 & **40.2** & 69.2 & 26.0 & 13.4 & 3.3 & 45.0 & 56.8 & 29.6 & 18.7 \\ \hline + HARD\(\overrightarrow{\pi}\)-Affine & 1.5 & 38.2 & 73.1 & 24.9 & 10.4 & 3.4 & 40.8 & 62.2 & 26.2 & 14.5 \\ + HARD\(\overrightarrow{\pi}\)-Mix & **1.8** & 39.9 & **68.8** & **26.1** & **13.7** & 3.5 & **45.4** & **56.2** & 29.9 & **19.2** \\ + HARD\(\overrightarrow{\pi}\)-VAE & 1.7 & 39.5 & 72.5 & 25.8 & 12.1 & 3.4 & **45.4** & 57.4 & **30.7** & 18.1 \\ \hline \hline \end{tabular} \end{table} Table 4: Out-of-domain evaluation for ImageNet: Reporting Top-1 accuracy in % on ImageNet-A [26], ImageNet-R [25], ImageNet-Sketch [57] and ImageNet-Style [19] and mean-corruption-error on ImageNet-C (lower is better) [23]. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{ResNet50 \(\rightarrow\) ResNet18} & \multicolumn{4}{c}{ResNet50 \(\rightarrow\) ViT-S} \\ \cline{2-10} & Val & ReaL & V2 & Val & ReaL & V2 \\ \hline Teacher & 75.8 & 83.1 & 63.7 & 75.8 & 83.1 & 63.7 \\ Student & 70.7 & 78.1 & 57.4 & 73.2 & 79.4 & 60.3 \\ \hline KD & 70.7 & 78.7 & 58.1 & 75.3 & 82.8 & 62.9 \\ \hline + HARD\(\overrightarrow{\pi}\)-Affine & **71.6** & **79.5** & 58.6 & 74.9 & 82.3 & 62.4 \\ + HARD\(\overrightarrow{\pi}\)-Mix & 71.4 & 79.4 & 58.6 & 75.7 & 83.0 & 63.3 \\ + HARD\(\overrightarrow{\pi}\)-VAE & 71.0 & 78.9 & **58.7** & **75.8** & **83.1** & **63.5** \\ \hline \hline \end{tabular} \end{table} Table 3: In-domain evaluation for ImageNet: reporting Top-1 accuracy in % on ImageNet-Validation [14], ImageNet-ReaL [5] and ImageNet-V2 [46] with KD from a robust ResNet50 [25] teacher to ResNet18 (columns 2-4) and ViT-S (columns 5-7) students. ## 6 Discussion InterpretabilityHARD enables us to gain insight into the distillation mechanism as the augmented images illustrate the knowledge that is transferred (Figure 4). As expected, HARD-Affine learns to downscale, shift, and rotate the images such that the object in the image is shown in different places (rows 2-4 in Figure 4) and scales the images such that they are cropped (row 1). As HARD-Mix is a dynamically learnable extension of mixup, it either merges two objects into the same picture (row 1 and 4), especially if they are not in the same position, or uses one image to change the style (row 2) or background (row 3) of another. Finally, HARD-VAE mostly impacts the style of an image and additionally adds small distortions to specific image regions, which is noticeable in the altered image brightness and the blurring of some high-frequency features. Limitations and broader impactState-of-the-art knowledge distillation typically deals with huge models (billions of parameters) and incredibly long training times (>9,000 epochs) [6; 13]. In comparison, our study is computationally lightweight in requiring approximately 400 A100 GPU days across all our experiments.
We believe exploring even more flexible augmentor models with a semantically meaningful latent space, such as diffusion models [44; 47; 49; 50], could improve our proposed methods even further. However, generating a single image with out-of-the-box diffusion models requires multiple seconds. This is prohibitively long, so we leave exploring their usability in our proposed dynamic data augmentation technique for future work. In general, KD allows us to distill smaller models that perform similarly to large foundation models. Improving the distillation process to be more efficient lowers the barrier of applying KD across labs with various compute budgets and decreases environmental impact. At the same time, transferring generalization abilities effectively and consistently results in smaller distilled models that are appealing to use; thus we would expect such smaller models to be used abundantly, lowering the general carbon footprint for model usage. In conclusion, our study proposes avenues to improve KD in terms of performance and efficiency, and hence environmental impact. ## 7 Conclusion In this work we introduced a general, task-agnostic, and modular framework to extend knowledge distillation by learnable data augmentations. The augmentation models are optimized to generate inputs on which teacher and student disagree, keeping the teacher's predictions unchanged at the same time. We show that these augmentations enable KD to transfer equivariance properties, even in cases where the teacher's inductive biases are distinct from the student's. We further demonstrate that our learned augmentations achieve performance competitive with established classical data augmentation techniques even when student and teacher share similar inductive biases. Overall, our framework offers a powerful tool that enhances transfer performance and provides unique insight into the transferred knowledge through its interpretable augmentations. Figure 4: Example augmentations applied to images of the ImageNet validation set obtained from augmentor models in the ViT-S setting at the end of training. ### Acknowledgements We thank Felix Schluter for his helpful insights into evaluation problems as well as Mohammad Bashiri, Pawel Pierzchlewicz and Suhas Shrinivasan for helpful comments and discussions. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Arne Nix and Max F. Burg. This work was supported by the Cyber Valley Research Fund (CyVy-RF-2019-01), by the German Federal Ministry of Education and Research (BMBF) through the Tubingen AI Center (FKZ: 01IS18039A), by the Deutsche Forschungsgemeinschaft (DFG) in the SFB 1233, Robust Vision: Inference Principles and Neural Mechanisms (TP12), project number: 276693517, and funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 432680300 - SFB 1456. FHS is supported by the Carl-Zeiss-Stiftung and acknowledges the support of the DFG Cluster of Excellence "Machine Learning - New Perspectives for Science", EXC 2064/1, project number 390727645.
2310.10840
Direct observation of small scale capillary wave turbulence using high speed digital holographic microscopy
It is now known that capillary waves driven upon a fluid interface by high frequency ($>1$~MHz) ultrasound exhibit capillary wave turbulence: the appearance of waves with phase and wavelength far removed from the excitation signal that drives them. An important step towards understanding atomization phenomena driven in this system, these capillary waves may now be studied using high-speed digital holographic microscopy. We observe Zakharov-Kolmogorov weak wave turbulence for a limited range of input power, and find broader turbulence phenomena outside this range. We see discrete thresholds as the input power is increased, where higher and higher frequency responses are driven in the capillary waves with sudden onset between regimes. Here, we employ spatial analysis to find one such extension of the capillary wave response to higher frequencies, suggesting there is additional information in the spatial distribution of the capillary wave that is rarely if ever measured. We verify via frequency modulation that nonlinear resonance broadening is present, which undermines the use of Faraday wave or parametric wave theories to characterize these waves, important in the context of atomization which is not a Faraday wave process.
William Connacher, Jeremy Orosco, Oliver Schmidt, James Friend
2023-10-16T21:27:05Z
http://arxiv.org/abs/2310.10840v1
Direct observation of small scale capillary wave turbulence using high speed digital holographic microscopy ###### Abstract It is now known that capillary waves driven upon a fluid interface by high frequency (\(>1\) MHz) ultrasound exhibit capillary wave turbulence: the appearance of waves with phase and wavelength far removed from the excitation signal that drives them. An important step towards understanding atomization phenomena driven in this system, these capillary waves may now be studied using high-speed digital holographic microscopy. We observe Zakharov-Kolmogorov weak wave turbulence for a limited range of input power, and find broader turbulence phenomena outside this range. We see discrete thresholds as the input power is increased, where higher and higher frequency responses are driven in the capillary waves with sudden onset between regimes. Here, we employ spatial analysis to find one such extension of the capillary wave response to higher frequencies, suggesting there is additional information in the spatial distribution of the capillary wave that is rarely if ever measured. We verify via frequency modulation that nonlinear resonance broadening is present, which undermines the use of Faraday wave or parametric wave theories to characterize these waves, important in the context of atomization which is not a Faraday wave process. ## I Introduction Weakly nonlinear interactions between a large number of waves with random phase result in wave turbulence that is typically modeled with the well developed statistical theory, 'weak' wave turbulence (WWT) [1]. Ample experimental work has been done on wave turbulent systems including liquid surface waves [2; 3], plasmas [4; 5], and solid plates [6]. Studies in this area frequently provide evidence in support of WWT theory. A fundamental result of WWT is the Kolmogorov-Zakharov (KZ) spectrum, which shows the power spectral density (PSD) of the waves has a power-law dependence upon the frequency (or wavenumber) over a conservative regime of the wave response. For liquid surface waves dominated by surface tension (_i.e._, capillary waves), the KZ spectrum is represented by \[S(f)\propto\epsilon^{1/2}\Big{(}\frac{\gamma}{\rho}\Big{)}^{1/6}f^{-\alpha}, \tag{1}\] where \(\epsilon\) is the mean energy flux, \(\gamma\) is the surface tension, \(f\) is the frequency, \(\rho\) is the liquid density, and \(\alpha=17/6\) for capillary waves [7; 8]. In a log-log plot of the PSD, the power law produces a PSD that is linearly dependent upon \(-\alpha\). In this model, energy is said to 'cascade' from low to high frequency in the system with the predicted scaling when certain conditions are met: i) the domain is infinite, ii) there is sufficient scale separation between energy injection and energy dissipation, iii) the cascade is driven by weakly nonlinear three-wave interactions, and iv) the wave interactions are local (_i.e._, they occur between waves with similar wavelengths). Recent studies have attempted to reconcile discrepancies between the idealized conditions of WWT and the results of experiments where these idealizations are routinely violated. Connaughton _et al._[9] put forward a simple model that explains how quasi-resonances in finite domain capillary wave systems cause deviations from the predictions of WWT. At low levels of nonlinearity, discrete resonances are not broad enough to permit energy to traverse the eigengrid spacing imposed by the bounded domain, and so the energy cascade typical of WWT is stunted. 
Falcon _et al._[10] observed that the frequency scalings in a gravity-capillary wave turbulence system are dependent on the input power. Falcon and Laroche [11] observed that the depth of the liquid also has an impact on the capillary wave spectrum, causing it to deviate from a power law, but the mechanism remains unknown. Deike _et al._[12] measured the wave height spectrum with varying viscosity and showed in real systems that dissipation, indicated by a deviation in the linear spectral slope, occurs in the (theoretically energy conservative) inertial region, between energy injection and the dissipation zone. They propose a new way to measure energy flux in the system and use it to account for non-ideal dissipation. We have previously investigated the effect of increasing nonlinearity beyond what can be considered weak and also showed experimental evidence of finite domain effects [13]. In the current work, we investigate how energy traverses length scales in a capillary wave system with a data-driven approach, taking into account much smaller scales than have been previously studied. We are primarily interested in milli to micro-scale capillary waves because of their connection to ultrasonic atomization phenomena. Vibration of a surface in contact with liquid in the kHz or MHz range above a threshold amplitude produces many small droplets, on the order of microns, from the liquid surface [14; 15; 16]. In the kHz range, the size of the resulting droplets can be related to Faraday wave theory, where the driving frequency produces capillary waves at half the driving frequency. Lang [16] showed experimentally that using this Faraday wave assumption along with Kelvin's equation relating frequency to wavelength yields a good estimate of the median droplet diameter, \(D=\kappa((\beta\pi\gamma)/(\rho F^{2}))^{1/3}\), where \(\kappa=0.34\) is a fitting parameter, \(\gamma\) is surface tension, \(\rho\) is density, and \(F\) is driving frequency. This relationship holds for low atomization rates (\(\sim 0.01\) mL/s) and ultrasound frequencies between 10-800 kHz. Outside this frequency range, the value of \(\kappa\) has been changed post-hoc to \(\kappa=3.8\)[17] in an attempt to fit the equation to the results when the frequency was increased to 20 MHz. Post-hoc reasoning was used even for the original definition of \(\kappa=0.34\): a wave crest represents one half of a wavelength and therefore it makes sense that \(\kappa<0.5\). Lang photographically verified that the capillary waves appearing on the surface form a uniform lattice with wavelength that decreases with increasing ultrasound frequency. Subsequent studies have demonstrated that the droplet size also depends on viscosity, power input, and flow rate with more detailed empirical correlations [18; 19; 20]. Lang was not able to explain the measured variation in diameter about the median. Many of these classical conceptions break down in the context of MHz-order, high power ultrasonic atomization. A fundamental assumption of Faraday wave theory--that the excitation frequency is on the same order as the principal capillary wave frequency--fails at frequencies beyond the 100 kHz range. In these small sessile droplet systems, the first oscillations that appear are on the order of 100 Hz, three orders of magnitude lower than the excitation frequency. Moreover, one would expect to see a capillary wave response at one-half the excitation frequency for it to be a Faraday wave. 
It has been shown experimentally that no peak in the spectrum exists at or near one-half the MHz-order driving frequency [21]. In fact, a broadband wave spectrum, indicating WWT, develops on the surface at powers far below the threshold for atomization. In this context, Lang's simple approach of deducing droplets from wavelengths becomes untenable because there is no single frequency at which capillary waves occur--they no longer appear in a uniform lattice. The actual droplet size distributions measured from MHz-order ultrasound have two and sometimes three peaks and often disagree with Lang's equation [22; 23; 24; 25]. Recent work claims to have solved the problem of determining the droplet size distribution [24]; a Gamma function is fit to droplet size distributions demonstrating that they obey well studied corrugation and ligamentation processes found in sprays. This helps to explain the variation about the median that Lang could not explain. However, the Gamma function takes two parameters, the width of the ligament distribution and the ligament corrugation, so that the method has no predictive power but simply exchanges one unknown, the distribution of capillary waves, for another. The authors suggest that, because they measure two droplet size peaks, there must be a bi-modal distribution of capillary wavelengths on the surface of the liquid. This makes intuitive sense based on a paradigm of Faraday waves and, without measuring waves directly, it is difficult to validate this suggestion. The difficulty associated with gaining direct knowledge of the waves in this context originates in the time and space scales of the waves, which are much smaller and faster than most modern experimental equipment can reliably observe. In the current work, we study capillary waves experimentally at micro-length and time scales. Our system is a millimeter length, microliter volume of water that completely insulates capillary wave phenomena from the effects of gravity while also clearly expressing finite domain effects. We drive the system using high frequency (\(\sim\)7 MHz), high power (\(\sim\)500 mW) ultrasound that pushes the wave interactions up to and beyond the weak nonlinearity assumption. We measure the surface displacement field at 115 kfps, representing a time period of 8.7 \(\mu\)s, using a bespoke high-speed digital holographic microscope (DHM) capable of producing full field of view measurements of the fluid interface's displacement at up to 120 kfps [26]. We first apply standard, single-point time series analysis to this data to obtain standard amplitude spectra. We then analyze the broader spatial data using techniques that complement and improve upon the results of single-point time series analysis. Berhanu and Falcon [27] were the first to use this type of data although they were limited to larger length and time scales. They showed that linear and nonlinear dispersion relations extracted from their data matched WWT theories using spatial and temporal Fourier methods. Another commonly used spatial technique is modal analysis using eigenfunction decomposition. The aim is to reveal coherent structures that may not be best represented by sinusoids or wavelets [28]. Proper orthogonal decompositions (POD) produce independent spatial modes optimized by energy with no requirement set on time behavior and have been used extensively in turbulence studies [29; 30]. We will show that standard modal analyses do not reveal coherent structures in our system.
However, they successfully quantify energy shifts across length scales. It is well known that isotropic turbulent systems have POD modes that reduce to a Fourier basis [28; 31]. The POD modes of our system are indeed closely related to 2D sinusoidal gratings. This detail allows us to attach a well-defined length scale to each mode. Using this approach, we track how energy flows across length scales as a function of power input. ## II Experimental setup Our system consists of a sessile drop of water on a thickness-mode \(25\times 25\) mm piezoelectric resonator (_see_ Fig. 1). The resonator has a 6-mm diameter circular window in the electrodes that allows light to pass through. The electrodes were formed by sputter depositing a 10-nm titanium bonding layer followed by 500 nm of gold on both sides of a transparent, double-side optically polished (\(<0.1\) nm roughness), 0.5-mm thick, 127.68\({}^{\circ}\) Y-rotated, X-propagating lithium niobate (LN) wafer. The 36\({}^{\circ}\) Y-rotated cut is often used in thickness mode devices, but we have found the 127.68\({}^{\circ}\) cut to be superior for operation at resonance [15]. A photoresist mask was used on both sides to coat the circular window region with AZ photoresist polymer. After titanium and gold deposition, the photoresist was dissolved, lifting off the metal films and leaving the transparent LN exposed [32]. This procedure was repeated on the second side to produce a thickness-mode resonator with a fundamental resonance at 7 MHz. A 200 \(\mu\)m thick with a 9.5 mm inner diameter and a 12.7 mm outer diameter circular polyimide tape (Polymide Film Electrical Tape 1205, 3M, Maplewood, MN USA) ring was cut and attached to the resonator around the window area in order to repeatably constrain the sessile droplet shape and location during experiments. We used a pipette to place 66 \(\mu\)L of distilled (ultrapure) water on the LN within the ring so that the drop was pinned around the entire perimeter. To reduce the effects of residual surfactants and contaminants--because these effects cannot be eliminated--the surface was carefully rinsed thrice before conducting each experiment with the ultrapure water. The fluid volume was chosen so that the surface of the drop would be relatively flat compared with the field of view while not deviating significantly from the inherent contact angle of water on LN. All this was sought to ensure the contact line with the ring remains pinned. This geometry also limits the system to deep water capillary waves for the frequencies and wavelengths in this study, so that we can avoid having to consider four-wave interactions in gravity capillary waves as gravity waves and the lateral fluid dynamics from shallow capillary waves [1]. In fact, very few experiments solely consider capillary waves, as traditional methods for observing such waves require the reduction of gravity through spaceflight, parabolic aircraft flights, or special low-gravity experimental settings [2]. Reducing the apparent gravity, \(g\), reduces the critical capillary frequency \(f_{c}\propto g^{(3/4)}\), above which waves are dominated by surface tension. We instead limit the waves that can be generated by reducing the size of the sessile droplet "container", thereby preventing most gravity waves and allowing us to focus on high frequency waves. 
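As a quick numerical check of this argument (a sketch with nominal values for clean water; none of these numbers appear explicitly in the text), the deep-water gravity-capillary dispersion relation \(\omega^{2}=gk+(\gamma/\rho)k^{3}\) places the crossover between gravity- and capillarity-dominated waves near a 17 mm wavelength and 14 Hz:

```python
import numpy as np

# Nominal properties of clean water (assumed values)
gamma = 0.072   # surface tension [N/m]
rho = 998.0     # density [kg/m^3]
g = 9.81        # gravitational acceleration [m/s^2]

k_c = np.sqrt(rho * g / gamma)                        # wavenumber where g*k = (gamma/rho)*k^3
lambda_c = 2 * np.pi / k_c                            # crossover wavelength, ~17 mm
omega_c = np.sqrt(g * k_c + (gamma / rho) * k_c**3)
f_c = omega_c / (2 * np.pi)                           # crossover frequency, ~14 Hz; scales as g^(3/4)

print(f"lambda_c = {lambda_c * 1e3:.1f} mm, f_c = {f_c:.1f} Hz")
```

Because the ring confining the drop is smaller than this crossover wavelength and the observed resonances sit well above this crossover frequency, the waves measured here are firmly in the capillary regime.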
The resonator was driven directly with a signal generator (WF1946, NF Corporation, Yokohama, Japan), except for the highest power data set which used a linear 10 W amplifier (210L, E&I, Rochester, NY USA). The signal passed to the resonator was a continuous, fixed amplitude, single frequency sine wave at the fundamental resonance frequency of the thickness mode device. The resonator was placed upon a 1 mm thick, \(40\times 40\) mm steel plate with a 10 mm diameter hole aligned with the transparent window in the LN resonator. One spring probe was placed in contact with the top electrode, while another was placed in contact with the steel plate, forming an electrical contact with the bottom electrode. The current and voltage were measured with probes connected to an oscilloscope and their product was averaged over 1 million cycles to determine the true power entering the resonator. The signal was applied to the resonator \(0.5\) seconds after beginning measurement with the DHM and maintained until after the measurement was completed at six seconds of elapsed time. Experiments were performed over a range of power inputs with all other conditions controlled. We recorded eighteen data sets between 0-250 mW. Figure 1: A schematic of our experimental setup incorporating (a) high-speed digital holographic microscopy (DHM) with a (b) sessile droplet placed upon a single-crystal lithium niobate thickness-mode ultrasound resonator. The droplet is placed directly over a transparent ”window” in the lithium niobate substrate with a gold electrode on both sides everywhere else. An expanded laser propagates from below through the droplet and into the DHM to produce a holographic image captured by the high-speed camera to then produce a (c) height map over the measurement region (indicated in (b) as Measurement). For scale in (b), the lithium niobate resonator is \(25\times 25\) mm in size and is 0.5 mm thick. The zero power data set was recorded for only two seconds. We measured the surface displacement in time using a customized digital transmission holographic microscope (DHM, LynceeTec, Lausanne, Switzerland) coupled to an ultra high speed camera (NOVA S12, Photron, San Diego, CA USA). In this approach, laser light, with a wavelength of 660 nm, is split into a sample and a reference beam. The sample beam passes through the window in the resonator and the sessile drop and is then combined with the reference beam to form a phase image and an intensity image at the camera. The phase delay caused by passing through varying distances of water produces a two-dimensional map of the change in the height of the water surface. The measurable change is up to one wavelength of light without post-processing. However, if the height change is sufficiently smooth and gradual, the phase jumps caused by height changes greater than the wavelength of light can be unwrapped to still produce accurate measurements of the fluid interface's deflection. The height resolution of the system is on the order of 10 nm. We refer the reader to Cuche _et al._[33] for details on the DHM technology. We used a 10X objective lens and recorded at 115,200 frames per second with a \(200\times 200\) pixel field of view, producing a pixel size at the fluid interface of 1.625 \(\mu\)m. Thus, we expected to observe behavior at frequencies up to 60 kHz and with reasonable amplitude accuracy up to approximately 30 kHz according to the Nyquist-Shannon sampling theorem [34]. 
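The conversion from the wrapped phase images to height maps described above can be sketched as follows; this is only the textbook transmission relation with an assumed water-air refractive index contrast at 660 nm, not the calibration performed inside the commercial DHM software.

```python
import numpy as np
from skimage.restoration import unwrap_phase

wavelength = 660e-9      # laser wavelength [m]
delta_n = 1.33 - 1.00    # assumed refractive index contrast between water and air

def phase_to_height(wrapped_phase):
    """Convert a 2D wrapped phase image [rad] into a surface height change map [m].
    Unwrapping is only reliable when the height varies smoothly between pixels."""
    phi = unwrap_phase(wrapped_phase)                 # remove the 2*pi phase jumps
    return phi * wavelength / (2 * np.pi * delta_n)
```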
## III Time-series analysis In order to obtain the power spectrum, we extracted the data describing the transverse displacement of the central pixel and performed Welch's method [35] with blocks of \(2^{15}\) time steps (_see_ Fig. 2 and Appendix A) to minimize noise. Most studies of wave turbulence inject energy at a range of frequencies lower than the expected dissipation range in order to ensure nonlinear wave interactions and the observation of a turbulent cascade [10; 36]. When these systems are driven parametrically, at a single frequency, it is common to observe Faraday waves indicated by a dominant peak at one half the driving frequency and well-ordered patterns in space at low power [2]. With sufficient power, however, Faraday wave systems do become turbulent [37]. Our system is fundamentally different, because we drive the system at a frequency five orders of magnitude higher than the frequency of the capillary wave cascade's starting frequency. It is important to note here that the driving signal of 7 MHz cannot be observed in Fig. 2 because the frequency is beyond our high-speed DHM measurement range. However, Zhang _et al._[38] have proposed a mechanism by which large frequency excitation can stimulate much lower frequency wave behavior in a similar system, supported by past data captured by Blamey _et al._[21] where the spectral response at a single point _was_ measured to 25 MHz. The mechanism occurs by the generation of an acoustic standing wave in the parent droplet as a cavity. This produces a spatially varying acoustic pressure upon the fluid interface that causes its deformation. If at low amplitude, the deformation is static [39], however, it quickly becomes dynamic beyond an excitation threshold dependent on the specific experimental setup. As the amplitude of the interface's deformation grows larger, the standing acoustic wave in the fluid droplet changes shape, leading to a change in the acoustic pressure upon the interface, changing its shape, and so on to produce capillary wave motion at spatiotemporal scales associated with the 100 Hz range capillary wave resonances in this system. Those capillary wave resonances interact via a nonlinear three-wave mechanism to produce the cascade [1]. As has been seen in numerical simulations by Pushkarev and Zakharov, the finite domain in our system fails to produce a cascade at low power inputs where the nonlinearity is very weak [40]. Based on the model from Connaughton _et al._[9], there are multiple thresholds of resonance broadening and therefore of nonlinearity, all driven by power input. At each of these thresholds, the cascade increases in length towards larger wavenumbers, consistent with experimental observations of the cascade growing to higher frequencies with increases in input power [21]. It seems clear from their work that the specific thresholds for a given system depends on the spacing of the quasi-resonances and the relationship between forcing and broadening. The amplitude spectra in Fig. 2 clearly show a consistent peak at around 30 Hz with harmonics except for the 0 and 5 mW results. The 0 mW data indicates the noise floor of our measurement system, a combination of shot noise in the measurement system and thermal excitation of the fluid interface. At 3.5, 5, and 7 mW, our system is highly intermittent, which explains why the 5 mW line is not separated from the 0 mW line. Up to 11.5 mW the capillary wave energy remains confined to frequencies below about 100 Hz. 
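The spectra of Fig. 2, and the slopes discussed below, can be reproduced from the recorded height maps with a short calculation; the sketch assumes the frames are held in a NumPy array of shape (n_frames, 200, 200) in metres, and the file name, variable names, and band edges (taken from Fig. 3) are otherwise assumptions.

```python
import numpy as np
from scipy.signal import welch

frames = np.load("height_maps.npy")      # assumed file: (n_frames, 200, 200) height maps [m]
fs = 115_200                             # camera frame rate [frames/s]
h_center = frames[:, 100, 100]           # displacement time series of the central pixel

f, psd = welch(h_center, fs=fs, nperseg=2**15)   # Welch PSD with blocks of 2^15 samples

def spectral_slope(f, psd, f_lo, f_hi):
    """Least-squares slope of log10(PSD) versus log10(f) over one frequency band,
    i.e. the exponent alpha to compare with the ZK prediction of -17/6."""
    band = (f >= f_lo) & (f <= f_hi)
    return np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)[0]

alpha_low = spectral_slope(f, psd, 500, 1_000)       # low-frequency band of Fig. 3
alpha_high = spectral_slope(f, psd, 4_000, 10_000)   # high-frequency band of Fig. 3
```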
As the power is increased to 14.5 mW there is a broadening of the harmonics and the beginning of a broadband response that extends the cascade to 200 Hz. At 16 mW the cascade abruptly extends to about 10 kHz, and continues to extend upward in frequency beyond our DHM's measurement range from 31 mW and up. These characteristics seem to support Connaughton's model of finite domain, "frozen" turbulence [9]. The predicted value of \(\alpha=-17/6\) associated with the Zakharov-Kolmogorov (ZK) capillary wave cascade occurs only in a specific range of power inputs in a specific frequency range, 0.1-1 kHz at 15-35 mW (_see_ Fig. 3). The initial jump to large \(\alpha\) above the slope of the noise floor at 0 mW in Fig. 2 is a direct result of a finite domain. After the initial cascade extension is complete at around 20 mW, we observe the appearance of a corner frequency separating two regions of constant \(\alpha\) at each power input. Below this frequency, the slope \(\alpha\) is shallower than the predicted value, and above this frequency it is steeper. This spectral response is the opposite of what is typically observed in systems that contain both gravity and capillary forces driving the waves [10]. This would seem to indicate a transition to a stronger dissipation mechanism at the frequency where the shoulder occurs, 1 kHz at 23 mW and monotonically increasing to 4 kHz at 246 mW. Conversely, the low frequency range shifts from \(\alpha=-17/6\) just after cascade completion in keeping with WWT towards smaller values of \(\alpha\) as power increases. This may indicate a transition from weakly to strongly nonlinear wave interactions consistent with observations of this transition in related experiments [13]. We also observe frequency broadening of the low frequency modes as the power is increased from 16 to 108 mW, in concert with the growth of the high frequency portion of the cascade. The broadening of peaks with increased input power again supports Connaughton's finite domain nonlinear resonance broadening model. Increased nonlinear interactions among discrete resonances allow energy to move down the cascade more easily and may explain faster growth at high frequency. Notably, we do not see any distinct resonance peaks above 100 Hz at the highest power. This power is well below the threshold of atomization onset for this system, so the presence of capillary waves at a certain frequency seems not to foreshadow the peaks of a droplet size distribution in this system as was suggested by Kooij _et al._[24]. To further explore this issue, we then sought to explicitly modulate the input signal at 5 kHz in an attempt to drive the appearance of a dominant capillary wave at this modulation frequency. At the relatively low power of 53 mW with the sinusoidal modulation of the input frequency, the modulation frequency does appear in the capillary wave with a corresponding strong response peak in Fig. 4(a), along with three harmonics up to about 20 kHz. As we increase the power to 100 mW, still well below the atomization threshold, the 5 kHz amplitude peak is still strong but broader while the harmonics are much less prominent. Increasing the power to 155 mW further broadens the 5 kHz peak with reduction in its maximum amplitude of about 50%; the harmonics are completely absent.
Increasing the power beyond the measurement capabilities of the DHM to drive atomization at 500 mW, we found that there is no observable difference between the droplet size distribution for the atomized droplets whether or not the 5 kHz modulation is present, as determined using laser diffraction droplet sizing (Spraytec, Malvern Panalytical, Malvern UK) and plotted in Fig. 4(b). ## IV Spatial mode analysis Looking beyond the information that can be gleaned from observation of a single point on the fluid interface over time, we next consider the entire field of view and seek to determine how the fluid interface evolves over time _and_ space as the input power is changed. We use the machinery of proper orthogonal decomposition to support this effort [29; 30]. Figure 3: The frequency exponent of the wave height spectra, plotted here as average values determined from the slope of the spectra over the ranges 500–1000 Hz and 4–10 kHz, rarely coincides with the ZK prediction of \(\alpha=-17/6\). The low frequency component grows rapidly to more than \(-5\) at low powers, a consequence of the appearance of the capillary wave, but shallows to \(-2\) as the power increases beyond 50 mW. The change in the high frequency component from \(-2\) to \(-4\) occurs at the same time the low frequency component shallows from \(-5\) to \(-2\). As the input power increases from 100 mW, the high frequency component decreases through the ZK prediction to approach a value of \(-2\). Figure 2: Power spectral density of the interfacial displacement plotted with respect to the frequency for a single point on the fluid interface at 7 MHz excitation. Note the red line indicates no input power, representing the noise floor with our measurement method. As the input power is increased, the appearance of capillary wave oscillations is apparent at about 5–7 mW, and progressive expansion of the cascade to higher frequencies is clear as the power is increased from 14.5 mW to 225 mW. The modal character of the capillary waves is lost from apparent frequency broadening at about 61 mW. The observed spectral slopes are generally lower than the theoretical value of \(-17/6\). Colors indicate power input values where there were notable changes in the spectra. When computing the POD modes and singular values, we selected \(2^{13}\) frames from each data set. We transformed each frame into a 40,000 element column vector and collected all the frames together into a \(40,000\times 8,192\) matrix, \(F\). The mean frame was subtracted from each frame to produce a matrix, \(X\). We then performed a singular value decomposition (SVD) upon \(X=USV^{T}\), which produced a \(40,000\times 8,192\) matrix \(U\), a \(8,192\times 8,192\) matrix \(S\), and a third matrix \(V\); some details are provided in Appendix B. The matrix \(U\) contains 8,192 modes, the diagonal matrix \(S\) contains the singular values corresponding to those modes, and \(V\) represents the corresponding eigenvectors. By virtue of SVD the singular values are automatically sorted in descending order and are equal to the square root of the eigenvalues, \(\lambda\), of the classical eigenvalue problem: \(R\phi_{\text{l}}=\lambda_{\text{l}}\phi_{\text{l}}\), where \(R\) is the auto-covariance matrix and \(\phi\) are the modes [28]. When the measured data is velocity, the eigenvalues are proportional to the kinetic energy. In our case where the measurement field is displacement, the singular values are therefore proportional to the amplitude scaled by the number of time steps. 
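A minimal sketch of this decomposition follows, assuming the selected \(2^{13}\) frames are available as a NumPy array and that the file name below is hypothetical; for the full 40,000 by 8,192 problem the dense SVD shown here is memory-hungry, and a truncated or randomized SVD is a practical alternative.

```python
import numpy as np

frames = np.load("height_maps_selected.npy")      # assumed file: (8192, 200, 200) array

F_mat = frames.reshape(frames.shape[0], -1).T     # 40,000 x 8,192 data matrix F
X = F_mat - F_mat.mean(axis=1, keepdims=True)     # subtract the mean frame

U, S, Vt = np.linalg.svd(X, full_matrices=False)  # X = U S V^T (economy-size SVD)

modes = U[:, :475].T.reshape(-1, 200, 200)        # retained spatial modes as 200 x 200 images
                                                  # (modes beyond ~475 contribute negligibly, see below)
amplitudes = S[:475] / X.shape[1]                 # singular values scaled by the number of frames
```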
This means that the most important spatial modes in terms of describing the amplitude of the water surface are the first columns of \(U\): they represent the largest displacement components. The modes greater than 475 in each data set were discarded based on the appearance of a prominent corner in the singular value distribution at or below this mode number (_see_ Fig. 5), suggesting the modes above 475 play little to no role in defining the displacement of the fluid interface. For example, the first 64 POD modes and modes 201-264 are plotted in Fig. 5. The characteristic length scale clearly decreases with increasing mode number. For the wavelength range where the field of view offers sufficient resolution (\(<200\,\mu\)m), it is evident from 2D FFT (see Supplementary Information) that only one length scale is present in each mode, and even for lower modes it is reasonable to assume by inspection that each mode is roughly composed of sinusoids of a single wavelength. In order to quantify the wavelength of the modes, we utilized a custom algorithm based on two-dimensional fast Fourier transforms (2DFFT). For each mode we reshaped its column vector into a \(200\times 200\) image and performed 2DFFT, which produced another \(200\times 200\) image. Each point in this new image is associated with a wavelength and a pixel value associated with the strength of that wavelength. For small wavelengths it is sufficient to define the wavelength as the pixel distance to the pixel with the largest value. However, for larger wavelengths, poor resolution in the wavelength space means we must take an average of some number of the pixels. Based on trial and error, we chose to average eight pixels with the largest values. Furthermore, in order to eliminate potential bias towards modes that are aligned with the square camera window we performed this procedure on 45 different rotations of the mode image and took the average of the wavelength values over these rotations. In this way, we were able to obtain the relationship between wavelength and mode number in Fig. 6 for the capillary waves in this system at each input power. This allows us to plot the singular values for each mode, which are equal to wave amplitudes as we described earlier, against their respective length scale in Fig. 7. Energy is clearly restricted to length scales greater than 500 \(\mu\)m until the power reaches 7 mW. There is also a clear transition between 98 and 108 mW where energy shifts towards much smaller length scales. This second transition was not apparent from Fourier analysis alone (_see_ Fig. 2). Increasing power in this range abruptly extends the cascade, as we could not see in Fig. 2 due to frequency resolution, similar to the cascade extension between 7 and 16 mW that _is_ observable in Fig. 2. Finally, notice that while the wave amplitude at scales greater than approximately 200 \(\mu\)m grow monotonically with an increase in input power, the wave amplitude at smaller length scales grows much more rapidly as the input power increases from 61 mW to 225 mW. Figure 4: The effect of using 5 kHz modulation of the 7 MHz input signal upon the (a) capillary wave spectrum. At low amplitudes, the effect of the modulation is significant: compare 73 mW without modulation and 53 mW with modulation, and note how the response is significantly perturbed at 5 kHz. At greater amplitudes, the nonlinear resonance broadening appears to _reduce_ the amplitude of the 5 kHz peak: compare 155 mW to the 100 mW data, both with the 5 kHz modulation. 
These input powers are well below what is required for atomization, and there is (b) no significant difference between the droplet sizes that are generated from atomization at 500 mW through the use of signal modulation at 5 kHz. In order to understand if modes of the same number between data sets are related, we took the covariance between the \(U\) matrix for each data set and that of a compiled data set, \(U_{c}\). The compiled data set, comprised of 1024 frames, was constructed from frames taken randomly from each individual data set, upon which the same POD procedure was used to obtain \(U_{c}\). The covariance reveals how similar each \(U\) matrix is to a common basis, \(U_{c}\). Figure 8 shows the covariance of \(U_{61}\) for the 61 mW case with \(U_{c}\). It also highlights cross-sections (vertical lines) that appear in Fig. 8 where they reveal components of the given mode in terms of the basis modes (_i.e.,_ the dot product). For a given mode in an individual data set, the distribution of modal components tends to be centered near the corresponding mode in the basis. We can quantify the deviation of this distribution from the corresponding basis mode using the first and second moments, analogous to the expected value and variance, respectively. The first moment (\(\mu_{1}\)) is given by \[\mu_{1}=\frac{1}{\Delta}\sum_{j=1}^{n}jD(j), \tag{2}\] where \(D(j)=U_{ci}\cdot U_{ij}\) is the distribution of the \(i^{\text{th}}\) components of the \(j^{\text{th}}\) mode of \(U_{ij}\) across the \(i\) modes of \(U_{c}\) (e.g., the cross-section); and \(\Delta=\sum D(j)\) is the sum of these components over \(j\). The deviation in the expected value is then the difference between \(\mu_{1}\) and the corresponding basis mode number. The second moment, variance \(\mu_{2}\), is given by \[\mu_{2}=\sum_{j=1}^{n}D(j)(\mu_{1}-j)^{2}. \tag{3}\] The variance is a convenient way to show that the modes identified by POD are essentially common between different data sets, not merely their length scale. They also quantify how the modes themselves change with power input--setting aside the singular values, which rank the amplitudes of the modes within a data set at a single power. Figure 5: An example of the results from principal component analysis, showing (a) the first 64 modes and (b) the 201st to 264th modes for the interfacial displacement from the 16 mW data set. The distribution of the singular value amplitudes in terms of the fraction of the total energy each mode possesses is (c) plotted with respect to the mode number; in this case the vast majority of the response is in the first \(\sim\) 40 modes and modes beyond about 475 contribute negligibly to the response. Figure 6: The computed wavelength for each mode found via 2DFFT of the principal component analysis results. The largest measurable modes have a wavelength twice the field of view. Note how the distribution of wavelengths does not significantly change as the input power is increased from 0 to 460 mW. Figure 7: Singular values scaled by the number of frames from POD plotted against the wavelength obtained by performing a 2DFFT algorithm on each mode. This is the same plot as Fig. 5c), but with the y-axis scaled to be amplitude and with the x-axis transformed from mode to wavelength. We plot the deviation of the first and second moments for each mode of every data set in Fig. 9. The first three lines (in red) have a similar shape, but then suddenly the 7 mW line is different, which aligns with the sudden changes in both Fig. 2 and Fig. 7.
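For concreteness, Eqs. (2)-(3) can be evaluated as sketched below, assuming the basis modes \(U_{c}\) and the modes of one power level are stored column-wise; whether the projections are squared or taken in absolute value before forming the distribution is not stated above, so absolute values are used here as an assumption.

```python
import numpy as np

def mode_moments(U_c, U_p, a):
    """Moments of the distribution of mode `a` of U_p over the basis modes in U_c.
    Both arrays hold one spatial mode per column on the same pixel grid."""
    D = np.abs(U_c.T @ U_p[:, a])        # cross-section of the covariance (Fig. 8)
    j = np.arange(D.size)
    mu1 = (j * D).sum() / D.sum()        # Eq. (2): expected basis mode number
    mu2 = (D * (mu1 - j) ** 2).sum()     # Eq. (3): spread of the distribution about mu1
    return mu1 - a, mu2                  # deviation from mode a, and the variance
```

The curves in Fig. 9 correspond to these two quantities evaluated for every mode and every power level.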
The way this shape changes indicates that the modes around the 20th mode are interacting with a much larger number of adjacent modes than they were at only a slightly lower power. The fact that the cross-sections in Fig. 8 are distributed generally around their corresponding basis mode is an expression of local wave interaction. When the power crosses some threshold near 6 mW, the modes near 20 suddenly become less localized and interact with modes possessing wavelength scales of greater difference. As power increases from 7 to 42 mW the moment lines gradually move towards the zero power line, indicating that the modes interact more locally and they are more closely described by the corresponding basis modes. Once again, however, at 50 mW the modes beyond mode 45 become de-localized, but again move towards the zero power lines at slightly larger powers. This process repeats from 86 mW for modes near the 50th mode. The final de-localization apparent in the data is centered closer to the 140th mode, indicating even smaller length scales. This de-localization coincides with the transition identified in Fig. 7. Figure 8: To determine the broadening of the capillary wave modes, we plot the (a) covariance of the 61-mW input power case with respect to the original mode numbers. If there were no broadening of the responses, this would be a diagonal line from upper left to lower right. Instead, it spreads laterally but remains locally distributed as nearly diagonal, indicating that broadening does indeed occur but there are no apparent sub- or super-harmonic parametric resonant cascades across significantly different wavelength scales. The (a) colored vertical lines in the plot are next (b) plotted showing the amplitude of the product of the chosen mode with the basis mode. Figure 9: (a) First and (b) second moments versus basis mode number as a proxy for the wavelength. Recall that the modes are sequentially ordered from largest wavelength to smallest. ## V Conclusions Fourier analysis, often used in wave turbulence problems, identifies the relationship between wave height and frequency for micro-scale capillary waves in a small sessile droplet. The effects of the finite sized droplet produce frozen turbulence and low frequency (relative to the high-speed DHM measurement capabilities) repeated and abrupt cascade extension at specific input powers. Immediately upon cascade completion we see a small region of capillary wave dynamics that correspond to the classic ZK theory of WWT where \(\alpha=-17/6\). At greater powers, there is a corner frequency separating \(\alpha\) into two frequency domains, a relatively shallow-sloped region at lower frequencies and a steeper-sloped region at higher frequencies. These observations suggest an increasingly strong nonlinear wave interaction at low frequencies and increasing dissipation at larger frequencies. Spatial analysis with POD provides us with a direct link between amplitude and wavelength. Energy flowing towards smaller length scales exhibits a nonlinear dependence upon the input power, with discrete thresholds repeatedly appearing as the power is increased. These shifts are analogous to cascade extension, most of which are observable using standard Fourier analysis. However, spatial analysis allows us to identify a small scale, high frequency cascade extension that was not apparent with Fourier analysis. The covariance between each data set and the basis modes provides us with information about how coherent modes change with power input.
We are able to identify delocalization events where waves interact with wavelengths of greater difference as the power is increased. Uniform Faraday waves are a poor model for capillary wave dynamics in this system, and therefore are unlikely to be appropriate for predicting atomization, which requires greater power. Resonant peaks in the capillary wave response are quickly broadened by the nonlinear coupling, even when the input signal is modulated to force the resonance into existence. This occurs at input power levels well below the atomization threshold, and it remains unclear how to predict which wavelengths will be preferentially amplified--if any--in order to form droplets at a larger power input. A theory of wave turbulence beyond the weak and infinite regimes dictated by WWT is therefore needed to predict which waves will produce droplets in high frequency ultrasound atomization systems. ## VI Acknowledgements We are grateful to the Office of Naval Research, United States (grants 13423461 and 12368098) and the W.M. Keck Foundation, United States for funding provided to J. Friend in support of this work. J. Orosco is grateful for support provided by the University of California's Presidential Postdoctoral Fellowship program. We are also grateful to Yves Emery and team at LynceeTec for assistance with adapting the DHM to this project's needs. Fabrication was performed in part at the San Diego Nanotechnology Infrastructure (SDNI) of UCSD, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (Grant ECCS-1542148).
2305.12239
Off-Policy Average Reward Actor-Critic with Deterministic Policy Search
The average reward criterion is relatively less studied as most existing works in the Reinforcement Learning literature consider the discounted reward criterion. There are few recent works that present on-policy average reward actor-critic algorithms, but average reward off-policy actor-critic is relatively less explored. In this work, we present both on-policy and off-policy deterministic policy gradient theorems for the average reward performance criterion. Using these theorems, we also present an Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) Algorithm. We first show asymptotic convergence analysis using the ODE-based method. Subsequently, we provide a finite time analysis of the resulting stochastic approximation scheme with linear function approximator and obtain an $\epsilon$-optimal stationary policy with a sample complexity of $\Omega(\epsilon^{-2.5})$. We compare the average reward performance of our proposed ARO-DDPG algorithm and observe better empirical performance compared to state-of-the-art on-policy average reward actor-critic algorithms over MuJoCo-based environments.
Naman Saxena, Subhojyoti Khastigir, Shishir Kolathaya, Shalabh Bhatnagar
2023-05-20T17:13:06Z
http://arxiv.org/abs/2305.12239v2
# Off-Policy Average Reward Actor-Critic with Deterministic Policy Search ###### Abstract The average reward criterion is relatively less studied as most existing works in the Reinforcement Learning literature consider the discounted reward criterion. There are a few recent works that present on-policy average reward actor-critic algorithms, but average reward off-policy actor-critic is relatively less explored. In this work, we present both on-policy and off-policy deterministic policy gradient theorems for the average reward performance criterion. Using these theorems, we also present an Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) Algorithm. We first show asymptotic convergence analysis using the ODE-based method. Subsequently, we provide a finite time analysis of the resulting stochastic approximation scheme with linear function approximator and obtain an \(\epsilon\)-optimal stationary policy with a sample complexity of \(\Omega(\epsilon^{-2.5})\). We compare the average reward performance of our proposed ARO-DDPG algorithm and observe better empirical performance compared to state-of-the-art on-policy average reward actor-critic algorithms over MuJoCo-based environments.
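Since only the abstract of this work is reproduced above, the snippet below is not ARO-DDPG itself but a generic sketch of the kind of update the abstract refers to: a DDPG-style off-policy actor-critic in which the discounted bootstrap target is replaced by a differential (average-reward) target, with the average reward tracked by a separate estimate. All names, the running-average update for the gain, and the network details are assumptions.

```python
import torch
import torch.nn.functional as F

def differential_ddpg_step(batch, actor, critic, actor_targ, critic_targ,
                           rho, actor_opt, critic_opt, rho_lr=1e-3):
    """One assumed update on a sampled batch of off-policy transitions (s, a, r, s')."""
    s, a, r, s2 = batch

    with torch.no_grad():
        # differential TD target: r - rho + Q(s', mu(s')) replaces the discounted target
        y = r - rho + critic_targ(s2, actor_targ(s2))

    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # deterministic policy gradient: ascend Q(s, mu(s)) in the actor parameters
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # crude running estimate of the average reward; target networks are assumed to be
    # updated elsewhere, e.g. by Polyak averaging
    rho = rho + rho_lr * (r.mean().item() - rho)
    return rho
```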
In the average reward setting, the differential value function is defined only up to an additive constant term. Further, the policy evaluation step in this case consists of finding not just the differential Q-value function but also the average reward. Thus, because of the required estimation of two quantities instead of one, the role of the optimization algorithm and the target network increases here. Therefore, we implement the proposed ARO-DDPG algorithm using a target network and by carefully selecting the optimization algorithm. The following are the broad contributions of our paper:

* We provide both on-policy and off-policy deterministic policy gradient theorems for the average reward performance metric.
* We present our Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) algorithm.
* We show a comparison of our algorithm on several environments with other state-of-the-art average reward algorithms in the literature.
* We perform asymptotic convergence analysis using the ODE-based method and also provide a finite time analysis of our three-timescale stochastic approximation based actor-critic algorithm using a linear function approximator.

Silver et al. (2014), Lillicrap et al. (2016) and Xiong et al. (2022) each address one aspect of the discounted reward performance criterion for deterministic policies, namely the policy gradient theorem, the implementation of a practical algorithm, and convergence analysis, respectively. In this paper we provide a comprehensive treatment of the average reward performance criterion for deterministic policies, covering the policy gradient theorem, the implementation of a practical algorithm, and convergence analysis. The rest of the paper is structured as follows: In Section 2, we present the preliminaries on the MDP framework, the basic setting as well as the policy gradient algorithm. Section 3 presents the deterministic policy gradient theorem and our proposed ARO-DDPG algorithm. Section 4 then presents the main theoretical results related to the convergence analysis. Section 5 presents the experimental results.
In Section 6, we discuss other related work and Section 7 presents the conclusions. The detailed proofs for the convergence analysis are available in the Appendix. ## 2 Preliminaries Consider a Markov Decision Process (MDP) \(M=\{S,A,R,P,\pi\}\) where \(S\subset\mathbb{R}^{n}\) is the (continuous) state space, \(A\subset\mathbb{R}^{m}\) is the (continuous) action space, \(R:S\times A\mapsto\mathbb{R}\) denotes the reward function with \(R(s,a)\) being the reward obtained under state \(s\) and action \(a\). Further, \(P(\cdot|s,a)\) denotes the state transition function defined as \(P:S\times A\mapsto\mu(\cdot)\), where \(\mu:\mathcal{B}(S)\mapsto[0,1]\) is a probability measure. Deterministic policy \(\pi\) is defined as \(\pi:S\mapsto A\). In the above, \(\mathcal{B}(S)\) represents the Borel sigma algebra on \(S\). Stochastic policy \(\pi_{r}\) is defined as \(\pi_{r}:S\mapsto\mu^{\prime}(\cdot)\), where \(\mu^{\prime}:\mathcal{B}(A)\mapsto[0,1]\) and \(\mathcal{B}(A)\) is the Borel sigma algebra on \(A\). **Assumption 2.1**.: The Markov process obtained under any policy \(\pi\) is ergodic. Assumption 2.1 is necessary to ensure existence of a unique steady state distribution of the Markov process. ### Discounted Reward MDPs In discounted reward MDPs, discounting is controlled by \(\gamma\in(0,1)\). The following performance metric is optimized with respect to the policy: \[\eta(\pi)=\mathbb{E}^{\pi}[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})]=\int_ {S}\rho_{0}(s)V^{\pi}(s)\,ds. \tag{1}\] Here, \(\rho_{0}\) is the initial state distribution and \(V^{\pi}\) is the value function. \(V^{\pi}(s)\) denotes the long term discounted reward acquired when starting in the state \(s\). \[V^{\pi}(s_{t})=\mathbb{E}^{\pi}\Big{[}R(s_{t},a_{t})+\gamma V^{\pi}(s_{t+1})| s_{t}\Big{]}. \tag{2}\] ### Average reward MDPs The performance metric in the case of average reward MDPs is the long-run average reward \(\rho(\pi)\) defined as follows: \[\rho(\pi)=\lim_{N\to\infty}\frac{1}{N}\mathbb{E}^{\pi}\Big{[}\sum_{t=0}^{N-1} R(s_{t},a_{t})\Big{]}=\int_{S}d^{\pi}(s)R^{\pi}(s)\,ds, \tag{3}\] where \(R^{\pi}(s)\stackrel{{\triangle}}{{=}}R(s,\pi(s))\). The limit in the first equality in (3) exists because of Assumption 2.1. The quantity \(d^{\pi}(s)\) in the second equality in (3) corresponds to the steady state probability of the Markov process being in state \(s\in S\) and it exists and is unique given \(\pi\) from Assumption 2.1 as well. \(V^{\pi}_{diff}\) is the differential value function corresponding to the policy \(\pi\) and is defined in (4). Further, the differential Q-value or action-value function \(Q^{\pi}_{diff}\) is defined in (5). \[V^{\pi}_{diff}(s_{t})=\mathbb{E}^{\pi}[\sum_{i=t}^{\infty}R(s_{i},a_{i})-\rho( \pi)|s_{t}]. \tag{4}\] \[Q^{\pi}_{diff}(s_{t},a_{t})=\mathbb{E}^{\pi}[\sum_{i=t}^{\infty}R(s_{i},a_{i})- \rho(\pi)|s_{t},a_{t}]. \tag{5}\] **Lemma 2.2**.: _There exists a unique constant \(k(=\rho(\pi))\) which satisfies the following equation for differential value function \(V_{diff}\) :_ \[V^{\pi}_{diff}(s_{t})=\mathbb{E}^{\pi}[R(s_{t},a_{t})-k+V^{\pi}_{diff}(s_{t+1})| s_{t}] \tag{6}\] Proof.: See Lemma A.12 in the appendix for the proof. ### Policy Gradient Theorem Unlike in Q-learning where we try to find the optimal Q-value function and then infer the policy from it, the policy gradient theorem (Sutton et al., 1999; Silver et al., 2014; Degris et al., 2012) allows us to directly optimize the performance metric via its gradient with respect to the policy parameters. 
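Before turning to gradient-based optimization, it may help to see the quantities in (3)-(6) computed exactly on a toy example. The following is a minimal sketch (not from the paper) that evaluates a fixed policy on a small finite MDP: it computes the stationary distribution \(d^{\pi}\), the gain \(\rho(\pi)\), and the differential value function \(V^{\pi}_{diff}\), normalised so that \(\sum_{s}d^{\pi}(s)V^{\pi}_{diff}(s)=0\), which is consistent with the unique constant of Lemma 2.2. The function name and the toy numbers are illustrative assumptions only.

```python
import numpy as np

def average_reward_evaluation(P, r):
    """Evaluate a fixed policy on a finite MDP whose induced chain has
    transition matrix P (n x n) and per-state reward r (n,).
    Returns the gain rho (eq. (3)) and the differential value function
    V_diff (eqs. (4) and (6)), normalised so that sum_s d(s) V_diff(s) = 0."""
    n = P.shape[0]
    # Stationary distribution d: left eigenvector of P with eigenvalue 1.
    evals, evecs = np.linalg.eig(P.T)
    d = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    d = d / d.sum()
    rho = d @ r  # long-run average reward
    # Poisson equation (I - P) V = r - rho * 1, solved via the fundamental matrix.
    V_diff = np.linalg.solve(np.eye(n) - P + np.outer(np.ones(n), d), r - rho)
    return rho, V_diff

# Toy 3-state chain induced by some fixed deterministic policy (hypothetical numbers).
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.3, 0.0, 0.7]])
r = np.array([1.0, 0.0, 2.0])
rho, V_diff = average_reward_evaluation(P, r)
# Check the Bellman equation (6) with k = rho: V(s) = r(s) - rho + sum_s' P(s'|s) V(s')
assert np.allclose(V_diff, r - rho + P @ V_diff)
```

The final assertion verifies that the returned pair \((\rho,V^{\pi}_{diff})\) satisfies (6) with the unique constant \(k=\rho(\pi)\).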
Q-learning can be visualized to be a value iteration scheme while an algorithm based on the policy gradient theorem can be seen as mimicking policy iteration. Sutton et al. (1999) provided the policy gradient theorem for on-policy optimization of both the discounted reward and the average reward algorithms, see (7)-(8), respectively. \[\nabla_{\theta}\eta(\pi)=\int_{S}\omega^{\pi}(s)\int_{A}\nabla_{\theta}\pi_{r} (a|s,\theta)Q^{\pi_{r}}(s,a)\,da\,ds. \tag{7}\] \[\nabla_{\theta}\rho(\pi)=\int_{S}d^{\pi}(s)\int_{A}\nabla_{\theta}\pi_{r}(a|s, \theta)Q^{\pi_{r}}_{diff}(s,a)\,da\,ds. \tag{8}\] In (7) \(\omega^{\pi}\) denotes the long term discounted state visitation probability density which is defined in (9) while \(d^{\pi}(s)=\lim_{t\to\infty}P_{t}^{\pi}(s)\) is the steady state probability density on states. \(P^{\pi}\) denotes the transition probability kernel for the Markov chain induced by policy \(\pi\) and \(P_{t}^{\pi}\) is the state distribution at instant \(t\) given by (10). \[\omega^{\pi}(s)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}P_{t}^{\pi}(s). \tag{9}\] \[P_{t}^{\pi}(s)=\int_{S\times S\ldots}\rho_{0}(s_{0})\prod_{k=0}^{t-1}P^{\pi}(s _{k+1}|s_{k})\,ds_{0}\ldots\,ds_{t-1}. \tag{10}\] The policy gradient theorem in Sutton et al. (1999) is only valid for on-policy algorithms. Degris et al. (2012) proposed an approximate off-policy policy gradient theorem for stochastic policies, see (11), where \(d^{\mu}\) stands for the steady state density function corresponding to the policy \(\mu\). \[\nabla_{\theta}\eta(\pi)\approx\int_{S}d^{\mu}(s)\int_{A}\nabla_{\theta}\pi_{ r}(a|s,\theta)Q^{\pi}(s,a)\,da\,ds. \tag{11}\] Silver et al. (2014) came up with the deterministic policy gradient theorem for discounted reward setting, see (12), which eventually led to the development of very successful Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2016) algorithm and Twin Delayed DDPG (TD3) algorithm (Fujimoto et al., 2018). In the next section we show how we extend the same development for average reward criterion. \[\nabla_{\theta}\eta(\pi)=\int_{S}\omega^{\pi}(s)\nabla_{a}Q^{\pi}(s,a)|_{a= \pi(s)}\nabla_{\theta}\pi(s,\theta)\,ds. \tag{12}\] ## 3 Proposed Average Reward Algorithm We now propose the deterministic policy gradient theorem for the average reward criterion. The policy gradient estimator has to be derived separately for both the on-policy and off-policy settings. Obtaining the on-policy deterministic policy gradient estimator is straight forward but dealing with the off-policy gradient estimates involves an approximate gradient (Degris et al., 2012). ### On-Policy Policy Gradient Theorem We cannot directly use the second equality of (3) to derive the policy gradient theorem because of the inability to take the derivative of steady state density function. Therefore one needs to use Lemma 2.2 to obtain the average reward deterministic policy gradient theorem. **Theorem 3.1**.: _The gradient of \(\rho(\pi)\) with respect to policy parameter \(\theta\) is given as follows:_ \[\nabla_{\theta}\rho(\pi)=\int_{S}d^{\pi}(s)\nabla_{a}Q^{\pi}_{diff}(s,a)|_{a= \pi(s)}\nabla_{\theta}\pi(s,\theta)\,ds. \tag{13}\] Proof.: See Theorem A.13 in the appendix for the proof. ### Compatible Function Approximation The result in this section is mostly inspired from Silver et al. (2014). Recall that \(Q^{\pi}_{diff}(s,a)\) is the 'true' differential \(Q\)-value of the state-action tuple \((s,a)\) under the parameterized policy \(\pi\). 
Now let \(Q^{w}_{diff}(s,a)\) denote the approximate differential \(Q\)-value of the \((s,a)\)-tuple when function approximation with parameter \(w\) is used. Lemma 3.2 says that when the function approximator satisfies a compatibility condition (cf. (14)-(15)), then the gradient expression in (13) is also satisfied by \(Q^{w}_{diff}\) in place of \(Q^{\pi}_{diff}\).

**Lemma 3.2**.: _For the on-policy case, assume that the differential Q-value function (5) satisfies the following:_

1. \[\nabla_{w}\nabla_{a}Q^{w}_{diff}(s,a)|_{a=\pi(s)}=\nabla_{\theta}\pi(s,\theta).\] (14)
2. _The differential Q-value function parameter_ \(w=w^{*}_{\epsilon}\) _optimizes the following error function:_ \[\begin{split}\zeta(\theta,w)=&\frac{1}{2}\int_{S}d^{\pi}(s)\|\nabla_{a}Q^{\pi}_{diff}(s,a)|_{a=\pi(s)}\\ &-\nabla_{a}Q^{w}_{diff}(s,a)|_{a=\pi(s)}\|^{2}\,ds.\end{split}\] (15)

_Then,_ \[\begin{split}&\int_{S}d^{\pi}(s)\nabla_{a}Q^{\pi}_{diff}(s,a)|_{a=\pi(s)}\nabla_{\theta}\pi(s,\theta)\,ds\\ &=\int_{S}d^{\pi}(s)\nabla_{a}Q^{w}_{diff}(s,a)|_{a=\pi(s)}\nabla_{\theta}\pi(s,\theta)\,ds.\end{split} \tag{16}\]

_Further, in the case when a linear function approximator is used, we obtain_ \[\nabla_{a}Q^{w}_{diff}(s,a)|_{a=\pi(s)}=\nabla_{\theta}\pi(s,\theta)^{\intercal}w. \tag{17}\]

Proof.: See Lemma A.14 in the appendix for a proof.

An important implication of Lemma 3.2 is that the dimensions of the matrices on the left hand side and the right hand side of (14) must be the same. Hence the dimensions of the parameters \(\theta\) (used in the parameterized policy) and \(w\) (used to approximate the differential Q-value function) are the same. Lemma 3.2 shows that the compatible function approximation theorem has the same form in the average reward setting as in the discounted reward setting.

### Off-Policy Policy Gradient Theorem

In order to derive the off-policy policy gradient theorem it is not possible to use the approach adopted by Degris et al. (2012) for the off-policy stochastic policy gradient theorem in the discounted reward setting. We first state our proposed approximate off-policy deterministic policy gradient theorem and then explain why some alternatives would not have worked.

**Assumption 3.3**.: For the Markov chain obtained from the policy \(\pi\), let \(K(\cdot|\cdot)\) be the transition kernel and \(S^{\pi}\) the steady state measure. Then there exist \(a>0\) and \(\kappa\in(0,1)\) such that \[D_{TV}(K^{t}(\cdot|s),S^{\pi}(\cdot))\leq a\kappa^{t},\forall t,\forall s\in S.\]

Assumption 3.3 states that the Markov chain generated by a policy \(\pi\) satisfies the uniform ergodicity property. This assumption is necessary to obtain an upper bound on the total variation distance between the steady state probability distributions of two policies. Further, this assumption allows for fast mixing of the Markov chain and i.i.d. sampling of transitions from the buffer for the purpose of convergence analysis.

**Theorem 3.4**.: _The approximate gradient (\(\widehat{\nabla_{\theta}\rho}(\pi)\)) of the average reward \(\rho(\pi)\) with respect to the policy parameter \(\theta\) is given by the following expression:_

\[\widehat{\nabla_{\theta}\rho}(\pi)=\int_{S}d^{\mu}(s)\nabla_{a}Q^{\pi}_{diff}(s,a)|_{a=\pi(s)}\nabla_{\theta}\pi(s,\theta)\,ds. \tag{18}\]

_Further, the approximation error is \(\mathcal{E}(\pi,\mu)=\|\nabla_{\theta}\rho(\pi)-\widehat{\nabla_{\theta}\rho}(\pi)\|,\) where \(\mu\) represents the behaviour policy with parameter \(\theta^{\mu}\) and \(\nabla_{\theta}\rho(\pi)\) is the on-policy policy gradient from Theorem 3.1.
\(\mathcal{E}\) satisfies_

\[\mathcal{E}(\pi,\mu)\leq Z\|\theta-\theta^{\mu}\|, \tag{19}\]

_where \(Z=2^{n+1}C(\lceil\log_{\kappa}a^{-1}\rceil+1/\kappa)L_{t}\) with \(L_{t}\) being the Lipschitz constant of the transition probability density function (Assumption A.1). The constants \(a\) and \(\kappa\) are from Assumption 3.3, \(n\) is the dimension of the state space, and \(C=\max_{s}\|\nabla_{a}Q^{\pi}_{diff}(s,a)|_{a=\pi(s)}\nabla_{\theta}\pi(s,\theta)\|.\)_

Proof.: See Theorem A.15 in the appendix for a proof.

Theorem 3.4 suggests that the approximation error in the gradient increases as the difference between the target policy \(\pi\) and the behaviour policy \(\mu\) increases.

### Off-Policy Alternatives

In this section we discuss which alternatives could be considered in place of the approach suggested in Section 3.3 and why those alternatives would not work.

1. One can possibly take inspiration from Degris et al. (2012) and define an objective function, \(\rho_{new}(\pi)\), as in (20), which is a naive off-policy version of (3). \[\rho_{new}(\pi)=\int_{S}d^{\mu}(s)R^{\pi}(s)\,ds.\] (20) If, however, we take the derivative of \(\rho_{new}(\pi)\) defined above, we get the policy update rule as in (21). \[\nabla_{\theta}\rho_{new}(\pi)=\int_{S}d^{\mu}(s)\nabla_{a}R(s,a)|_{a=\pi(s)}\nabla_{\theta}\pi(s,\theta)\,ds.\] (21) The update rule (21) only considers the reward function and not the transition dynamics of the MDP. In (18), the derivative of the objective function includes the differential Q-value function, which encapsulates information about both the reward function and the transition dynamics of the MDP, and hence is a valid derivative. Therefore we cannot use \(\rho_{new}\), given in (20).
2. A lot of work in the off-policy setting relies on importance sampling ratios. Recently a few works devised methods to estimate the ratio of the steady state probability densities of the target and behavior policies (Zhang et al., 2020a,b; Liu et al., 2018; Nachum et al., 2019). The ratio of steady state densities could be used for deterministic policy optimization as in (22), but there are certain issues which prohibit its usage. \[\nabla_{\theta}\rho(\pi)=\int_{S}d^{\mu}(s)\tau(s)\nabla_{a}Q^{\pi}_{diff}(s,a)|_{a=\pi(s)}\nabla_{\theta}\pi(s,\theta)\,ds.\] (22) Here, \(\tau(s)\) is the steady state probability density ratio defined as \(d^{\pi}(s)/d^{\mu}(s)\). In order to calculate \(\tau(s)\) we need information about \(\pi(a|s)\), \(\mu(a|s)\) and \(P(s^{\prime}|s,a)\). We need the ratio \(\pi(a|s)/\mu(a|s)\), and for deterministic policies this ratio would be \(\delta(a-\pi(s))/\delta(a-\mu(s))\), where \(\delta(\cdot)\) is the Dirac-Delta function: \[\frac{\delta(a-\pi(s))}{\delta(a-\mu(s))}=\begin{cases}0&\text{ if }a=\mu(s)\\ \infty&\text{ if }a=\pi(s)\\ \frac{0}{0}&\text{ otherwise.}\end{cases} \tag{23}\] From (23), it is clear that the ratio \(\delta(a-\pi(s))/\delta(a-\mu(s))\) is undefined for almost all actions \(a\in A\). Thus, we cannot use this ratio for deterministic policies. Otherwise, we would need \(P(s^{\prime}|s,\pi(s))\) and \(P(s^{\prime}|s,\mu(s))\). It is possible to get information about \(P(s^{\prime}|s,\mu(s))\) by sampling from the Markov process generated by the policy \(\mu\), but obtaining this information about \(P(s^{\prime}|s,\pi(s))\) is impossible since, in the off-policy setting, data from \(\pi\) is assumed to be simply unavailable.

In the next section we use the policy gradient theorems derived in this section to implement a practical actor-critic algorithm.
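The on-policy estimator (13) and the approximate off-policy estimator (18) share the same integrand and differ only in the distribution from which states are drawn (\(d^{\pi}\) versus \(d^{\mu}\), e.g. a replay buffer). The following minimal Python sketch shows the Monte-Carlo form of either estimator; all callables (`jac_theta_pi`, `grad_a_Q`, `pi`) are hypothetical placeholders and not part of the paper.

```python
import numpy as np

def det_pg_estimate(states, jac_theta_pi, grad_a_Q, pi):
    """Monte-Carlo estimate of the deterministic policy gradient in (13)/(18).
    states       : batch of states drawn from d^pi (on-policy, eq. (13)) or
                   from the behaviour distribution d^mu (off-policy, eq. (18));
    jac_theta_pi : callable s -> (d x m) Jacobian  nabla_theta pi(s, theta);
    grad_a_Q     : callable (s, a) -> m-vector     nabla_a Q_diff(s, a);
    pi           : callable s -> action of the current deterministic policy."""
    grads = [jac_theta_pi(s) @ grad_a_Q(s, pi(s)) for s in states]
    return np.mean(grads, axis=0)  # d-dimensional ascent direction for theta
```

With the linear compatible critic of (17), `grad_a_Q(s, pi(s))` would simply return `jac_theta_pi(s).T @ w`.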
### Actor-Critic Update Rule

In our paper, the actor refers to the policy, and the critic refers to the approximate differential Q-value function and the average reward estimate combined.

**Assumption 3.5**.: \(\alpha_{t},\beta_{t},\) and \(\gamma_{t}\) are the step sizes for the critic, target critic parameter, and actor parameter updates respectively.

\[\alpha_{t}=\frac{C_{\alpha}}{(1+t)^{\sigma}}\quad\beta_{t}=\frac{C_{\beta}}{(1+t)^{u}}\quad\gamma_{t}=\frac{C_{\gamma}}{(1+t)^{v}}\]

Here, \(C_{\alpha},C_{\beta},C_{\gamma}>0\) and \(0<\sigma<u<v<1\). \(\alpha_{t}\) is on the fastest timescale, \(\beta_{t}\) is on a slower timescale and \(\gamma_{t}\) is on the slowest timescale. Please note that the target critic parameters refer to copies of the main critic parameters that are updated using Polyak averaging. The critic parameters are estimated using the TD(0) update rule with target critic parameters. We use target critic parameters to ensure stability of the iterates of the algorithm. Let \(\{s_{i},a_{i},s^{\prime}_{i}\}_{i=0}^{n-1}\) denote the batch of data sampled from the replay buffer.

\[\xi_{t}^{j}=\frac{1}{2}\sum_{i=0}^{n-1}\biggl{(}R(s_{i},a_{i})-\overline{\rho_{t}}-Q_{diff}^{w^{j}}(s_{i},a_{i})+\min(Q_{diff}^{\overline{w^{1}}},Q_{diff}^{\overline{w^{2}}})(s^{\prime}_{i},\pi(s^{\prime}_{i},\overline{\theta_{t}}))\biggr{)}^{2}\quad j\in\{1,2\} \tag{24}\]

\[\xi_{t}^{3}=\frac{1}{2}\sum_{i=0}^{n-1}\biggl{(}R(s_{i},a_{i})-\rho_{t}-\min(Q_{diff}^{\overline{w^{1}}},Q_{diff}^{\overline{w^{2}}})(s_{i},a_{i})+\min(Q_{diff}^{\overline{w^{1}}},Q_{diff}^{\overline{w^{2}}})(s^{\prime}_{i},\pi(s^{\prime}_{i},\overline{\theta_{t}}))\biggr{)}^{2} \tag{25}\]

Equations (24) and (25) correspond to the Bellman errors for the differential Q-value function approximator and the average reward estimator respectively. Note that we are using the double Q-value function approximator. Here \(\bar{\rho_{t}}\) represents the target estimator for the average reward at time \(t\), \(Q_{diff}^{\bar{w^{j}}}\) represents the differential Q-value function parameterized by the target differential Q-value parameter \(\bar{w^{j}_{t}}\), and \(\bar{\theta_{t}}\) represents the target parameter for the actor at time \(t\), respectively.

\[w^{j}_{t+1}=w^{j}_{t}-\alpha_{t}\nabla_{w^{j}}\xi_{t}^{j}\quad j\in\{1,2\} \tag{26}\]

\[\rho_{t+1}=\rho_{t}-\alpha_{t}\nabla_{\rho}\xi_{t}^{3} \tag{27}\]

Our aim is to find the values of the parameters of the differential Q-value function and the average reward estimator such that the Bellman equation is satisfied. Hence, the Bellman error in (24) is used to update the differential Q-value function parameters \(w^{j}_{t}\) using (26), and the Bellman error in (25) is used to update the estimator of the average reward \(\rho_{t}\) using (27). Our approach is motivated by the update rules for the differential Q-value function and the average reward parameters given in Wan et al. (2021b) (equations (3) and (4)) and Zhang et al. (2021b) (Algorithm 2).

\[\nu_{i}=\nabla_{a}\min(Q_{diff}^{w^{1}},Q_{diff}^{w^{2}})(s_{i},a)|_{a=\pi(s_{i})}\nabla_{\theta}\pi(s_{i},\theta_{t}) \tag{28}\]

\[\theta_{t+1}=\theta_{t}+\gamma_{t}\biggl{(}\sum_{i=0}^{n-1}\nu_{i}\biggr{)} \tag{29}\]

The actor update is performed using Theorem 3.4. The actor parameter \(\theta_{t}\) is updated using the empirical estimate (28) of the gradient in (18).
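For concreteness, here is a minimal PyTorch-style sketch of one such update on a replay batch, mirroring (24)-(29); all module, optimizer and variable names are illustrative assumptions rather than the authors' released implementation, and the Polyak-averaged target updates correspond to (30)-(32) that follow.

```python
import torch

def aro_ddpg_update(batch, actor, actor_targ, q1, q2, q1_targ, q2_targ,
                    rho, rho_targ, critic_opt, rho_opt, actor_opt):
    """One update on a replay batch (s, a, r, s2), following (24)-(29).
    `rho` is a learnable scalar tensor (requires_grad=True) optimised by
    `rho_opt`; the *_targ objects are the target networks / estimators."""
    s, a, r, s2 = batch
    with torch.no_grad():
        a2 = actor_targ(s2)
        q_next = torch.min(q1_targ(s2, a2), q2_targ(s2, a2))
        td_target = r - rho_targ + q_next                 # TD target used in (24)
        q_sa_targ = torch.min(q1_targ(s, a), q2_targ(s, a))

    # Critic step: Bellman errors (24) for both Q-networks, cf. update (26).
    critic_loss = ((q1(s, a) - td_target) ** 2).mean() \
                + ((q2(s, a) - td_target) ** 2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Average-reward step: Bellman error (25) for the scalar rho, cf. (27).
    rho_loss = ((r - rho - q_sa_targ + q_next) ** 2).mean()
    rho_opt.zero_grad()
    rho_loss.backward()
    rho_opt.step()

    # Actor step: ascend the empirical deterministic policy gradient (28)-(29)
    # by maximising the (double) differential Q-value at a = pi(s).
    actor_loss = -torch.min(q1(s, actor(s)), q2(s, actor(s))).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    # Target networks and rho_targ are then tracked by Polyak averaging,
    # cf. (30)-(32) below.
```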
\[\overline{w^{j}_{t+1}}=\overline{w^{j}_{t}}+\beta_{t}(w^{j}_{t+1}-\overline{w^{j}_{t}})\quad j\in\{1,2\} \tag{30}\]

\[\overline{\rho_{t+1}}=\overline{\rho_{t}}+\beta_{t}(\rho_{t+1}-\overline{\rho_{t}}) \tag{31}\]

\[\overline{\theta_{t+1}}=\overline{\theta_{t}}+\beta_{t}(\theta_{t+1}-\overline{\theta_{t}}) \tag{32}\]

Equations (30)-(32) are used to update the target Q-value function parameter \(\overline{w^{j}_{t}}\), the target average reward estimator \(\overline{\rho_{t}}\) and the target actor parameter \(\overline{\theta_{t}}\).

## 4 Convergence Analysis

In this section we present the asymptotic convergence analysis and the finite time analysis of the on-policy and off-policy average reward actor-critic algorithms with linear function approximators. First we state the assumptions required for the convergence analysis, followed by the main results.

**Assumption 4.1**.: \(\phi^{\pi}(s)\ (=\phi(s,\pi(s)))\) denotes the feature vector of state \(s\) and satisfies \(\|\phi^{\pi}(s)\|\leq 1\).

The assumption above is taken for the sake of convenience.

**Assumption 4.2**.: The reward function is uniformly bounded, viz., \(|R^{\pi}(s)|\leq C_{r}<\infty\).

Assumption 4.2 is required to make sure that the average reward objective function is bounded from above.

**Assumption 4.3**.: \(Q_{diff}^{w}(s,a)\) is Lipschitz continuous w.r.t. \(a\). Thus, \(\forall w\quad\|Q_{diff}^{w}(s,a_{1})-Q_{diff}^{w}(s,a_{2})\|\leq L_{a}\|a_{1}-a_{2}\|\).

Continuity of the approximate Q-value function w.r.t. the action is enforced using Assumption 4.3. Without the continuity property, the approximate differential Q-values would not generalize to unseen action values.

**Assumption 4.4**.: The parameterised policy \(\pi(s,\theta)\) is Lipschitz continuous w.r.t. \(\theta\). Thus, \(\|\pi(s,\theta_{1})-\pi(s,\theta_{2})\|\leq L_{\pi}\|\theta_{1}-\theta_{2}\|\).

Assumption 4.4 is a common regularity assumption for the convergence of the actor. It can be found in Wu et al. (2020), Xiong et al. (2022) and Zou et al. (2019).

**Assumption 4.5**.: The state feature mapping \(\phi^{\pi}(s)=\phi(s,\pi(s))\) defined for a policy \(\pi\) with parameter \(\theta\) is Lipschitz continuous w.r.t. \(\theta\). Thus, \(\max_{s}\|\phi^{\pi_{1}}(s)-\phi^{\pi_{2}}(s)\|\leq L_{\phi}\|\theta_{1}-\theta_{2}\|\).

Continuity of the state-action features w.r.t. the action is required to ensure generalisation of Q-values to unseen action values. Using this continuity of the state-action features together with Assumption 4.4, we can satisfy Assumption 4.5.

### Asymptotic Convergence

We prove the asymptotic convergence of the three-timescale stochastic approximation on-policy algorithm (Algorithm 4) using the ODE-based method (Borkar, 2009; Kushner and Clark, 2012; Lakshminarayanan and Bhatnagar, 2017) in two steps. First we keep the policy parameter \(\theta\) fixed and prove the convergence of the differential Q-value function parameter \(w_{t}\), the average reward estimator \(\rho_{t}\), the target differential Q-value function parameter \(\bar{w}_{t}\) and the target average reward estimator \(\bar{\rho}_{t}\) in Theorem 4.6 (given below). We then prove the convergence of the policy parameter \(\theta_{t}\) using the point of convergence of the critic parameters, because the policy parameters are updated on the slowest timescale \(\gamma_{t}\).

**Theorem 4.6**.: _In Algorithm 4, let the policy parameter \(\theta_{t}\) be kept constant at \(\theta\). The differential Q-value function parameter \(w_{t}\) and the target differential Q-value function parameter \(\bar{w}_{t}\) converge to \(w(\theta)^{\star}\).
Also, the average reward estimator \(\rho_{t}\) and the target average reward estimator \(\bar{\rho}_{t}\) converge to \(\rho(\theta)^{\star}\). (Note: The points of convergence \(w(\theta)^{\star}\) and \(\rho(\theta)^{\star}\) are defined in Theorem A.37.)_

Proof.: See Theorem A.37 in the appendix for the proof.

Theorem 4.6 uses the two-timescale stochastic approximation stability result from Lakshminarayanan and Bhatnagar (2017). Subsequently, taking inspiration from Bhatnagar and Lakshmanan (2012) and invoking Theorem 5.3.1 of Kushner and Clark (2012), we prove the convergence of the policy parameter \(\theta_{t}\) in Theorem 4.7.

**Theorem 4.7**.: \(\Gamma_{C_{\theta}}:\mathbb{R}^{d}\to C_{\theta}\) _is a projection operator, where \(C_{\theta}\) is a compact convex set and \(\hat{\Gamma}_{C_{\theta}}(\theta)\nabla_{\theta}\rho(\theta)\) refers to the directional derivative of \(\Gamma_{C_{\theta}}(\cdot)\) in the direction \(\nabla_{\theta}\rho(\theta)\) at \(\theta\). Let \(K=\{\theta\in C_{\theta}|\hat{\Gamma}_{C_{\theta}}(\theta)\nabla_{\theta}\rho(\theta)=0\}\) and \(K^{\epsilon}=\{\theta^{\prime}\in C_{\theta}|\exists\ \theta\in K\ \|\theta^{\prime}-\theta\|<\epsilon\}\). \(\forall\epsilon>0\ \exists\delta\) such that if \(\sup_{\pi}\|e^{\pi}\|<\delta\) then \(\theta_{t}\) converges to \(K^{\epsilon}\) as \(t\to\infty\) with probability one. \(e^{\pi}\) is the function approximation error defined in Lemma A.38._

Proof.: See Theorem A.39 in the appendix for the proof.

Theorem 4.7 essentially argues that the actor update scheme in Algorithm 4 tracks the ODE \(\dot{\theta}(t)=\hat{\Gamma}_{C_{\theta}}(\theta(t))(\nabla_{\theta}\rho(\theta(t))+e^{\pi(t)})\) and converges to an \(\epsilon\)-neighbourhood of the set \(K\). Moreover, when \(\sup_{\pi}\|e^{\pi}\|\to 0\), the actor update scheme tracks \(\dot{\theta}(t)=\hat{\Gamma}_{C_{\theta}}(\theta(t))(\nabla_{\theta}\rho(\theta(t)))\) and converges to the set \(K\). The conclusions of Theorem 4.7 continue to hold for the off-policy algorithm (Algorithm 5) by suitably setting the value of the l2-regularisation coefficient.

### Finite Time Analysis

We perform the finite time analysis by finding an upper bound on the expected squared norm of the policy gradient (\(\min_{0\leq t\leq T}\mathbb{E}\|\nabla_{\theta}\rho(\theta_{t})\|^{2}\)) for both Algorithms 2 and 3. We first identify the errors in the parameters of the algorithm and define the dependency graph of errors, shown in Figure 1 for Algorithm 2. In Figure 1, an arrow from one error (source) to another error (destination) indicates that an upper bound on the destination error depends on an upper bound on the source error. Exploiting the dependency graph of errors, we finally obtain an upper bound on the expected squared norm of the policy gradient (\(\min_{0\leq t\leq T}\mathbb{E}\|\nabla_{\theta}\rho(\theta_{t})\|^{2}\)) in terms of the time \(T\).

#### 4.2.1 On-Policy Analysis

In Algorithm 2, we define the error for the policy parameter as the expected squared norm of the policy gradient (\(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla_{\theta}\rho(\theta_{t})\|^{2}\)). The errors for the differential Q-value function parameter \(w_{t}\) and the target differential Q-value function parameter \(\bar{w}_{t}\) are defined as \(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\Delta w_{t}\|^{2}\) and \(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\Delta\bar{w}_{t}\|^{2}\) respectively.
Here, \(\Delta w_{t}=w_{t}-w_{t}^{*}\), \(\Delta\bar{w}_{t}=\bar{w}_{t}-w_{t}^{*}\) and \(w_{t}^{*}\) is the optimal differential Q-value function parameter for the policy parameter \(\theta_{t}\). The error for the target differential Q-value function is defined by taking inspiration from Theorem 4.6, since Theorem 4.6 says that both \(w_{t}\) and \(\bar{w}_{t}\) converge to the same point. The errors for the average reward estimator \(\rho_{t}\) and the target average reward estimator \(\bar{\rho}_{t}\) are defined as \(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\Delta\rho_{t}\|^{2}\) and \(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\Delta\bar{\rho}_{t}\|^{2}\) respectively. Here, \(\Delta\rho_{t}=\rho_{t}-\rho_{t}^{*}\), \(\Delta\bar{\rho}_{t}=\bar{\rho}_{t}-\rho_{t}^{*}\) and \(\rho_{t}^{*}\) is the optimal average reward estimate for the policy parameter \(\theta_{t}\). Using all the aforementioned parameter errors, we define the dependency graph in Figure 1 and obtain an upper bound on the expected squared norm of the policy gradient (\(\min_{0\leq t\leq T-1}\mathbb{E}\|\nabla_{\theta}\rho(\theta_{t})\|^{2}\)) in Theorem 4.8.

**Theorem 4.8**.: _The on-policy average reward actor-critic algorithm (Algorithm 2) obtains an \(\epsilon\)-accurate optimal point with a sample complexity of \(\Omega(\epsilon^{-2.5})\). We obtain_

\[\min_{0\leq t\leq T-1}\mathbb{E}\|\nabla_{\theta}\rho(\theta_{t})\|^{2}=\mathcal{O}\bigg{(}\frac{1}{T^{2/5}}\bigg{)}+3C_{\pi}^{2}(C_{a\phi}^{2}\tau^{2}+\frac{4}{M}C_{\pi}^{2}C_{w_{e}}^{2})\leq\epsilon+\mathcal{O}(1).\]

_Here, \(\|\nabla_{\theta}\pi(s)\|\leq C_{\pi}\) (Assumption 4.4), \(\tau=\max_{t}\|w_{t}^{*}-w_{e,t}^{*}\|\), where \(w_{e,t}^{*}\) is the optimal differential Q-value function parameter according to Lemma 3.2. The constant \(C_{w_{e}}\) is defined in Lemma A.33. \(M\) is the size of the batch of samples used to update the parameters. \(C_{a\phi}\) is the Lipschitz constant defined in Assumption A.9._

Proof.: See Theorem A.22 in the appendix for the proof.

We started the analysis with a three-timescale stochastic approximation algorithm but later observed that the best sample complexity is achieved when the critic parameters and the target critic parameters are updated on the same timescale, i.e. \(u=\sigma\) (Assumption 3.5). The extra terms \(3C_{\pi}^{2}C_{a\phi}^{2}\tau^{2}\) and \(12C_{\pi}^{4}C_{w_{e}}^{2}/M\) appear in the bound established in Theorem 4.8 because of the function approximation error and the empirical expectation respectively. \(3C_{\pi}^{2}C_{a\phi}^{2}\tau^{2}\) can be reduced by using a high capacity function approximator such as a neural network. \(12C_{\pi}^{4}C_{w_{e}}^{2}/M\) can be made small by increasing the size of the batch \(M\) used for the empirical expectation. The same error terms are also present in the finite time analysis of Xiong et al. (2022). Let \(K_{2}=\{\theta\mid\nabla_{\theta}\rho(\theta)=0\}\) and \(K_{2}^{\epsilon}=\{\theta^{\prime}|\exists\theta\in K_{2}\ \|\theta^{\prime}-\theta\|<\epsilon\}\). \(\forall\epsilon>0\ \exists\delta\) such that if \(|3C_{\pi}^{2}(C_{a\phi}^{2}\tau^{2}+\frac{4}{M}C_{\pi}^{2}C_{w_{e}}^{2})|<\delta\) then \(\theta_{t}\) converges to \(K_{2}^{\epsilon}\) with rate \(\mathcal{O}(T^{-2/5})\).

#### 4.2.2 Off-Policy Analysis

In Algorithm 3, we define the error for the policy parameter as the expected squared norm of the approximate policy gradient (\(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\widehat{\nabla_{\theta}\rho}(\theta_{t})\|^{2}\)).
The errors in the rest of the parameters are defined in the same way as in the on-policy case, and a similar dependency graph of errors is obtained. Using the dependency graph of errors in the parameters, an upper bound in terms of the time \(T\) on the expected squared norm of the approximate policy gradient (\(\min_{0\leq t\leq T-1}\mathbb{E}\|\widehat{\nabla_{\theta}\rho}(\theta_{t})\|^{2}\)) is obtained in Theorem 4.9.

**Theorem 4.9**.: _The off-policy average reward actor-critic algorithm (Algorithm 3) with behavior policy \(\mu\) obtains an \(\epsilon\)-accurate optimal point with a sample complexity of \(\Omega(\epsilon^{-2.5})\). Here \(\theta^{\mu}\) refers to the behavior policy parameter and \(\theta_{t}\) refers to the target or current policy parameter. We obtain_

\[\min_{0\leq t\leq T-1}\mathbb{E}\|\widehat{\nabla_{\theta}\rho}(\theta_{t})\|^{2}=\mathcal{O}\bigg{(}\frac{1}{T^{2/5}}\bigg{)}+3C_{\pi}^{2}(C_{a\phi}^{2}\tau^{2}+\frac{4}{M}C_{\pi}^{2}C_{w_{e}}^{2})+\mathcal{O}(W_{\theta}^{2})\leq\epsilon+3C_{\pi}^{2}(C_{a\phi}^{2}\tau^{2}+\frac{4}{M}C_{\pi}^{2}C_{w_{e}}^{2})+\mathcal{O}(W_{\theta}^{2}),\]

_where \(W_{\theta}:=\sup_{t}\|\theta^{\mu}-\theta_{t}\|\). Here, \(\|\nabla_{\theta}\pi(s)\|\leq C_{\pi}\) (Assumption 4.4), \(\tau=\max_{t}\|w_{t}^{*}-w_{e,t}^{*}\|\), where \(w_{e,t}^{*}\) is the optimal differential Q-value function parameter according to Lemma A.16. The constant \(C_{w_{e}}\) is defined in Lemma A.33. \(C_{a\phi}\) is the Lipschitz constant defined in Assumption A.9. \(M\) is the size of the batch of samples used to update the parameters._

Figure 1: Dependency of errors in different types of parameters in Algorithm 2 on one another.

Proof.: See Theorem A.25 in the appendix for a proof.

Here also we find that the two-timescale stochastic approximation algorithm has better sample complexity than the three-timescale version. We have the same extra terms in the bound as established in Theorem 4.8, with an additional term of \(\mathcal{O}(W_{\theta}^{2})\). The extra term \(\mathcal{O}(W_{\theta}^{2})\) denotes the error induced by not using samples from the current policy for performing updates. \(W_{\theta}^{2}\) will be small when a replay buffer is used, because the replay buffer contains data from policies similar to the current policy. This explains why the policy gradient theorem in Theorem 3.4 can be used with a replay buffer. Let \(K_{3}=\{\theta\ |\ \widehat{\nabla_{\theta}\rho}(\theta)=0\}\) and \(K_{3}^{\epsilon}=\{\theta^{\prime}|\exists\theta\in K_{3}\ \|\theta^{\prime}-\theta\|<\epsilon\}\). \(\forall\epsilon>0\ \exists\delta\) such that if \(|3C_{\pi}^{2}(C_{a\phi}^{2}\tau^{2}+\frac{4}{M}C_{\pi}^{2}C_{w_{e}}^{2})+C^{\prime}W_{\theta}^{2}|<\delta\) with \(C^{\prime}>0\), then \(\theta_{t}\) converges to \(K_{3}^{\epsilon}\) with rate \(\mathcal{O}(T^{-2/5})\).

## 5 Experimental Results

We conducted experiments on six different environments using the DeepMind control suite (Tassa et al., 2018) and found the performance of ARO-DDPG 1 to be superior to that of the other algorithms (Figure 2). All the selected environments are infinite horizon tasks. The maximum reward per time step is 1. None of the tasks have a goal-reaching nature. We performed all the experiments using 10 different seeds. We show here performance comparisons with two state-of-the-art algorithms: Average Reward TRPO (ATRPO) (Zhang and Ross, 2021) and Average Policy Optimization (APO) (Ma et al., 2021), respectively. In general, for the average reward performance criterion, not many algorithms are available in the literature.
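The reported average reward numbers come from long evaluation rollouts under the protocol detailed below (a 10,000-step horizon with no reset and zero reward after a failure). The following is a minimal sketch of such an evaluation loop; the gym-style `env`/`actor` interface is an assumption for illustration, not the authors' code.

```python
def evaluate_average_reward(env, actor, horizon=10_000):
    """Roll out the deterministic policy for `horizon` steps and report the
    empirical per-step average reward.  A failure state contributes zero
    reward for the remaining steps (no reset), so we may stop early."""
    s = env.reset()
    total_reward = 0.0
    for _ in range(horizon):
        s, r, done, _ = env.step(actor(s))
        total_reward += r
        if done:
            break
    return total_reward / horizon
```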
We implemented the ATRPO algorithm using the instructions available in the original paper. We performed hyperparameter tuning and found that the original hyper-parameters suggested by the authors of ATRPO are the best.

Footnote 1: The PyTorch implementation of ARO-DDPG can be found at this URL: [https://github.com/namansaxena9/ARODDPG](https://github.com/namansaxena9/ARODDPG)

For our proposed algorithm we trained the agent for 1 million time steps and evaluated the agent after every 5,000 time steps in the concerned environment. The length of each episode for the training phase was taken to be 1,000 and for the evaluation phase it was taken to be 10,000. The reason for taking a longer episode length for the evaluation phase was to compare the long-term average reward performance of the algorithms. We also tried using an episode length of 10,000 for the training phase and found that it gives poor average reward performance. We do not reset the agent if, before completing 10,000 steps, it lands in a state from which it is unable to escape on its own. The agent continues to get a reward of zero by default for the remaining length of the episode. That way the cost of failure is high. While training, we updated the actor after performing a fixed number of environment steps. We updated the differential Q-value function neural network more frequently than the actor neural network. We used target actor and differential Q-value networks along with a target estimator of the average reward parameter for stability while using bootstrapping updates. We updated the target networks using Polyak averaging. We tried to enforce multiple timescales in our algorithm by using different update frequencies for the actor, the critic and the Polyak averaging of the target networks. We also borrowed the double Q-network trick from Fujimoto et al. (2018). Complete information regarding the set of hyper-parameters used is provided in the appendix.

## 6 Related Work

Actor-critic algorithms for the average reward performance criterion are much less studied compared to the discounted reward performance criterion. One of the earliest works on the average reward criterion is Mahadevan (1996). In this paper, Mahadevan compares the performance of R-learning with that of Q-learning and concludes that fine tuning is required to get better results from R-learning. R-learning is the average reward version of Q-learning. Later in 1999, Sutton et al. derived the policy gradient theorem for both the discounted and average reward criteria (Sutton et al., 1999), which formed the bedrock for the development of average reward actor-critic algorithms. The first proof of asymptotic convergence of average reward actor-critic algorithms with function approximation appeared in Konda and Tsitsiklis (2003). A temporal difference learning based off-policy control algorithm has been proposed in Maei et al. (2010). An incremental off-policy search algorithm based on the cross entropy method has been proposed in Joseph and Bhatnagar (2018). Further, in Bhatnagar et al. (2007, 2009), incremental update natural policy gradient algorithms for the average reward setting have been proposed in the on-policy setting, and asymptotic convergence proofs of the same have been provided. An off-policy variant of the natural actor-critic algorithm has been proposed in Diddigi et al. (2022). Recently, Wan et al. presented a Differential Q-learning algorithm and claimed that their algorithm is able to find the exact differential value function without an offset. Further, Wan et al.
provided an extension of the options framework from the discounted setting to the average reward setting and demonstrated the performance of the algorithm in the Four-Room domain task. One of the major contributions in off-policy policy evaluation is made by Zhang et al. (2021a). Here Zhang et al. gave a convergent off-policy evaluation scheme inspired from the gradient temporal difference learning algorithms but involving a primal-dual formulation making the policy evaluation step feasible for a neural network implementation. Zhang et al. (2021b) provided another con -regularisation. In our work we use the same policy evaluation update. Our work in this paper is actually an extension of the work of Silver et al. (2014) from the discounted to the average reward setting. In Xiong et al. (2022), a finite time analysis for deterministic policy gradient algorithm was done for the discounted reward setting. We performed the finite time analysis for the average reward deterministic policy gradient algorithm and in particular obtain the same sample complexity for our algorithm as reported by Wu et al. (2020) for stochastic policies. ## 7 Conclusion and Future Work In this paper we presented a deterministic policy gradient theorem for both on-policy and off-policy settings considering average reward performance criteria. We then proposed the Average Reward Off-policy Deep Deterministic Policy Gradient(ARO-DDPG) algorithm using neural network and replay buffer for high dimensional MuJoCo based environments. We observed superior performance of ARO-DDPG over existing average reward algorithms (ATRPO and APO). We first showed the asymptotic convergence using ODE-based method. Later we provided finite time analysis for the on-policy and off-policy algorithms based on the proposed policy gradient theorem and obtained the sample complexity of \(\Omega(\epsilon^{-2.5})\). Lastly to extend the current line of work, one could try using natural gradient descent based update rule for deterministic policy. Further in the current work we tried optimizing the average reward performance (gain optimality). In the literature, optimizing the differential value function for all the states is mentioned as part of achieving Blackwell optimality. Hence actor-critic algorithms could be designed that not only optimize average reward performance but also differential value function (bias optimality). It would also be interesting to devise similar algorithms for constrained MDPs as with (Bhatnagar, 2010; Bhatnagar and Lakshmanan, 2012; Bhatnagar et al., 2013). ## Acknowledgements N. Saxena was supported by a Ministry of Education (MoE) scholarship and a project from DST-ICPS. S. Khatagir was supported by a Ministry of Education (MoE) scholarship. S. Kolathaya was supported by a Pratiksha Trust Young Investigator Fellowship and the SERB grant no CRG/2021/008115. S. Bhatnagar was supported by the J.C. Bose Fellowship, Project No. DFTM/02/3125/M/04/AIR-04 from DRDO under the DIA-RCOE scheme, a project from DST-ICPS, and the RBCCPS, IISc. Figure 2: Comparison of performance of different average reward algorithms
2307.13038
Charge and Entanglement Criticality in a U(1)-Symmetric Hybrid Circuit of Qubits
We study critical properties of the entanglement and charge-sharpening measurement-induced phase transitions in a non-unitary quantum circuit evolving with a U(1) conserved charge. Many critical properties appear distinct from the generic non-conserving case and percolation; however, upon interpreting the critical features as mixtures of both entanglement and charge-sharpening transitions, many critical features are brought within range of the generic case. Nonetheless, the multifractal properties of the entanglement transition remain distinct from the generic case without any symmetry, indicating a unique universality class due to the U(1) symmetry. We compute entanglement critical exponents and correlation functions via various ancilla measures, use a transfer matrix for multifractality, and compute correlators associated with charge sharpening to explain these findings. Through these correlators, we also find evidence consistent with the charge-sharpening transition being of the Berezinskii-Kosterlitz-Thouless type (including the predicted "jump" in stiffness), which simultaneously argues for a broad critical fan for this transition. As a result, attempts to measure critical properties in this system will see anomalously large exponents consistent with overlapping criticality.
Ahana Chakraborty, Kun Chen, Aidan Zabalo, Justin H. Wilson, J. H. Pixley
2023-07-24T18:00:04Z
http://arxiv.org/abs/2307.13038v1
# Charge and Entanglement Criticality in a U(1)-Symmetric Hybrid Circuit of Qubits ###### Abstract We study critical properties of the entanglement and charge-sharpening measurement-induced phase transitions in a non-unitary quantum circuit evolving with a U(1) conserved charge. Many critical properties appear distinct from the generic non-conserving case and percolation; however, upon interpreting the critical features as mixtures of both entanglement and charge-sharpening transitions, many critical features are brought within range of the generic case. Nonetheless, the multifractal properties of the entanglement transition remain distinct from the generic case without any symmetry, indicating a unique universality class due to the U(1) symmetry. We compute entanglement critical exponents and correlation functions via various ancilla measures, use a transfer matrix for multifractality, and compute correlators associated with charge sharpening to explain these findings. Through these correlators, we also find evidence consistent with the charge-sharpening transition being of the Berezinskii-Kosterlitz-Thouless type (including the predicted "jump" in stiffness), which simultaneously argues for a broad critical fan for this transition. As a result, attempts to measure critical properties in this system will see anomalously large exponents consistent with overlapping criticality. ## I Introduction: Exploring the properties of far-from-equilibrium, non-unitary dynamics provides a theoretical challenge that links the theory of open quantum systems and quantum information while also serving as a crucial bridge to understanding the limits of quantum information processing, including quantum error correcting codes. Moreover, recently developed noisy intermediate scale quantum (NISQ) devices [1] can directly probe theoretical predictions regarding the nature and universality of quantum dynamic protocols. In fact, NISQ devices are poised to probe the interplay of entangling unitary dynamics and disentangling projective measurements [2; 3; 4; 5] thanks to technical developments allowing for the possibility of local, mid-circuit measurements [6]. Exemplays of this paradigm are measurement-induced phase transitions (MIPTs) where the rate of mid-circuit measurements competing with the rate of entangling unitary dynamics (embodied through the quantum circuit model in Fig. 1(a)) drives a transition in the dynamics of the system's entanglement [7; 8; 9; 10; 11; 12; 13; 14; 15]. These transitions are analytically tractable in special limits [7; 10; 13; 16], but numerical evidence for the generic dynamics involving qubits indicate a distinguished universality class characterized by a multifractal logarithmic conformal field theory [16; 17; 18; 19; 20; 21; 22]. A class of MIPTs were described in Ref. [23] as "purification transitions," (for many cases, these coincide with the aforementioned entanglement transitions). Information theoretically, the "mixed phase" can robustly encode quantum information (which could be decoded) while the "pure phase" destroys any encoded information (i.e., it is theoretically impossible to decode). Investigating the transition then not only tells you that the entanglement obeys a universal description, but also how quantum information is being destroyed. 
This aspect can change when the dynamics are enriched with a symmetry [24]; in particular, with a continuous global U(1) (i.e., charge conserving) symmetry [25; 26; 27], a new transition can appear where the system can undergo "charge sharpening" at a measurement rate \(p_{\#}\) slower than the rate at which it purifies \(p_{c}\). Namely, an initial state that has equal weight in all charge sectors will collapse into a single sector on short (i.e., an \(\sim O(1)\)) time scale for \(p>p_{\#}\) while still being volume law entangled. In contrast, in the charge fuzzy phase this collapse to a single sector occurs on an \(\sim O(L)\) time scale for a linear system size \(L\). In these dynamics, the charge sector that is labeled by charge density \(\mathcal{Q}=Q/L\), where \(Q\) is the total conserved charge, is a (classical) bit of information that can be lost prior to any quantum information. For the qubit system and \(\mathcal{Q}=1/2\), these transitions appear to be close to one another (Ref. [25] estimates \(p_{\#}\approx 0.094(4)\) and \(p_{c}\approx 0.110(3)\)), and therefore any NISQ device which probes the loss of this quantum information at the entanglement transition \(p_{c}\) may also probe some of the features of the nearby sharpening transition as the available system sizes may not be larger than the two correlation lengths (\(\xi\) and \(\xi_{\#}\)) of each respective transition. Elucidating and understanding the critical properties of the MIPT that one can discern in this setup is one of the major achievements of our present work. The generic MIPT enriched by a global U(1) symmetry and resolved into individual charge sectors \(\mathcal{Q}\) is shown in Fig. 1 (c). Here, we say \(p<p_{\#}(\mathcal{Q})\) is defined as _charge fuzzy_ while \(p>p_{\#}(\mathcal{Q})\) is _charge sharp_ and the transition occurs within the volume-law entangled phase of the system. At larger measurement rates \(p>p_{c}(\mathcal{Q})\) the model undergoes an MIPT to an area-law entangled phase. Thanks to the perspective of a purification transition, we can putatively probe the sharpening transition and the MIPT separately. In particular, by coupling an auxiliary ancilla qubit into the system that either couples across charge sectors (e.g. between \(\mathcal{Q}\) and \(\mathcal{Q}+1\)) or within a given sector (e.g. between two distinct orthogonal spin states with equivalent \(\mathcal{Q}\)) as depicted in Fig 1(b), we should in principle be able to examine the sharpening transition and the entanglement transition separately. However, the nature of the critical properties of the sharpening transition may have a significant impact on these conclusions. In particular, in Ref. [26] it was shown that a modified version of these dynamics (in the limit of an infinite on-site Hilbert space dimension) has a charge sharpening transition which falls into the Berezinskii-Kosterlitz-Thouless (BKT)[28; 29] universality class but has no effect on the entanglement transition which remains controlled by percolation. There are several important implications of this field theoretic prediction. Firstly, the critical nature of the BKT transition, makes the entire charge fuzzy phase "critical" in a similar fashion that terminates at the BKT transition [26]. Second, this critical dynamics could in principle be seen in a single sector at late times, as probed through correlation functions (not entanglement measures) averaged over measurement outcomes [26]. 
Third, the finite size cross-over regime of the critical fan, defined by the correlation length (\(\xi_{\#}\)) for a BKT transition is known to be broader than a second order transition (\(\xi\)), see Fig. 2. As a result, small system size calculations probing the MIPT could receive a contribution from the sharpening critical point (and vice versa). In this work, we study the nature of the universality class of the MIPT in the presence of a continuous U(1) symmetry and the charge sharpening transition in a qubit chain. First, focusing on the middle charge sector \(\mathcal{Q}=1/2\), we obtain anomalously large (typical) critical exponents and effective central charge of the log-CFT. We argue that these values are offset by the nearby sharpening transition, which by assuming it's a BKT transition allows us to obtain similar in magnitude (within our numerical accuracy) critical properties to the MIPT without any symmetry. Moreover, by studying correlation functions of the conserved charge, we show that critical sharpening fluctuations are present within a single sector, and these modes can still look critical at our estimate of the MIPT measurement rate for the sizes we probe the problem at. This provides a consistent picture to our interpretation, while at the same time providing evidence that the nature of the sharpening transition is BKT like. Thanks to the multifractal nature of the MIPT, we are able to ascertain critical properties that are expected to be unaffected by the nearby BKT transition (as it is known to not be a multifractal CFT), which we compute in order to show that the MIPT in the presence of a U(1) symmetry belongs to its own distinct universality class. ## II Summary of the main results: In this manuscript, we present a sector-resolved analysis of the universality class of the entanglement transition in a Figure 1: **Model, ancilla qubit, and sector resolved phase diagram**. (a) shows brick-layer structure of the monitored quantum circuit consisting of U(1) symmetric Haar random gates (the blue bricks) and projective \(\sigma^{z}\) measurements (the red crosses) at a rate \(p\) on the 1D chain of qubits. The U(1) symmetric gate given in Eq.1 acting on two neighbouring qubits (\(\sigma_{1}\) and \(\sigma_{2}\))is also shown. (b) An ancilla (\(A\)) qubit is maximally entangled to two orthogonal states, \(|\psi_{0};j=\uparrow\rangle_{\mathcal{Q}}=P_{j}^{\uparrow}|\psi_{0}\rangle_{ \mathcal{Q}}\) and \(|\psi_{0};j=\downarrow\rangle_{\mathcal{Q}}=P_{j}^{\downarrow}|\psi_{0}\rangle_ {\mathcal{Q}}\) by projecting a single site \(j\) of the state \(|\psi_{0}\rangle_{\mathcal{Q}}\) to up and down state respectively. These states belong to same global charge sector \(\mathcal{Q}\) and the entanglement entropy of \(A\) serves as the order parameter of the entanglement transition within a sector \(\mathcal{Q}\). (c) shows the critical measurement rates, \(p_{c}\) (red circles) for the entanglement transition and \(p_{\#}\) (blue cross) for the charge-sharpening transition, for different charge densities \(\mathcal{Q}\); \(p_{\#}<p_{c}\) for all charge densities \(\mathcal{Q}\). \(p_{c}\) is obtained from the finite-size scaling of the ancilla entanglement entropy when \(A\) is coupled in one charge sector while for \(p_{\#}\), \(A\) is coupled across sectors (\(\mathcal{Q}\) and \(\mathcal{Q}+1\)). We fit \(p_{c}(\mathcal{Q})\) to obtain \(p_{c}(\mathcal{Q})=0.44\mathcal{Q}(1-\mathcal{Q})\) shown by red solid line while the blue solid line is simply a guide to eye. 
U(1) symmetry preserving random-Haar circuit with projective measurements (shown in Fig. 1(a)). In addition, we present a detailed study of the MIPT and the critical sharpening properties in the middle sector. We adopted an efficient method to couple an ancilla to the charge-conserving circuit (shown in Fig. 1(b)). This allows us to study the critical properties in different charge sectors, even those which are far off from half-filling, allowing our finite-size numerics to access larger system sizes thanks to the constrained many-body Hilbert space. The main findings of our work are summarized below: 1. We compute the sector-resolved phase diagram in Fig. 1(c) which shows that the system undergoes a charge-sharpening transition at the measurement rate \(p_{\#}(\mathcal{Q})\) followed by the volume-law to area-law entanglement transition at \(p_{c}(\mathcal{Q})>p_{\#}(\mathcal{Q})\) in all charge-sectors. We find the functional form of \(p_{c}(\mathcal{Q})\propto\mathcal{Q}(1-\mathcal{Q})\). Each sector is found to have a Lorentz invariant critical point at the entanglement transition. 2. We present numerical results showing that the estimated critical exponents of the entanglement transition are anomalously large in the U(1) symmetric circuit compared to those without any symmetry [17; 30] or percolation. To show this, we compute the anomalous scaling dimension exponent, \(\eta\) using the mutual information of a pair of ancillas in section V.1. We also study the non-unitary log-CFT governing the entanglement transition in section V.2 and obtain the effective central charge (\(c_{\rm eff}\)) of the log-CFT from the free energy, the typical scaling dimension \(x_{1}^{\rm typ}=\eta/2\) (finding excellent agreement with the ancilla computation) and the higher cumulants (e.g., the \(2\)nd cumulant \(x_{1}^{(2)}\)) that probes the multi-fractal nature of the correlation functions. These critical exponents are listed in table 1. 3. The critical sharpening properties are presented in a single, fixed charge sector, not only across sectors in section VI. These correlations show a BKT-like scaling behaviour with the expected quantized jump in the stiffness at the transition. Moreover, they remain critical at the MIPT within the accessible system sizes available due to a broad finite-size critical fan as one would expect from a BKT transition (see Fig. 2). 4. The close proximity of the BKT-sharpening criticality and the entanglement transition leads us to interpret the entanglement critical exponents listed in table 1 as receiving a contribution from the critical sharpening contribution. In particular, the finite-size estimation of the critical exponents can be reinterpreted as receiving a combined contribution from the two transitions: their known values in the BKT universality class (denoted by the subscript \(\#\)) and the MIPT with a U(1) symmetry (denoted by the subscript \(E\)) and we compare these values (in magnitude) to the MIPT transition without any symmetry (denoted by the subscript \(H\)). In table 1, we presented the values of these exponents (denoted by a subscript \(E\)) once we subtracted their known BKT values. \(c_{\rm eff,E}\) and \(\eta_{E}\) closely support the above perspective while also adding support to the sharpening transition being BKT-like. 5. To carefully separate out the effect of the sharpening transition we look into the multifractal properties of the log-CFT, which are assumed to not be contaminated by the BKT transition. 
As a result, we are able to firmly show that the presence of a U(1) symmetry enhances the multifractality of the log-CFT, thus providing numerical evidence that the MIPT in the presence of a U(1) symmetry belongs to its own distinct universality class. Figure 2: **Schematic finite-size cross over diagram showing the two nearby transitions**. The system undergoes a charge sharpening transition from the charge fuzzy to sharp phase at the measurement rate \(p=p_{\#}\). This is followed by the usual measurement induced phase transition from volume-law entanglement to area-law phase at \(p=p_{c}\). These three phases, namely the entangled-fuzzy, entangled-sharp and the disentangled-sharp, are marked in the figure. The red and the blue dashed lines show the correlation lengths \(\xi\) and \(\xi_{\#}\) corresponding to the MIPT and the BKT-sharpening transition respectively. The latter has a broad critical fan, intertwining these two transitions at smaller system sizes accessible in finite-size numerics. \begin{table} \begin{tabular}{c c c c c} & \(U(1)\) & No-symmetry & BKT & Subtracting BKT \\ & symmetry & \((H)\) & \((\#)\) & values \((E)\) \\ \hline \(c_{\rm eff}\) & 1.27(1) & 0.25(3) & 1 & 0.27(1) \\ \(x_{1}^{\rm typ}=\eta/2\) & 0.28(2) & 0.14(2) & 0.125 & 0.16(2) \\ \(x_{1}^{(2)}\) & 0.65(2) & 0.15(2) & NA & 0.65(2) \\ \end{tabular} \end{table} Table 1: A comparison of the critical exponents governing the entanglement transition with and without U(1) symmetry is shown. No-symmetry values taken from Ref. [17]. Their known values in the BKT universality class are also listed. The values of \(c_{\rm eff,E}\) and \(\eta_{E}\) (in the last column) obtained after subtracting the corresponding BKT values are close to the corresponding no-symmetry values, while the \(2\)nd cumulant \(x_{1}^{(2)}\) is large compared to that without symmetry. This suggests that the symmetry constraint changes the universality class. ## III Model and Ancilla probes The quantum circuit consists of a 1D chain of qubits with local charge \(q_{i}=(\sigma_{i}^{z}+1)/2\). The time-evolution of the circuit repeats the brick-layer structure, shown in Fig. 1(a), consisting of (i) the entangling unitary gates \(U_{i,i+1}\) acting on the bond of the nearest-neighbor sites \(i\) and \(i+1\). \(U_{i,i+1}\) conserve the total charge \(q\in{0,1,2}\) on the bond and are chosen from the set of generic Haar-random unitary gates of the form \[U=\begin{pmatrix}e^{i\phi_{0}}&0&0\\ 0&\framebox{$U_{2\times 2}$}&0\\ 0&0&e^{i\phi_{1}}\end{pmatrix} \tag{1}\] where \(U_{2\times 2}\) is a \(2\times 2\) Haar-random matrix and \(\phi_{0}\) and \(\phi_{1}\) are random phases. (ii) each site is subjected to projective \(\sigma_{i}^{z}\) measurement with a rate \(p\). The non-unitary dynamics conserve the global U(1) charge \(Q=\sum_{i=1}^{L}q_{i}\). This constrains the dimension of the effective Hilbert space accessible during the dynamics if the initial condition is chosen from a fixed \(\mathcal{Q}=Q/L\) sector. This allows us to explore larger finite system sizes for the sectors away from half-filling. We use periodic boundary conditions unless mentioned otherwise. ### Ancilla to study the entanglement transition In our present work, we aim to extract critical exponents that correspond to a power law decay of the correlation functions of the CFT, via mutual information between two ancilla qubits. To couple an ancilla to the U(1) symmetric circuit in a fixed charge sector, previous work in Ref. 
[25] utilized a single ancilla coupled to a "bond" between two neighboring qubits that requires post-selecting to couple into only the \(\ket{\uparrow\downarrow}\) and \(\ket{\downarrow\uparrow}\) states. Extending this approach to two ancillas as we need to compute their mutual information suffers from low statistics due to post-selecting now on two bonds. To circumvent this issue, we implement an approach (that utilizes a "single-site ancilla" protocol schematically shown in Fig. 1(b)) by entangling each ancilla qubit to a single site of the system, yet conserving the global charge. We initialize the system in a random Haar state \(\ket{\psi_{0}}_{\mathcal{Q}}\) in the global charge sector \(\mathcal{Q}\). We then create two orthogonal states \(\ket{\psi_{0};j=\uparrow}_{\mathcal{Q}}=P_{j}^{\uparrow}\ket{\psi_{0}}_{ \mathcal{Q}}\) and \(\ket{\psi_{0};j=\downarrow}_{\mathcal{Q}}=P_{j}^{\downarrow}\ket{\psi_{0}}_{ \mathcal{Q}}\) by projecting a single site \(j\) of the system to up and down state respectively. We note that the states \(\ket{\psi_{0};j=\uparrow}_{\mathcal{Q}}\) and \(\ket{\psi_{0};j=\downarrow}_{\mathcal{Q}}\) remain in the same global charge sector \(\mathcal{Q}\). The ancilla qubit with two orthogonal states \(\ket{\Uparrow}\) and \(\ket{\Downarrow}\) is now maximally entangled to the site \(j\) as, \[\ket{\Psi}=\frac{1}{\sqrt{2}}\Big{[}\frac{\ket{\psi_{0};j=\uparrow}_{\mathcal{ Q}}}{\||\psi_{0};j=\uparrow\rangle}\|\Downarrow+\frac{\ket{\psi_{0};j= \downarrow}_{\mathcal{Q}}}{\||\psi_{0};j=\downarrow\rangle}\|\Uparrow\rangle \Big{]}. \tag{2}\] This single-site ancilla method allows us to get around the post-selection problem as we increase the system size and hence can be used to probe the entanglement transition at different global conserved charge sectors, including those which are away from the middle sector (\(Q=L/2\) or \(\mathcal{Q}=1/2\)). Figure 3: **Entanglement Transition**. Entanglement entropy \(S_{E}\) of the ancilla serves as the order parameter of the MIPT with \(p\). (a) shows the crossing of \(S_{E}\) vs \(p\) for different \(L=12\) to \(48\) with total charge \(Q=L/12\) taken at time \(t=2L\). This yield \(p_{c}=0.031(4)\) and \(\nu=2.68(31)\) from the finite-size scaling collapse shown in the inset. (b) shows the correlation of the critical expo nents \(\nu\) and \(\nu_{\#}\) at \(p_{c}\) (red circles) and \(p_{\#}\) (blue cross) respectively; this data was obtained from the finite-size scaling of \(S_{E}\) and \(S_{\#}\) (shown in appendix A and B) for different \(\mathcal{Q}\) following Eqs. (4) and (5) respectively. (c) shows the collapse of \(S_{E}\) vs \(t/L^{z(\mathcal{Q})}\) for different \(L=12\) to \(48\) at \(\mathcal{Q}=12\). This gives \(z(\mathcal{Q}=1/12)=0.97\). \(z\approx 1\) for all sectors as shown in the inset demonstrating each transition is Lorentz invariant. ### Ancilla to study the charge-sharpening transition To probe the charge-sharpening (CS) transition, we couple the ancilla to two neighboring sectors (\(\mathcal{Q}\) and \(\mathcal{Q}+1\)) [25], \[|\Psi\rangle_{\#}\equiv\frac{1}{\sqrt{2}}\left[\frac{|\psi_{0}\rangle_{\mathcal{ Q}}}{\||\psi_{0}\rangle_{\mathcal{Q}}\|}|\Uparrow\rangle+\frac{|\psi_{0}\rangle_{ \mathcal{Q}+1}}{\||\psi_{0}\rangle_{\mathcal{Q}+1}\|}|\Downarrow\rangle\right]. 
\tag{3}\] Here \(|\psi_{0}\rangle_{Q}\) and \(|\psi_{0}\rangle_{Q+1}\) are two initial states belonging to two different charge sectors \(\mathcal{Q}\) and \(\mathcal{Q}+1\) which maximally couple to two orthogonal states of the ancilla \(\Uparrow\) and \(\Downarrow\) respectively. ## IV Phase diagram: The model introduced in the previous section exhibits a phase diagram [25; 26; 27] that depends on the conserving charge sector as we show in Fig. 1(c). Focusing on the middle sector or coherent superpositions over all charge sectors, with increasing \(p\) the model exhibits two distinct phase transitions: an entanglement phase transition from the volume-law to area-law phase at the critical measurement rate \(p_{c}\) which is preceded by a charge sharpening transition at the critical measurement rate \(p_{\#}<p_{c}\) in the volume-law phase. ### Entanglement Transition We calculate the entanglement entropy \(S_{E}\) of the ancilla with the system in the steady state which serves as the order parameter of the entanglement transition. We study variation of \(S_{E}\) with \(p\) for different system sizes \(L\) commensurate with the global charge Q. We perform a finite size scaling (FSS) analysis of the order parameter at the entanglement transition point, \[S_{E}(t,L;\mathcal{Q})\sim h_{\mathcal{Q}}[(p-p_{c})L^{1/\nu(\mathcal{Q})},t/ L^{z(\mathcal{Q})}] \tag{4}\] where \(h_{\mathcal{Q}}(x,y)\) is a two-parameter universal scaling function with a correlation length exponent \(\nu(\mathcal{Q})\) and dynamic exponent \(z(\mathcal{Q})\) for different \(\mathcal{Q}\) sectors. \(\nu(\mathcal{Q})\) governs the divergence of the correlation length \(\xi\sim(p-p_{c})^{-\nu}\) at the entanglement transition. Following previous work [25], we examine the \(p\) dependence of \(S_{E}\) by fixing the aspect ratio of time and length of the system (\(t/L=2\)) and ensure our crossing and collapse is unaffected by this choice. At the middle sector (\(\mathcal{Q}=1/2\)), our scaling analysis yields \(p_{c}=0.110(3)\) and \(\nu=1.65(16)\) (see appendix A) which are in agreement with the "bond-ancilla" method results [25] and thus establishes the consistency of the "single-site ancilla" method in the conserving circuit. In Fig. 3(a) we show \(S_{E}\) vs \(p\) at \(Q=L/12\) for \(L=12\) to \(48\) and the inset shows the scaling collapse with \(p_{c}=0.031(4)\) and \(\nu=2.68(31)\). We summarize the dependence of the global conserved charge density \(\mathcal{Q}=Q/L\) on the entanglement critical point \(p_{c}(\mathcal{Q})\) in Fig. 1(c) (with red circles) and the corresponding correlation length exponent \(\nu(\mathcal{Q})\) in Fig. 3(b) (with red circles) respectively. Fig. 1(c) shows that changing the value of the conserved charge density \(\mathcal{Q}\) from the half-filling condition weakens the entangling unitary gates and \(p_{c}\) decreases. \(p_{c}(\mathcal{Q})\) fits well with the variance of the Binomial distribution \(p_{c}(\mathcal{Q})=0.44\mathcal{Q}(1-\mathcal{Q})\) shown by red solid line. Fig. 3(b) shows that \(\nu\) increases as \(\mathcal{Q}\) deviates from the half-filling, but with large statistical error at \(\mathcal{Q}\) away from half-filling. To this end, at the critical point \(p_{c}(\mathcal{Q})\), we collapse \(S_{E}(t)\) with \(t/L^{z(\mathcal{Q})}\) which gives an estimation of \(z(\mathcal{Q})\) for each charge sector. Fig. 3(c) shows the collapsed data at \(\mathcal{Q}=1/12\) for \(L=12,24,36\) and \(48\). 
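As an illustration of the collapse procedure behind Eq. (4), the following sketch (our own, with an ad hoc collapse cost) estimates \(p_c\) and \(\nu\) by minimizing the spread of the rescaled data around a single master curve; `data` maps each system size \(L\) to its measured \((p, S_E)\) arrays at fixed aspect ratio \(t/L=2\), and the cost function is an illustrative choice rather than the one used for the quoted fits.

```python
import numpy as np
from scipy.optimize import minimize

def collapse_cost(params, data):
    p_c, nu = params
    xs, ys = [], []
    for L, (p, S) in data.items():
        xs.append((p - np.asarray(p, float) * 0 - p_c + np.asarray(p, float)) * 0
                  + (np.asarray(p, float) - p_c) * L ** (1.0 / nu))  # x = (p - p_c) L^{1/nu}
        ys.append(np.asarray(S, float))
    xs, ys = np.concatenate(xs), np.concatenate(ys)
    order = np.argsort(xs)
    ys = ys[order]
    # Penalize the deviation of each point from the average of its two
    # neighbours along the collapsed curve: small cost <=> good collapse.
    return np.sum((ys[1:-1] - 0.5 * (ys[:-2] + ys[2:])) ** 2)

def fit_collapse(data, p0=0.1, nu0=1.5):
    res = minimize(collapse_cost, x0=[p0, nu0], args=(data,), method="Nelder-Mead")
    return res.x  # (p_c, nu)
```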
Our numerical analysis finds \(z(\mathcal{Q})\approx 1\) for all \(\mathcal{Q}\) sectors as shown in the inset of Fig. 3(c). This establishes conformal invariance across all charge sectors we have considered. ### Sharpening Transition The entanglement entropy \(S_{\#}\) of the previously mentioned ancilla probes the charge-shaprening transition point \(p_{\#}\) with the corresponding correlation length exponent \(\nu_{\#}\) through the following equation, \[S_{\#}(L;\mathcal{Q})\sim g_{\mathcal{Q}}[(p-p_{\#}(\mathcal{Q}))L^{1/\nu_{\# }(\mathcal{Q})}] \tag{5}\] where \(g_{\mathcal{Q}}(x,y)\) is a two-parameter universal scaling function at the charge-sharpening transition for charge sectors \(\mathcal{Q}\). The dependence of \(p_{\#}(\mathcal{Q})\) and \(\nu_{\#}(\mathcal{Q})\) on global conserved charge density \(\mathcal{Q}=Q/L\) are shown in Fig. 1(c) (with blue cross) and in Fig. 3(b) (with blue cross) respectively (data collapse not shown). For all \(\mathcal{Q}\) sectors, the charge-sharpening transition precedes the entanglement transition, \(p_{\#}(\mathcal{Q})<p_{c}(\mathcal{Q})\). Interestingly, for \(\mathcal{Q}\) sectors away from half-filling, the two critical points are approaching each other within their error bars. Moreover, our estimate of the critical exponents \(\nu\) and \(\nu_{\#}\) become strongly dependent on the sector and we cannot discern them at the small system sizes we can reach. As the error bars on \(\nu\) and \(\nu_{\#}\) are growing as we move away from the middle sector and the largest deviation from the middle sector to \(\mathcal{Q}=1/12\) differ by at most 'two \(\sigma\)', we cannot rule out the possibility that these exponents do not depend on the sector and the trend we witness is a finite size effect. With the recent prediction of the charge sharpening transition being a BKT transition in the large Hilbert space limit, this large \(\nu_{\#}\) at the small charge density sectors suggests that our scaling ansatz could be incorrect and instead is suggestive that the power law should be replaced with the logarithmic dependence due to the nature of the BKT correlation length (see appendix B). ## V Critical properties of the entanglement transition ### Mutual Information Probe of the Correlation Functions We use the "single-site ancilla" method explained in the previous section to compute the correlation function at the critical measurement rate of the entanglement transition. We initialize the spin-chain in a fixed global charge sector \(\mathcal{Q}\) and evolve it under U(1) symmetric monitored dynamics up to late time \(t_{0}=2L\). In the steady state, we couple two ancillas \(A\) and \(B\) to the circuit at \((r_{1},t_{1})\) and \((r_{2},t_{2})\) and calculate their mutual information given by, \(I_{n}(A,B)=S_{n}(A)+S_{n}(B)-S_{n}(A\cup B)\), where \(S_{n}\) denote the Renyi entropy of order \(n\). To compute the order parameter correlation, we set \(t_{1}=t_{2}=t_{0}\) and fix \(|r_{1}-r_{2}|=bL\) with \(b\) being a constant. The correlation function obeys a one-parameter finite-size scaling ansatz with time \(t-t_{0}\) as, \[C(t-t_{0},L/2,Q)\sim\frac{1}{L^{\eta(Q)}}k_{Q}\Bigg{(}\frac{t-t_{0}}{L}\Bigg{)}. 
\tag{6}\] Here, we calculate three different anomalous dimension exponent \(\eta\) depending on the spatial separation and the boundary conditions (BC) in the circuit: (a) \(\eta\) for \(|r_{1}-r_{2}|=L/2\) with periodic boundary condition, (b) \(\eta_{\perp}\) for \(|r_{1}-r_{2}|=L/2\) with open boundary condition and (c) \(\eta_{\parallel}\) for \(|r_{1}-r_{2}|=L\) with open boundary conditions and geometries given in Fig. 4. The scaling collapse of these three cases is shown in Fig. 5(a) at \(Q=L/4\) for \(L=8\) to \(20\) for \(n=1\). In Table 2 we show the dependence of the three anomalous dimension exponents \(\eta,\eta_{\perp},\eta_{\parallel}\) on the Renyi index \(n\). With increasing \(n\), the three exponents saturate to the values (e.g., \(\eta=0.57(1)\), \(\eta_{\parallel}=1.27(4)\), \(\eta_{\perp}=0.77(3)\) at \(Q=L/4\)) which are distinct from their corresponding value in the monitored Haar-random circuit without the U(1) symmetry constraint presented in Ref. [30]. In Fig. 5(b) we show the variation of the exponent \(\eta\), with the global charge \(\mathcal{Q}\). The blue solid line with triangles shows that the \(\eta\) exponent (for \(n=\infty\)) remains same over different charge sectors (see appendix C for more details) but is different from value \(\eta_{H}=0.26(1)\) without the symmetry shown by blue dashed line. ### Probing the log-CFT In this section, we will study the non-unitary CFT that governs the MIPT in the dynamics of the circuit. We compute its critical properties ranging from the effective central charge, typical anomalous scaling dimension, and the higher cumulants to study the multi-fractal nature of the correlation functions at the transition. Our numerical results indicate that U(1) symmetry changes the universality class, yielding a distinct multi-fractal scaling at the entanglement transition. Figure 4: Depiction of geometries for the pair of ancilla qubits used in the computation of \(\eta\). Respectively, (a) \(\eta\), (b) \(\eta_{\perp}\), and (c) \(\eta_{\parallel}\). Figure 5: **Probing \(\eta\)**. Mutual information \(C(t-t_{0})\) between two ancilla qubits coupled to the monitored U(1) symmetric circuit in the same charge sector \(\mathcal{Q}=Q/L\). (a) shows the scaling collapse of \(C(t-t_{0})\) following Eq. (6) for different system sizes from \(L=8\) to 20 at \(\mathcal{Q}=1/4\) for different geometries of the circuit: \(\eta=0.45(2)\) for periodic BC at \(|r_{1}-r_{2}|=L/2\), \(\eta_{\parallel}=1.09(4)\) for open BC at \(|r_{1}-r_{2}|=L\) and \(\eta_{\perp}=0.62(2)\) for open BC at \(|r_{1}-r_{2}|=L/2\). Geometries follow Fig. 4, and we use the von Neumann entropy for this plot. (b) shows the dependence of the bulk critical exponent \(\eta\) on the charge sector with fixed density \(\mathcal{Q}\) (at \(n\to\infty\)). \(\eta\) exponent remains stable across all \(\mathcal{Q}\) sectors by blue triangles. We also show the values \(\eta_{E}\) once we subtract the known BKT values \(\eta_{\#}=0.25\) by yellow circles. The blue dashed line show the value \(\eta_{H}=0.26(1)\) in absence of symmetry in the circuit obtained from Ref. [30]. #### iii.1.1 Free Energy The non-unitary time-evolution of a monitored random circuit can be described by the transfer-matrix method within the paradigm of statistical mechanics [13; 17; 22]. At late time, the dynamics is governed by the leading Lyapunov exponents \(\lambda_{0},\lambda_{1},\dots\) of the transfer matrix. 
Using the conformal invariance at the entanglement transition point, the circuit dynamics can be mapped to \((1+1)d\) non-unitary conformal field theory characterized by a universal number called the effective central charge \(c_{\rm eff}\). It can be related to the free-energy density \(f=F/A\) obtained from the leading Lyapunov exponent \(\lambda_{0}\) as [17], \[\frac{F(L)}{A}=-\frac{\lambda_{0}}{\alpha L}=f(L\to\infty)-\pi\frac{c_{\rm eff }}{6L^{2}}+\dots, \tag{7}\] where \(A=\alpha Lt\) is the area of the cylinder of the CFT defined from a circuit of spatial and temporal length \(L\) and \(t\) respectively with a space-time anisotropy factor \(\alpha\) to be computed numerically (see Appendix D for details). Due to the conservation law, the nature of the initial state has a strong dependence on free energy. As \(F=-\ln Z\) where \(Z\) is the partition function of the statistical mechanics model [17] that involves a trace over all states, we begin by considering initial states that are equal superpositions over all charge sectors and discuss the effects of projecting into a given charge sector afterward. In the quantum circuit, each trajectory is defined by a particular initial condition, a set of unitary gates and a set of measurement outcomes indexed by \(\vec{m}\). \(F\) is calculated from the Born probabilities, \(p_{\vec{m}}\) of each measurement outcome in a trajectory, \(F=-\sum_{\vec{m}}p_{\vec{m}}\ln p_{\vec{m}}\) and then finally averaging over random initial conditions, unitary gates, and measurement outcomes [17]. We first numerically compute the free energy density at \(p=p_{c}(\mathcal{Q}=1/2)\) choosing the initial conditions (a product initial state or a Haar initial state) randomly with unconstrained total charge. As time evolves, we calculate free energy in each trajectory from the Born probabilities of the measurement events. Averaging over all such trajectories at a long time gives the total free energy \(F(t)\) from all accessible trajectories evolved under the constrained dynamics. \(F(t)\) grows linearly with time \(t\) at late times with the slope \(\lambda_{0}\) (we waited till time \(t\sim 6L\) to reach the steady state and extract \(\lambda_{0}\)). The corresponding free energy density \(f(L)\) decays as \(1/L^{2}\) (consistent with the CFT prediction) as shown in Fig. 6(a) by red circles. We compute the slope of this curve \(m_{0}(L_{\rm min})\) by systematically eliminating the smallest system sizes and considering data only from \(L=L_{\rm min}\) to \(L=18\). These fits are shown by dashed blue lines where the darker color corresponds to larger \(L_{\rm min}\). This yields \(m_{0}(L_{\rm min})=m_{0}(\infty)+b/L_{\rm min}^{2}\). The effective central charge is computed from \(c_{\rm eff}=-\delta m_{0}(\infty)/\pi\). Our numerics predicts \(c_{\rm eff}=1.27(1)\) which is significantly different from the result \(c_{\rm eff}=0.25(3)\) without U(1) symmetry in the circuit [17]. We now bin the free energy into each charge sector \(F^{Q}\) that it is projected into in the late time limit (\(t\gg L\)). We see that the probability that a given state ends up in a state with total charge \(Q\) is precisely given by the binomial distribution \(P(Q,L)=\binom{L}{Q}/2^{L}\). We plot it as \(P(\mathcal{Q},L)=L\times P(Q,L)\) vs \(\mathcal{Q}\) in Fig. 6(b),. Importantly, this shows that in the thermodynamic limit average quantities over initial states that consider all conserving sectors eventually converge to the middle sector. 
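The binomial sector statistics quoted above are easy to verify directly; the following self-contained snippet evaluates \(P(Q,L)=\binom{L}{Q}/2^{L}\) and its rescaled form \(L\,P(Q,L)\), whose peak sharpens around \(\mathcal{Q}=1/2\) with width \(\sim 1/\sqrt{L}\).

```python
import numpy as np
from math import comb

def sector_distribution(L):
    # P(Q, L) = binom(L, Q) / 2**L, returned as (charge density Q/L, L * P(Q, L))
    Q = np.arange(L + 1)
    P = np.array([comb(L, q) for q in Q]) / 2.0 ** L
    return Q / L, L * P

for L in (8, 12, 18):
    q_density, P_rescaled = sector_distribution(L)
    # The rescaled distribution is peaked at Q/L = 1/2 and narrows as L grows,
    # so sector-unrestricted averages are dominated by the middle sector.
    print(L, q_density[np.argmax(P_rescaled)], P_rescaled.max())
```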
To relate the free energy over all sectors (i.e., the grand potential) and the free energy in the middle sector (i.e., the Gibbs free energy) requires introducing a chemical potential \(\mu\), such that \(F^{\mathcal{Q}}=F+\mu(Qt)\), where \(Qt\) is the total charge in space-time. This implies that in addition to the \(1/L^{2}\) dependence expected in Eq. (7) there is a contribution from \(\mu~{}\sim L^{-z}=1/L\) to all of the Lyapunov exponents in a fixed sector. Focusing on \(f^{\mathcal{Q}}=F^{Q}/(\alpha Lt)\) for \(\mathcal{Q}=1/2\) as shown in Fig. 6(a) open orange squares, we find that the data is well fit using the modified scaling form \[f^{\mathcal{Q}}=f^{\mathcal{Q}}(L\to\infty)+a\mathcal{Q}/L-\pi\frac{c_{\rm eff }}{6L^{2}}, \tag{8}\] but with the same effective central charge, which provides evidence for our relation between the two free energies. We show this fit by green dashed lines for three different values of \(L_{\rm min}=8,10\) and \(12\) where the darker color corresponds to larger \(L_{\rm min}\), by fitting the data from \(L=L_{\rm min}\) to \(L=18\). Moreover, in the thermodynamic limit (\(L\to\infty\)), the Gibbs free energy (\(f^{\mathcal{Q}}(L\to\infty)=0.118\)) is approaching the free energy over all sectors (\(f(L\to\infty)=0.119(1)\)), allowing us to conclude that \(f^{\mathcal{Q}}(L\to\infty)=f(L\to\infty)\) as expected. Lastly, the solid green triangles in Fig. 6(a) show the data when the system is initialized in \(\mathcal{Q}=1/2\) sector and hence the circuit is restricted to \(\mathcal{Q}=1/2\) throughout its evolution. These data points closely overlap with \(f^{\mathcal{Q}=1/2}\) where the trajectories are binned to the middle sector at late times, establishing that the time-evolution is ergodic as we expect. #### iii.1.2 Leading Scaling Dimension We now turn to probing the scaling dimensions at the critical point. To do this, we compute the next leading Lyapunov exponent \(\lambda_{1}\) which is related to the higher generalized free energy as \(f_{1}=F_{1}/A=\lambda_{1}/\alpha L\). Following the procedure \begin{table} \begin{tabular}{c c c|c c|c c} \hline n & \multicolumn{2}{c|}{\(\eta\)} & \multicolumn{2}{c|}{\(\eta_{\parallel}\)} & \multicolumn{2}{c}{\(\eta_{\perp}\)} \\ \hline & \(\mathcal{Q}=1/2\) & \(1/4\) & \(1/2\) & \(1/4\) & \(1/2\) & \(1/4\) \\ 1 & 0.46(2) & 0.45(2) & 0.97(2) & 1.09(4) & 0.61(1) & 0.62(2) \\ 2 & 0.55(3) & 0.54(1) & 1.13(3) & 1.27(4) & 0.72(3) & 0.71(3) \\ 5 & 0.57(3) & 0.57(1) & 1.17(3) & 1.27(3) & 0.75(2) & 0.77(3) \\ \(\infty\) & 0.57(3) & 0.57(1) & 1.17(3) & 1.27(4) & 0.76(2) & 0.77(3) \\ No-symmetry & 0.26(1) & \multicolumn{2}{c|}{0.49(2)} & \multicolumn{2}{c}{0.34(1)} \\ \hline \end{tabular} \end{table} Table 2: We tabulate the anomalous scaling dimension exponents \(\eta\) together with the surface critical exponents \(\eta_{\parallel}\) and \(\eta_{\perp}\) using open BC with \(|r_{1}-r_{2}|=L\) and \(|r_{1}-r_{2}|=L/2\) respectively for two values of \(\mathcal{Q}=1/2\) and \(1/4\). For Renyi indices \(n>1\), all the exponents converge to values that are distinct from those without any symmetry in the circuit. prescribed in Ref.[17] to compute \(\lambda_{1}\) (in addition to \(\lambda_{0}\)), we create a pair of orthogonal states \(|v_{0}\rangle\) and \(|v_{1}\rangle\). 
As time evolves, in each time-step of a trajectory, \(|v_{0}\rangle\) and \(|v_{1}\rangle\) are subjected to identical \(U\) gates and measurements (locations and at which state it will be projected to) governed by the Born probabilities \(p_{\vec{m}}\) of \(|v_{0}\rangle\). After each time-step, the pair of vectors need to be reorthogonalized. \(F_{1}\) is calculated from the Born probabilities \(\tilde{p}_{\vec{m}}\) of each measurement outcomes in the evolution of \(|v_{1}\rangle\), \(F_{1}=\sum_{\vec{m}}\tilde{p}_{\vec{m}}\ln\tilde{p}_{\vec{m}}\). The difference between the two leading Lyapunov exponents grows as \(1/L^{2}\), \[f_{1}(L)-f(L)=\frac{\lambda_{1}-\lambda_{0}}{\alpha L}=\frac{2\pi x_{1}^{\rm typ }}{L^{2}}, \tag{9}\] shown in Fig. 6(c) by the red circles. This is used to calculate the scaling dimension of the most relevant operator \(x_{1}^{\rm typ}\). We numerically estimate \(x_{1}^{\rm typ}=0.27(2)\) (obtained from the slope \(m_{1}(L_{min})\) shown in the inset by eliminating the smallest \(L\)s in the fit shown by dashed blue lines in Fig. 6(c)). \(x_{1}^{\rm typ}\) is related to the decay of the bulk correlation function through the exponent \(\eta=2x_{1}^{\rm typ}=0.55(3)\). This gives an independent estimate of \(\eta\) from CFT and importantly matches with our previous result from the order-parameter correlation function shown in Fig. 5(b) (solid blue line with triangles). Lastly, we considered probing the free energy differences after they are projected into the middle sector \(f_{1}^{\mathcal{Q}=1/2}(L)-f(L)^{\mathcal{Q}=1/2}\) shown in Fig. 6(c) by orange squares and find that the chemical potential contribution, \(\mu\) in Eq. (8), cancels precisely leaving us with an additional estimate of \(\eta=2x_{1}^{\rm typ}=0.51(4)\) (using the same fitting form given in Eq. (9)) that is in good agreement with that obtained from \(f_{1}(L)-f(L)\) with \(\eta=2x_{1}^{\rm typ}=0.55(3)\) sampled over all charge sectors (see appendix E for more details). #### iii.2.3 Multi-fractality In this subsection, we will probe the multifractal properties of the critical correlation function at the entanglement transition. We know that at a generic critical point, where the system is scale-invariant, all moments of the correlation function vanish as a power law, \[\mathcal{E}[C^{n}(r)]\sim\frac{B_{n}}{r^{2x_{1}(n)}}, \tag{10}\] where \(\mathcal{E}[\dots]\) denotes averaging over different random samples. If the scaling exponent \(x_{1}(n)\) of the \(n\)th moment is a linear function of \(n\), then the system is self-averaging, otherwise, it shows a multifractal scaling of the correlators. \(x_{1}(n)\) can be obtained from the cumulant expansion, \[\log\mathcal{E}[C^{n}(r)]= n\mathcal{E}[\log C(r)]\] \[+\frac{n^{2}}{2!}\mathcal{E}[\{\log C(r)-\mathcal{E}[\log C(r)]\} ^{2}]+\dots,\] which yields, \[x_{1}(n)=nx_{1}^{\rm typ}+\frac{n^{2}}{2!}x_{1}^{(2)}+\dots. \tag{12}\] Figure 6: **Scaling of the Free energy of the log-CFT**. At \(p=p_{c}(\mathcal{Q}=1/2)=0.11\) of the monitored Haar-random circuit with U(1) symmetry, both (a) free energy density \(f(L)\) and (c) difference between generalized free energies, \(f_{1}(L)-f(L)\) show a \(1/L^{2}\) scaling with system size. The slope of the former gives the effective central charge \(c_{\rm eff}=1.27(1)\), while that of the latter gives the scaling dimension of the most relevant operator \(x_{1}^{\rm typ}=2\eta=0.27(2)\) matching with that shown Fig. 5. 
(b) shows the rescaled probability distribution \(P(\mathcal{Q},L)=L\times P(Q,L)\) vs \(\mathcal{Q}\) that an initial state with unrestricted global charge ends up in sector \(\mathcal{Q}\) at late time (\(t\gtrsim 4L\)), which follows a binomial distribution dominated by the middle sector in \(L\to\infty\) limit. The lines are not fits but simply plots of the binomial distribution. The initial conditions in (a) and (c) are randomly chosen from all \(\mathcal{Q}\) sectors, except the orange squares in (a) and (c) respectively are the results selecting only those trajectories with \(Q=L/2\) charge in the steady state. The green triangles in (a) correspond to the trajectories initialized and hence remained in \(\mathcal{Q}=1/2\) sector throughout the evolution. Here, the coefficient of the linear term \(x_{1}^{\rm typ}\) governs the decay of the average typical correlation function and finite higher cumulants \(x_{1}^{(n)}\) represent the multifractal nature of the correlator. We computed \(x_{1}^{\rm typ}\) from the difference between the average (over different trajectories) Lyapunov exponents \(\lambda_{1}\) and \(\lambda_{0}\) in the previous subsection. Here we compute the 2nd cumulant \(k_{2}\) defined by, \[k_{2}=\frac{\mathcal{E}[\left(\lambda_{1}^{\vec{m}}-\lambda_{0}^{\vec{m}} \right)^{2}]-\left(\mathcal{E}[\lambda_{1}^{\vec{m}}-\lambda_{0}^{\vec{m}}] \right)^{2}}{\alpha L}=2\pi\frac{x_{1}^{(2)}}{L^{2}}. \tag{13}\] Here, \(\lambda_{0}^{\vec{m}}\) and \(\lambda_{1}^{\vec{m}}\) are the Lyapunov exponents from each trajectory \(\vec{m}\) in the circuit. We numerically estimate \(x_{1}^{(2)}=0.65(2)\) from \(k_{2}(L)\) vs \(1/L^{2}\) shown in Fig. 7(a). A large value of \(x_{1}^{(2)}\) suggests the presence of strong multifractality. This evidence is further verified by collapsing the distribution of \(Y(t)=\alpha t(\lambda_{1}^{\vec{m}}-\lambda_{0}^{\vec{m}})\) onto the universal curve \(H(s)\) which, satisfies (up to a universal prefactor) [17; 19; 31], \[P[Y(t)]\sim\sqrt{\left(\frac{L}{2\pi\alpha t}\right)}\exp\left[-\frac{2\pi \alpha t}{L}H\left(\frac{2\pi\alpha t}{L}Y(t)\right)\right]. \tag{14}\] Fig. 7(b) shows the data collapse onto a single curve \(H(s)\) vs \(s\) for different values of \(L\) and \(t\) (\(L=8\) to \(18\) at \(t=16L\) by red lines and \(t=11L\) to \(27L\) for \(L=18\) by blue lines). This data collapse of \(P[Y(t)]\) using a single curve \(H(s)\) establishes multifractal scaling of correlation functions. \(H(s)\) shows a minimum at \(s=s_{\rm min}\) given by, \[s_{\rm min}=\frac{dx_{1}(n)}{dn}|_{n=0}=x_{1}^{\rm typ}, \tag{15}\] In Fig. 7(b), we shifted the x-axis to \(s-s_{\rm min}\) where \(H(s)\) shows a minimum near the typical scaling dimension \(s_{\rm min}=x_{1}^{\rm typ}=0.27(2)\). The broadening of \(H(s)\) is set by the variance \(x_{1}^{(2)}=0.65(2)\). We compare the multifractal spectrum \(H(s)\) that we have obtained with the MIPT without any symmetry in Fig. 7 (b) (shown as a black dashed line). After shifting each curve by the location of the minimum in \(H(s)\), \(s_{\rm min}\), we are able to compare their shapes, which demonstrates that the higher average moments all differ as well. Thus providing strong evidence that the U(1) symmetry has modified the universality class of the MIPT. ## VI Critical properties of the sharpening transition Motivated by the description of the charge sharpening transition in the infinite Hilbert space limit we analyze the sharpening transition in terms of observables that were shown to exhibit BKT scaling [26]. 
In the large Hilbert space limit this is facilitated by mapping onto a weakly entangled model that can be simulated at large sizes with matrix product states. Here, however, we do not have such a mapping for qubit chains and are restricted to small system sizes where clearly identifying a BKT scaling unambiguously is close to impossible. Instead, here our goal is to ascertain if the BKT sharpening diagnostics are both consistent with our numerical results on small sizes and look critical in a window around the sharpening transition to see how it could affect the computed properties of the MIPT. Therefore, we turn to numerically probe the charge-sharpening transition based on our finite-size numerics following Ref.[26]. We first compute the average spin-spin connected correlation function, \[C_{z}(r)=\mathcal{E}\left[\langle\sigma_{z}^{z}\sigma_{0}^{z}\rangle-\langle \sigma_{r}^{z}\rangle\langle\sigma_{0}^{z}\rangle\right]. \tag{16}\] Figure 7: **Multifractal scaling of the log-CFT**. Evidences of multifractality in (a) the 2nd cumulant \(k_{2}\) vs \(1/L^{2}\) giving a large \(x_{1}^{(2)}=0.65(2)\) and (b) collapse of distribution of \(Y(t)\) onto a single universal curve \(H(s)\) for different values of \(L\) and \(t\) (\(L=8\) to \(18\) at \(t=16L\) by red lines and \(t=11L\) to \(27L\) for \(L=18\) by blue lines). The universal curve \(H(s)\) without U(1) symmetry, extracted from Ref. [17], is shown by the black dashed curve. The \(x\)-axis is plotted as \(s-s_{\rm min}\) where the minimal value \(s_{\rm min}=0.27\) for U(1) symmetric case and \(s_{\rm min}=0.14\) without U(1) symmetry. The initial conditions are randomly chosen from all \(\mathcal{Q}\) sectors. Here, \(\langle\dots\rangle\) denotes the expectation value of the operator in the quantum state of a trajectory and \(\mathcal{E}[\dots]\) denotes averaging over different trajectories constituting the quantum circuit. The correlation function is expected [26] to go like \(C_{z}(r)\sim\rho_{s}/r^{2}\) for \(p\leq p_{\#}\) where \(\rho_{s}\) is the superfluid stiffness and decay exponentially for \(p>p_{\#}\). We check for this behavior by numerically calculating \(C_{z}(r=L/2)\) for different system sizes \(L=8\) to \(22\) starting from initial conditions in \(\mathcal{Q}=1/2\) sector. Fig. 8(a) shows \(C_{z}(r)\) vs \(r\) in a log-log plot for different values of \(0.06\leq p\leq 0.24\). For small values of \(p\lesssim 0.14\), \(C_{z}(r)\) shows a power-law decay while for larger values of \(p\sim 0.24\), \(C_{z}(r)\) decays faster than an algebraic decay and resembles an exponential decay. A power-law fit \(C_{z}(r)\approx 1/r^{e_{z}}\) from the largest \(4\) system sizes is shown by dashed lines and the slope \(e_{z}\) vs \(p\) is shown in the inset. The power-law function fits the data well for smaller \(p\) values indicated by smaller error bars, while for larger values of \(p\), the error-bar increases and the algebraic fit breaks down. Moreover, \(e_{z}\approx 2\) over a range of values, \(p\lesssim 0.14\) showing signatures of the BKT prediction of a critical charge fuzzy phase and a critical regime extending into the vicinity of our estimate of the MIPT. Hence, this analysis strongly suggests that the critical fluctuations of the charge sharpening transition still look critical at the measurement-induced transition \(p_{c}\) on our available system sizes. 
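The power-law fits quoted above amount to a linear regression in log-log space restricted to the largest available sizes; a minimal sketch with our own variable names reads:

```python
import numpy as np

# Sketch of the power-law fit C_z(r) ~ A / r**e_z: a straight-line fit in
# log-log space over the largest separations r = L/2.  `r` and `C_z` are
# assumed to be arrays of separations and trajectory-averaged connected
# correlators, one entry per system size.
def powerlaw_exponent(r, C_z, n_largest=4):
    r, C_z = np.asarray(r, float), np.asarray(C_z, float)
    keep = np.argsort(r)[-n_largest:]                 # largest system sizes only
    slope, log_A = np.polyfit(np.log(r[keep]), np.log(C_z[keep]), 1)
    return -slope, np.exp(log_A)                      # (e_z, prefactor A)
```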
Next, we consider the string correlation function \(C_{W}(r)\)[26] that is defined in terms of the string operator \(W_{[0,r]}\), namely, \[W_{[0,r]}=\prod_{0<i<r}\sigma_{i}^{z},\] \[C_{W}(r)=\mathcal{E}\left[\langle W_{[0,r]}\rangle^{2}\right]. \tag{17}\] \(C_{W}(r)\) is expected to show a power-law decay in the charge fuzzy phase \(C_{W}(r)\sim 1/|r|^{2\pi\rho_{s}}\) and become independent of \(r\) above the sharpening transition. We calculate the string correlator \(C_{W}(r=L/2)\) for \(L=8\) to \(22\) at different values of \(p\) shown in Fig. 8(b). \(C_{W}(r)\) shows a nice power-law decay for small \(p\) values and becomes a constant for larger \(p\) values. Hence, this numerical result is qualitatively in good agreement with the expectation from the BKT prediction. A power law fit \(C_{W}(r)\sim 1/|r|^{\varepsilon_{W}}\) (shown by the dashed lines) yields the slope \(e_{W}\) which is shown as a function of \(p\) in the inset. The fitted power-law \(e_{W}\) monotonically decreases with increasing \(p\) as expected in any finite-size analysis. This prediction also suggests that these string correlations remain critical up to our estimate of the entanglement transition as they appear within a given sector. In order to provide an additional test of the BKT transition, we introduce a finite size scaling function for the stiffness \(\tilde{\rho}_{s}(p,L)\). From the expectation \(C_{W}(r)\sim 1/|r|^{2\pi\rho_{s}}\) we construct, \[\tilde{\rho}_{s}(p,L)=-\frac{\log C_{W}(L/2)}{2\pi\log(L/2)}, \tag{18}\] which shows a crossing for different system sizes \(L=8\) to \(20\) as a function of \(p\) as depicted in Fig. 8(c). From the crossing we estimate the location of the sharpening transition \(p_{\#}\) and perform a finite-size BKT scaling analysis from the ansatz, \[\tilde{\rho}_{s}\sim h_{s}[(p-p_{\#})(\log L/a)^{2}], \tag{19}\] where \(h_{s}\) is an unkown scaling function with \(p_{\#}\) and \(a\) as the fitting parameter shown in the inset. Our analysis gives \(p_{\#}=0.093(7)\), which is very close to our previous estimates of the sharpening transition (e.g. in Fig. 1). Moreover, the value of the stiffness is expected to undergo a quantized jump if the sharpening transition is of the predicted BKT type of Figure 8: **Charge sharpening from correlations**: (a) shows the spin-spin connected correlation function \(C_{z}(r)\) vs \(r\) for different \(p\) values in a log-log plot. The decay is power-law for \(p\lesssim 0.14\). For large \(p\), the decay becomes faster than an algebraic decay. The inset shows the power-law slope, \(e_{z}\) which remains \(2\) for \(p\lesssim 0.14\). (b) shows the power-law decay of the string correlator \(C_{W}(r)\sim 1/|r|^{\varepsilon_{W}}\) for smaller values of \(p\) in a log-log plot. For large \(p\), \(C_{W}(r)\) becomes a constant with \(e_{W}\to 0\) as shown in the inset. These features are consistent with a BKT criticality. (c) shows a finite-size estimate of \(\rho_{s}\) extracted from \(C_{W}\) with \(p\) for different \(L\). A finite-size BKT scaling collapse following Eq.19, shown in the inset as a function of \(p_{BKT}=(p-p_{\#})(\log L/a)^{2}\), yields \(p_{\#}=0.093(7)\). Last, we note that the stiffness \(\tilde{\rho}_{s}\) appears to cross close to where the predicted universal value at the transition ought to be \(\rho_{s}=\frac{1}{\pi}\)[26]. size \(1/\pi\)[26]. 
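For reference, the stiffness estimator of Eq. (18) and the BKT scaling variable of Eq. (19) take one line each in the sketch below (our own variable names); the collapse itself can reuse the same cost-minimization procedure sketched for Eq. (4), with \((p-p_{\#})(\log L/a)^{2}\) replacing \((p-p_{c})L^{1/\nu}\).

```python
import numpy as np

# Illustrative helpers for Eqs. (18)-(19); C_W_half is the trajectory-averaged
# string correlator at separation r = L/2.
def stiffness(C_W_half, L):
    return -np.log(C_W_half) / (2.0 * np.pi * np.log(L / 2.0))

def bkt_scaling_variable(p, L, p_sharp, a):
    return (p - p_sharp) * np.log(L / a) ** 2
```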
Based on our estimate of the crossing in \(\tilde{\rho}_{s}\) at \(p_{\#}\) we find good agreement with the stiffness jumping by \(1/\pi\) within the error bars of our estimate of the sharpening transition. All of this together provides strong evidence that the sharpening transition in qubit chains is of the BKT type. In the following subsection we discuss a possible interpretation of the entanglement critical properties that we have found with values that are anomalously large relative to measurement-induced criticality without any symmetry. Following the perspective that our finite size estimates of the critical properties are going to "feel" the sharpening transition on finite size simulations, due to its close proximity as depicted in the finite size cross-over diagram in Fig. 2, we are able to provide a clear explanation of our results as being shifted by the critical properties of a BKT sharpening transition. ## VII Interpretation of the estimated critical entanglement properties Our numerical results presented in Sec.IV suggest that the two critical points, the charge-sharpening and the entanglement transition, are approaching to each other as we go to \(\mathcal{Q}\) sectors away from half-filling. Hence, the entanglement critical properties, which are of our main interests in the present work, may be influenced by the nearby charge-sharpening transition. To ascertain this more carefully, in the previous section we showed that on our finite size simulation the charge sharpening correlations remain critical at the entanglement transition. A scenario for how this fits into the finite size scaling regime we access is depicted in the schematic diagram for the two transitions in Fig. 2. Here we pose a plausible reinterpretation of our above estimates of the critical exponents considering the nearby sharpening criticality. First, we discuss if the presence of the nearby sharpening transition affects our estimate of the exponent \(\eta\). Due to the close proximity of the charge-sharpening transition with the entanglement transition and the fact that the sharpening fluctuations can be critical within a given sector, the scaling dimension we pick up should come from the combination of the measurement-induced transition and the sharpening transition, i.e. \(\eta=\eta_{\#}+\eta_{E}\). Here \(\eta_{\#}=1/4\) is assumed for the BKT charge sharpening transition and \(\eta_{E}\) is the critical exponent of the U(1) symmetric entanglement transition once we subtracted the BKT values from the finite-size estimation \(\eta\). As such, this would provide the estimate of \(\eta_{E}=0.32(3)\), which is close to \(\eta_{H}=0.26(1)\) from the Haar random problem without symmetry. Second, we turn to our estimate of the effective central charge \(c_{\rm eff}\). The BKT sharpening transition is also a CFT (but it is not multifractal) and has a well-defined central charge \(c_{\#}=1\). Our analysis of the Free energy will then acquire a shift from these critical modes and we therefore expect that our estimate of the effective central charge is offset \(c_{\rm eff}=c_{\#}+c_{\rm eff,E}\), which yields a \(c_{\rm E,eff}=0.27(1)\) that is quite close to the transition without symmetry \(c_{\rm eff,H}=0.25(2)\). Thus, the interpretation of our estimate of the effective central charge being contaminated the nearby sharpening transition provides a natural explanation of our large estimate. Conversely, it also provides additional evidence that the sharpening transition is indeed a BKT transition. 
Last, we turn to the multifractal properties of the transition. As the BKT transition is not multifractal, its contribution to \(x_{1}(n)\) in Eq. (12) is linear in \(n\). Combining this with Eq. (12), we see that the only possible effect of the nearby BKT transition on the multifractal scaling dimension is to shift the location of the minimum of \(H(s)\) to \(s_{\rm min}=\frac{\eta_{E}+\eta_{\#}}{2}\). Using our numerical estimate of \(s_{\rm min}\approx 0.27(2)\) from Fig. 6, we get an estimate \(\eta_{E}=0.3(3)\). This is consistent with our understanding from section VI. Moreover, all the higher cumulants, i.e., \(x_{1}^{(n)}\), are unaffected by the BKT criticality and hence solely characterize the entanglement universality class. Our observation of \(x_{1}^{(2)}=0.65(2)\) being significantly different from that of the MIPT without symmetry, \(x_{1}^{(2)}=0.15(2)\), thus strongly suggests that the U(1) MIPT is characterized by a unique multifractal log-CFT that is distinct from its non-symmetric counterpart.

## VIII Conclusion

In conclusion, this study has provided a comprehensive analysis of the universality class of the measurement-induced phase transition (MIPT) in U(1) conserving hybrid quantum circuits. Our sector-resolved approach enabled us to investigate charges far from the middle sector and access system sizes up to \(L=48\) for a conserved charge density of \(\mathcal{Q}=Q/L=1/12\). Our findings demonstrate that each sector exhibits Lorentz invariance with a phase boundary that follows the variance of the binomial distribution \(p_{c}(\mathcal{Q})\propto\mathcal{Q}(1-\mathcal{Q})\). The MIPT is characterized by a logarithmic conformal field theory, which we quantified through the examination of the transfer matrix and its Lyapunov spectrum governing the quantum evolution. The effective central charge (\(c_{\rm eff}\)), the anomalous scaling dimension (\(\eta\)), and the multifractal scaling of correlation functions in the conformal field theory were all found to be distinctly different from the MIPT without a conservation law. We then analyzed the charge-sharpening physics and found it to be consistent with a BKT transition, though our small-system-size study is not conclusive. Importantly, we also found that even within a single charge sector, critical sharpening physics can still appear in the dynamics of correlation functions averaged over measurement outcomes, and these still look critical up to the MIPT. Thus, we provided a natural interpretation of our results as being shifted by BKT-related critical exponents. A separate scenario, which seems less likely, is that the sharpening transition and the entanglement transition coincide. However, this is at odds with controlled calculations in the limit of an infinite onsite Hilbert space dimension, and we expect this can be resolved in future studies that find models that "pull apart" the locations of the sharpening and entanglement transitions. Nonetheless, regardless of this, by studying the multifractal spectrum of the log-CFT at the entanglement transition our work offers strong numerical evidence that the U(1) conservation law alters the universal nature of the MIPT.

###### Acknowledgements. We thank Srivatsan Chakram, David Huse, and Matthew Fisher for insightful discussions as well as Utkarsh Agrawal, Sarang Gopalakrishnan, Andrew Potter, and Romain Vasseur for discussions and collaborations on related work.
This work was partially supported by the Abrahams Postdoctoral Fellowship at the Center for Materials Theory Rutgers (A.C.), the Army Research Office Grant No. 79849-PE-H (J.H.P.) and a Sloan Research Fellowship (J.H.P.). J.H.W. acknowledges financial support from NSF grant DMR-2238895. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452 (J.H.W., J.H.P.). The computations were performed using the Beowulf cluster at the Department of Physics and Astronomy of Rutgers University; we also acknowledge the Office of Advanced Research Computing (OARC) at Rutgers, The State University of New Jersey ([http://oarc.rutgers.edu](http://oarc.rutgers.edu)), for providing access to the Amarel cluster. The Flatiron Institute is a division of the Simons Foundation.

## Appendix A Critical exponents from power-law scaling of the entanglement order parameter

To probe the entanglement critical point, we use a "single-site ancilla" method to calculate the order parameter \(S_{E}\). After coupling the ancilla at \(t=0\) in a fixed charge sector \(\mathcal{Q}=Q/L\), we evolve the U(1)-symmetric circuit with measurements up to \(t=2L\) to reach the steady state. We then compute the entanglement entropy \(S_{E}\) of the ancilla for different measurement rates \(p\) and different system sizes \(L\) commensurate with \(Q\). Fig. 9(a)-(c) shows the variation of \(S_{E}\) with \(p\) in three different charge sectors (for \(n=1\)), \(\mathcal{Q}=1/2,1/4\) and \(1/8\). We have shown \(S_{E}\) vs \(p\) in the main text for \(\mathcal{Q}=1/12\). In each case, with increasing \(p\), \(S_{E}\) decreases from a volume-law phase to an area-law phase, crossing at \(p=p_{c}\) between different \(L\)s. The critical point is obtained using a finite-size scaling analysis with two different protocols: (i) using the standard finite-size scaling ansatz for a second-order transition, \[S_{E}(L;\mathcal{Q})\sim\tilde{h}_{\mathcal{Q}}[(p-p_{c}(\mathcal{Q}))L^{1/\nu(\mathcal{Q})}] \tag{10}\] and collapsing the data onto a universal curve \(\tilde{h}_{\mathcal{Q}}\) with two parameters \(p_{c}(\mathcal{Q})\) and \(\nu(\mathcal{Q})\) in each \(\mathcal{Q}\) sector. (ii) We use the ansatz proposed in Ref. [25], \[S_{E}(L;\mathcal{Q})\sim\tilde{h}_{\mathcal{Q}}[(p-p_{c}(\mathcal{Q}))L^{1/\nu(\mathcal{Q})},y(\mathcal{Q})L^{-\omega(\mathcal{Q})}], \tag{11}\] where the leading irrelevant scaling variable \(y\) is incorporated to take into account the shift of the crossing points with increasing system size. \(\tilde{h}_{\mathcal{Q}}\) can be expanded in a Taylor series around \(p=p_{c}\) as \[S_{E}(L;\mathcal{Q})=a+b(p-p_{c}(\mathcal{Q}))L^{1/\nu(\mathcal{Q})}+c(p-p_{c}(\mathcal{Q}))^{2}L^{2/\nu(\mathcal{Q})}+d/L^{\omega(\mathcal{Q})}. \tag{12}\] We numerically find the exponents \(p_{c}(\mathcal{Q})\) and \(\nu(\mathcal{Q})\) and the fitting parameters \(a,b,c,d\) (the \(\mathcal{Q}\) dependence is omitted for notational simplicity) by non-linear fitting from the data. The values of \(p_{c}(\mathcal{Q})\) and \(\nu(\mathcal{Q})\) shown in the main text in Figs. 1(c) and 3(b), as well as in the insets of Fig. 9, are consistent with both scaling ansätze within error bars.

Figure 9: Entanglement entropy of the ancilla \(S_{E}\) vs \(p\) at (a) \(Q=L/2\) for \(L=8\) to \(24\), (b) \(Q=L/4\) for \(L=8\) to \(24\) and (c) \(Q=L/8\) for \(L=8\) to \(40\). \(S_{E}\) is collapsed with the scaling form Eq. (10) (and is also consistent with Eq. (12)) to obtain \(p_{c}(\mathcal{Q})\) and \(\nu(\mathcal{Q})\), shown in the insets.
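As an illustration of how the expansion in Eq. (12) can be fit in practice, the sketch below uses SciPy's `curve_fit` to extract \(p_{c}\) and \(\nu\) from \((p,L,S_{E})\) data at a fixed trial value of the irrelevant exponent \(\omega\); the arrays here are synthetic placeholders rather than our circuit data.

```python
import numpy as np
from scipy.optimize import curve_fit

omega = 1.0  # trial value of the irrelevant exponent (placeholder)

def fss_ansatz(X, p_c, nu, a, b, c, d):
    """Taylor-expanded scaling form of Eq. (12):
    S_E = a + b*w + c*w**2 + d / L**omega, with w = (p - p_c) * L**(1/nu)."""
    p, L = X
    w = (p - p_c) * L ** (1.0 / nu)
    return a + b * w + c * w ** 2 + d / L ** omega

# Synthetic placeholder data standing in for the ancilla-entropy crossings.
rng = np.random.default_rng(0)
p = np.tile(np.linspace(0.08, 0.14, 7), 4)
L = np.repeat(np.array([12.0, 16.0, 20.0, 24.0]), 7)
S_E = fss_ansatz((p, L), 0.11, 1.3, 0.5, -1.0, 0.2, 0.3) + 0.01 * rng.normal(size=p.size)

popt, _ = curve_fit(fss_ansatz, (p, L), S_E, p0=[0.10, 1.0, 0.5, -1.0, 0.0, 0.1])
print(f"p_c = {popt[0]:.3f}, nu = {popt[1]:.2f}")
```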
We next show how we extract the dynamical critical exponent \(z\). We collapse \(S_{E}\) with \(t/L^{z(\mathcal{Q})}\) for different \(L\) in a fixed \(\mathcal{Q}\) sector. Fig. 10(a)-(c) show the data collapse for \(\mathcal{Q}=1/2,1/4\) and \(1/8\) (\(\mathcal{Q}=1/12\) is shown in the main text). This shows \(z\approx 1\) across \(\mathcal{Q}\) sectors, as shown in Fig. 2(c) of the main text. This justifies our choice of the aspect ratio \(t/L=2\) (i.e. an \(\mathcal{O}(1)\) number) in finding \(p_{c}\) and \(\nu\).

Figure 10: Entanglement entropy of the ancilla \(S_{E}\) is collapsed with rescaled time \(t/L^{z(\mathcal{Q})}\) following Eq. (4) at (a) \(Q=L/2\) for \(L=8\) to \(24\), (b) \(Q=L/4\) for \(L=8\) to \(24\) and (c) \(Q=L/8\) for \(L=8\) to \(40\). This gives the dynamical exponent \(z\approx 1\) for all charge sectors.

## Appendix B BKT finite-size scaling analysis of the charge-sharpening phase transition

To understand the nature of the charge-sharpening phase transition in a quantum circuit system, we employed the ancilla method as outlined in Ref. [25]. This method involves coupling an ancilla to two distinct charge sectors, represented as \(\ket{\Psi}=\ket{\psi_{Q}}\ket{0}+\ket{\psi_{Q-1}}\ket{1}\), where \(\ket{\psi_{Q}}\) denotes a state within the charge sector \(\mathcal{Q}\), while \(\ket{1}\) and \(\ket{0}\) represent ancilla states. The order parameter \(S_{\#}\), the von Neumann entanglement entropy of the reduced density matrix \(\mathrm{Tr}_{\text{ancilla}}\ket{\Psi}\bra{\Psi}\), is used to probe the charge-sharpening phase transition. To compute the entropy \(S_{\#}\), we varied the measurement rate \(p\) and system sizes \(L\) commensurate with \(Q\). Fig. 11 illustrates the variation of \(S_{\#}\) with \(p\) for different charge sectors, \(\mathcal{Q}=1/2,1/4,1/6\) and \(1/12\). We used finite-size scaling laws of quantum criticality to understand the behavior near the charge-sharpening phase transition. For our analysis, we tested two possible types of transitions: a second-order phase transition and a BKT transition. For the former, the scaling ansatz matched that for the entanglement phase transition as seen in Eq. (10). On the other hand, for the BKT transition, we employed a different scaling ansatz: \[S_{\#}(L;\mathcal{Q})\sim g_{\mathcal{Q}}[(p-p_{\#}(\mathcal{Q}))(\log L/a)^{2},y_{\#}(\mathcal{Q})L^{-\omega_{\#}(\mathcal{Q})}]. \tag{12}\] Here, \(a\) is the ultraviolet scale of BKT criticality, \(y_{\#}\) is the amplitude of the subleading term with scaling dimension \(\omega_{\#}(\mathcal{Q})\), and \(L\) is the system size. The choice of logarithmic scaling in \(L\) is guided by the BKT scaling law for the correlation length, \(\xi/a\sim\exp(1/\sqrt{|p-p_{c}|})\). In the vicinity of the critical point, \(g_{\mathcal{Q}}\) can be perturbatively expanded, \[S_{\#}(L;\mathcal{Q})\sim a_{\#}+b_{\#}(p-p_{\#}(\mathcal{Q}))(\log L/a)^{2}+c_{\#}(p-p_{\#}(\mathcal{Q}))^{2}(\log L/a)^{4}+d_{\#}L^{-\omega(\mathcal{Q})}, \tag{13}\] where the fitting parameters are \(p_{\#}\), \(a\) and \(a_{\#},b_{\#},c_{\#},d_{\#}\). We tested a wide range of the exponent \(\omega_{\#}\) to establish the error bars.
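A companion sketch for the BKT case: the perturbative expansion of Eq. (13) can be fit in the same way as the second-order ansatz, now with the scaling variable \((p-p_{\#})(\log L/a)^{2}\). Here we fit \(\log a\) rather than \(a\) so the logarithm stays well defined during the optimization; \(\omega_{\#}\) is held at a trial value and the arrays are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

omega_sharp = 1.0  # trial value of the subleading exponent (placeholder)

def bkt_ansatz(X, p_sharp, log_a, a0, b0, c0, d0):
    """Perturbative BKT expansion of Eq. (13):
    S_# = a0 + b0*u + c0*u**2 + d0 / L**omega_sharp, u = (p - p_#) * (log L - log a)**2."""
    p, L = X
    u = (p - p_sharp) * (np.log(L) - log_a) ** 2
    return a0 + b0 * u + c0 * u ** 2 + d0 / L ** omega_sharp

# Synthetic placeholder data for the sharpening order parameter S_#.
rng = np.random.default_rng(1)
p = np.tile(np.linspace(0.06, 0.13, 8), 4)
L = np.repeat(np.array([12.0, 16.0, 20.0, 24.0]), 8)
S = bkt_ansatz((p, L), 0.094, np.log(0.8), 0.4, -0.05, 0.002, 0.2) + 0.005 * rng.normal(size=p.size)

popt, _ = curve_fit(bkt_ansatz, (p, L), S, p0=[0.09, 0.0, 0.4, -0.05, 0.0, 0.1], maxfev=20000)
print(f"p_# = {popt[0]:.3f}, a = {np.exp(popt[1]):.2f}")
```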
## Appendix C Two methods to calculate mutual information between a pair of ancillas

In the calculation of the mutual information between a pair of ancillas, here we compare the newly developed "single-site ancilla" method explained in Sec. III.1 of the main text with the previously used "bond-ancilla" method (to find \(p_{c}\) and \(\nu\)) in Ref. [25]. In the "bond-ancilla" method, to probe the entanglement phase transition in a fixed global charge sector \(\mathcal{Q}\), an ancilla is entangled in a Bell state with a bond of the adjacent sites \(j\) and \(j+1\) in the system, yielding the wave function \(|\Psi\rangle=(1/\sqrt{2})\big{[}|\dots\uparrow\downarrow\dots\rangle|\Uparrow\rangle+|\dots\downarrow\uparrow\dots\rangle|\Downarrow\rangle\big{]}\). Here the states \(|\dots\uparrow\downarrow\dots\rangle\) and \(|\dots\downarrow\uparrow\dots\rangle\) belong to the same global charge sector \(\mathcal{Q}\) and are orthogonal due to the orthogonal configuration at the bond. Although this method of entangling the ancilla accurately predicts the entanglement critical point (\(p_{c}\) and \(\nu\)), it is difficult to extend it to calculate the mutual information between a pair of ancillas and the associated \(\eta\) exponents at \(p_{c}\). The correlation function [30] is obtained from the mutual information of two ancillas entangled with the system. Entangling two ancillas, each coupled to a bond, requires \(4\) projection operations on the wave function (e.g. \(|\dots\uparrow\downarrow\dots\downarrow\uparrow\dots\rangle|\Uparrow\Downarrow\rangle\)) and makes this method numerically challenging due to the collapse of the wave function. Moreover, in charge sectors away from the middle sector, the dimension of the constrained Hilbert space (due to the U(1) symmetry) shrinks, and it becomes increasingly difficult to couple two ancillas to two orthogonal states without collapsing the system wave function. To circumvent these issues, we used the efficient "single-site ancilla" method to couple the ancilla and calculated correlation functions in different charge sectors. We compare the two methods (the "single-site ancilla" and the "bond-ancilla") in Fig. 12(a)-(b) at the middle sector, where the \(\eta\) exponents (for \(n=1\)) extracted from both methods are in good agreement. This establishes the consistency of the new method. However, the "bond-ancilla" method requires running \(\sim 1.5\) times more samples compared to the "single-site ancilla" method to obtain a statistical ensemble of similar size over which we average the correlation function. Moreover, the "bond-ancilla" method required us to wait longer to reach the steady state before coupling the ancillas (\(t_{0}\sim L^{2}\)), while for the "single-site ancilla" method it is enough to wait until \(t_{0}\sim 2L\). In Fig. 12, we show both (a) and (b) at \(t_{0}=2L^{2}\) for the purpose of comparison. Next, we use the "single-site ancilla" method to calculate the correlation function at \(\mathcal{Q}=1/8\). Fig. 12(c) shows the scaling collapse with \(\eta=0.69(5)\). We note that, even after using the efficient method at \(\mathcal{Q}=1/8\), a large number of samples collapsed due to the above-mentioned post-selection problem, and the size of the statistical ensemble we average over is only \(\sim 1/4\) of that at the middle sector. This leads to a poorer-quality scaling collapse with larger error bars at \(\mathcal{Q}=1/8\) compared to that at half-filling (Fig. 12(b)).
Figure 12: The scaling collapse of the correlation function with time using Eq. (6): the two ancillas are coupled using the “bond-ancilla” method in (a) and using the “single-site ancilla” method in (b). The critical exponents extracted using these two methods are \(\eta=0.41(4)\) from the “bond-ancilla” method and \(\eta=0.44(2)\) from the “single-site ancilla” method at \(p_{c}=0.110\) and \(\mathcal{Q}=1/2\). These values are in good agreement with each other, establishing consistency between the two methods. (c) shows the scaling collapse of the mutual information at \(\mathcal{Q}=1/8\) (using the “single-site ancilla” method) and \(p_{c}=0.048\). This gives \(\eta=0.69(5)\), but with a poorer quality of collapse due to averaging over a smaller number of samples. We use \(n=1\) for this plot.

## Appendix D Computation of the anisotropy parameter \(\alpha\)

To compute the anisotropy parameter \(\alpha\), which enters the area of the space-time cylinder of the CFT as \(A=\alpha Lt\), we followed the procedure prescribed in Ref. [17]. We compute two kinds of correlation functions: (i) the “space-like” correlation function \(I_{\rm space}=I_{n=1}(A,B:\delta r=L/2,\delta t=0)\) from the mutual information of the two ancillas \(A\) and \(B\), coupled in the steady state of the circuit at the same instant of time \(t_{1}=t_{2}=t_{0}\,(=2L)\), at a spatial separation \(|r_{2}-r_{1}|=L/2\); (ii) the “time-like” correlation function \(I_{\rm time}(\delta t)=I_{n=1}(A,B:\delta r=0,\delta t)\) from the mutual information of the two ancillas, where the first ancilla is coupled at \(t_{1}=t_{0}\,(=2L)\) and the second one is coupled after a time separation \(t_{2}-t_{1}=\delta t\), at the same site in the system. After coupling the two ancillas in the circuit, we calculate the time evolution of the correlation functions \(I_{n=1}(A,B)\) with time \(t-t_{2}\). This is plotted in Fig. 13(a), where the initial conditions are chosen only from the middle sector \(\mathcal{Q}=1/2\) for an \(L=16\) site system. We set the measurement rate at \(p=p_{c}=0.110\) at half-filling. Here, \(I_{\rm space}\) is shown by the yellow line, while \(I_{\rm time}\) is shown for the two values \(\delta t=8\) and \(9\) which cross \(I_{\rm space}\). From this, we have to find the value of \(\delta t=t_{*}\) at which the two types of correlations match, \(I_{\rm space}=I_{\rm time}(t_{*})\). Since numerically we can only calculate \(I_{\rm time}(\delta t)\) on a grid of \(\delta t\) with grid size \(=1\), we interpolate to obtain \[t_{*}=8+\frac{I_{\rm space}-I_{\rm time}(8)}{I_{\rm time}(9)-I_{\rm time}(8)}(9-8). \tag{10}\] This gives \(t_{*}=8.768\). We compute the value of \(\alpha\) from \(\alpha=\log(1+\sqrt{2})L/(\pi t_{*})\), derived in Ref. [17]. This gives \(\alpha=0.51(3)\) at half-filling. We use this value of \(\alpha\) for the numerical results in Sec. V.2. Furthermore, we also compute \(\alpha\) when the system is initialized to states superposing all charge sectors. This is shown in Fig. 13(b) at \(p=p_{c}=0.110\) for \(L=16\). Following a similar procedure to that explained above, we obtain \(t_{*}=8.912\), which yields \(\alpha=0.50(1)\). Hence the estimate of \(\alpha\) restricted to the middle sector and that from all charge sectors are in good agreement. This is also consistent with the fact that for large \(L\) (in the thermodynamic limit), the middle sector (\(\mathcal{Q}=1/2\)) dominates over all other charge sectors, as shown in Fig. 6(b).

Figure 13: Computation of the anisotropy parameter \(\alpha\): at \(p=p_{c}(\mathcal{Q}=1/2)=0.11\), we plot the “space-like” correlator \(I_{\rm space}\) in yellow and the “time-like” correlators \(I_{\rm time}(\delta t=8)\) (grey) and \(I_{\rm time}(\delta t=9)\) (red), the two values of \(\delta t\) which cross \(I_{\rm space}\) at late times. We obtain \(\alpha=0.51(3)\) in case (a), when we restrict to only \(\mathcal{Q}=1/2\) trajectories, and this matches case (b), with \(\alpha=0.50(1)\), where the initial conditions are randomly chosen over all charge sectors.
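The interpolation in Eq. (10) and the conversion to \(\alpha\) amount to just a few lines; in the sketch below the \(I_{\rm time}\) grid values are hypothetical placeholders standing in for the measured correlators.

```python
import numpy as np

def anisotropy_alpha(I_space, I_time_grid, dt_grid, L):
    """Interpolate the crossing time t_* where I_time(t_*) = I_space (Eq. 10)
    and convert it to alpha = log(1 + sqrt(2)) * L / (pi * t_*)."""
    (t0, t1), (I0, I1) = dt_grid, I_time_grid
    t_star = t0 + (I_space - I0) / (I1 - I0) * (t1 - t0)   # linear interpolation
    alpha = np.log(1.0 + np.sqrt(2.0)) * L / (np.pi * t_star)
    return alpha, t_star

# L = 16 at p = p_c = 0.110; the I_time values below are placeholders.
alpha, t_star = anisotropy_alpha(I_space=0.35,
                                 I_time_grid=(0.30, 0.37),
                                 dt_grid=(8.0, 9.0), L=16)
print(t_star, alpha)
```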
## Appendix E Dependence of initial conditions on \(f_{1}(L)\)

In section V.2 of the main text, we showed the dependence on the initial conditions of the free energy \(f(L)\) and of the difference between the generalized free energies, \(f_{1}(L)-f(L)\). We discussed that when we restrict to the contribution only from a fixed charge sector, the free energy \(f^{\mathcal{Q}}(L)\) picks up a leading \(1/L\) dependence (in addition to the usual \(1/L^{2}\) scaling). Here we show that the generalized free energy \(f_{1}(L)\) shows a similar \(1/L\) scaling when we compute \(f_{1}^{\mathcal{Q}}(L)\) from the trajectories projected to the \(\mathcal{Q}\) sector at late times. This is shown in Fig. 14 by the blue triangles at \(\mathcal{Q}=1/2\) and \(p=p_{c}=0.11\). \(f_{1}^{\mathcal{Q}=1/2}(L)\) can be fit well using the form \[f_{1}^{\mathcal{Q}}=f_{1}^{\mathcal{Q}}(L\rightarrow\infty)+a^{\prime}\mathcal{Q}/L+\frac{a^{\prime\prime}}{L^{2}}. \tag{10}\] This \(1/L\) dependence, coming from the chemical potential fixing the density to that charge sector, is cancelled in the difference \(f_{1}^{\mathcal{Q}}(L)-f^{\mathcal{Q}}(L)\) from each trajectory with charge \(Q\). This is verified in Fig. 6(b) of the main text. We also plot \(f_{1}(L)\) from all trajectories with unrestricted charge in Fig. 14 by the red circles. In this case, the absence of a chemical potential leads to the usual \(1/L^{2}\) scaling.
2307.04544
Time-drift Aware RF Optimization with Machine Learning Techniques
The Fermilab Linac delivers 400 MeV H- beam to the rest of the accelerator chain. Providing stable intensity, energy, and emittance is key since it directly affects downstream machines. To operate high current beam, accelerators must minimize uncontrolled particle loss; this can be accomplished by minimizing beam longitudinal emittance via RF parameter optimization. However, RF tuning is required daily since the resonance frequency of the accelerating cavities is affected by ambient temperature and humidity variations and thus drifts with time. In addition, the energy and phase space distribution of particles emerging from the ion source are subject to fluctuations. Such drift is not unique to Fermilab, but rather affects most laboratories. We are exploring machine learning (ML) algorithms for automated RF tuning for 2 objectives: optimization of Linac output energy and phase oscillation correction, with an emphasis on time-drift aware modeling that can account for conditions changing over time.
R. Sharankova, M. Mwaniki, K. Seiya, M. Wesley
2023-07-10T13:24:27Z
http://arxiv.org/abs/2307.04544v1
# Time-Drift Aware RF Optimization with Machine Learning Techniques

###### Abstract The Fermilab Linac delivers 400 MeV H- beam to the rest of the accelerator chain. Providing stable intensity, energy, and emittance is key since it directly affects downstream machines. To operate high current beam, accelerators must minimize uncontrolled particle loss; this can be accomplished by minimizing beam longitudinal emittance via RF parameter optimization. However, RF tuning is required daily since the resonance frequency of the accelerating cavities is affected by ambient temperature and humidity variations and thus drifts with time. In addition, the energy and phase space distribution of particles emerging from the ion source are subject to fluctuations. Such drift is not unique to Fermilab, but rather affects most laboratories. We are exploring machine learning (ML) algorithms for automated RF tuning for 2 objectives: optimization of Linac output energy and phase oscillation correction, with an emphasis on time-drift aware modeling that can account for conditions changing over time.

## 1 The Fermilab Linac

The Fermi National Accelerator Laboratory (Fermilab) Linac accelerates H- beam to 401.5 MeV. The Linac is preceded by a 35 keV H- ion source and a pre-accelerator which bunches and accelerates beam to 750 keV. The Linac comprises three sections: a Drift Tube Linac (DTL), a Side Coupled Linac (SCL), and a transition section between them. The DTL comprises 207 drift tubes spread across 5 tanks and operates at 201.25 MHz RF frequency. The SCL has 7 modules with 448 total cells, operating at 805 MHz. The transition section consists of a buncher and a vernier cavity for longitudinal matching between the DTL and the SCL. During regular operations, the Linac delivers roughly 25 mA at 35 \(\mu\)s pulse length with transmission efficiency \(\geq\) 92%.

## 2 Linac RF & LLRF

DTL RF field amplitude is controlled by the Marx modulator logic controller, which in turn controls the 5 MW power tube modulator voltage [1]. The RF phase is controlled by the low-level RF (LLRF) module in a VME eXtension for Instrumentation (VXI) crate. Each SCL module is powered by a 12 MW klystron with VXI-based LLRF phase and amplitude control. Amplitude and phase settings are sent to the front-end card in the LLRF VXI crates via the Fermilab accelerator control network (ACNET) [2].

## 3 Linac Daily Tuning

Stable Linac output is crucial for downstream machines. Drifting resonance frequencies of the accelerating cavities and fluctuations in the beam energy and phase coming from the ion source and pre-accelerator both induce longitudinal emittance growth and increased particle loss. To counter such effects, operators perform machine tuning several times a day by hand-scanning a handful of RF parameters. The two main objectives of this tuning are (1) optimizing Linac output energy, and (2) maximizing beam throughput while minimizing beam losses along the Linac.

## 4 RF Optimization With ML

Hand-tuning faces several challenges. Human operators cannot optimize in multi-dimensional space, but rather scan parameters one by one, making it difficult to ensure the system has reached a global optimum. Tuning is limited by personnel availability, rather than done when Linac conditions change. To resolve these challenges, we are working towards automating the tuning procedure using ML-based algorithms. We have developed two independent approaches for the two tuning objectives outlined in the previous section.
### Linac output energy control

After leaving the Linac, beam is injected into a rapid cycling synchrotron called the Booster via a transfer line. Central momentum changes from the design output energy of 401.5 MeV or increased momentum spread (longitudinal emittance) can manifest as radial position errors in the Booster, increasing beam loss. Thus, it is vital to deliver stable Linac output energy that matches the Booster acceptance to minimize loss in downstream machines. In daily tuning, Linac output energy is optimized by hand-scanning the RF cavity phase of the last Linac cavity, SCL7. Figure 1 shows in simulation how output energy changes as a function of cavity phase change.

Figure 1: Change in Linac output energy as a function of SCL7 RF phase change (simulation).

Fluctuations in the output energy can be measured using beam position monitor (BPM) data in the transfer line from the Linac to the Booster, where there are no accelerating elements, only focusing and bending, and the beam is drifting. Using the correlation between the transverse displacement \(\Delta x\) (or \(\Delta y\)) and the centroid momentum change \(\Delta p/p\) in regions with dispersion, one can write \(\Delta x\,(\Delta y)=D_{x(y)}\,\Delta p/p\), where \(D_{x(y)}\) is a dispersion coefficient derived from a MAD-X [3] lattice simulation [4]. Of particular interest to this study are 3 horizontal BPMs at locations along the injection line where the dispersion coefficients are large. In order of distance from SCL7, these are called HPQ3, HPQ4 and HPQ5. The change in cavity output energy (or transverse positions as a proxy) as a function of RF phase is defined here as the RF cavity response. It was observed that the response is not constant, but rather drifts with time, as shown in Figure 2 in terms of HPQ4 data. It is clear that the output energy depends on the beam input energy at the cavity entrance. As a step towards automating Linac output energy control, we developed an ML-based procedure for 1-step energy drift correction that does not require daily SCL7 RF scanning. As a starting point for our modeling, we collected a reference SCL7 phase-scan dataset on February 22, 2023, recording BPM transverse positions in the transfer line. We fitted a simulated RF cavity response curve to the reference data, by changing cavity voltage, input energy and input phase, to obtain the baseline response. Next, the input phase-space dependence of the RF cavity response was modeled using simulation. Beam phase-space at the entrance of SCL7 is painted around the baseline as shown in Figure 3 (left) to cover all possible daily fluctuations, and the phase-space at the cavity exit is simulated (same figure, right). A fully-connected deep neural network (DNN) was trained to predict the beam input energy change relative to the baseline, \(\Delta E_{in}\), given the input and output relative phase changes \(\Delta\phi_{in}\), \(\Delta\phi_{out}\), and the output energy change \(\Delta E_{out}\). The network was developed using the TensorFlow framework [5]. The network has 5 hidden Dense layers with ReLU activation and 20 nodes each. The optimizer used was Adam with an initial learning rate of 0.001 and a custom rate decay scheduler. The network was trained on a sample of 1200 simulated examples for 20 epochs. The trained model can be used on data to correct the daily output energy drift by computing the relative differences w.r.t. the reference dataset. Input and output phases are measured with BPMs directly upstream and downstream of SCL7, and output energy is measured using transverse positions as outlined above.
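A minimal Keras sketch of the energy-prediction network described above: five ReLU layers of 20 units mapping \((\Delta\phi_{in},\Delta\phi_{out},\Delta E_{out})\) to \(\Delta E_{in}\). The exponential-decay schedule stands in for the custom learning-rate scheduler used in practice, and the training arrays are placeholders for the 1200 simulated examples.

```python
import numpy as np
import tensorflow as tf

def build_energy_model():
    """Fully-connected DNN: (dphi_in, dphi_out, dE_out) -> dE_in."""
    model = tf.keras.Sequential(
        [tf.keras.Input(shape=(3,))]
        + [tf.keras.layers.Dense(20, activation="relu") for _ in range(5)]
        + [tf.keras.layers.Dense(1)]
    )
    # Stand-in for the custom learning-rate decay scheduler.
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-3, decay_steps=100, decay_rate=0.9)
    model.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss="mse")
    return model

# Placeholder arrays standing in for the 1200 simulated training examples.
X = np.random.uniform(-1.0, 1.0, size=(1200, 3)).astype("float32")
y = np.random.uniform(-0.5, 0.5, size=(1200, 1)).astype("float32")

model = build_energy_model()
model.fit(X, y, epochs=20, batch_size=32, verbose=0)
dE_in_pred = model.predict(X[:1])  # predicted input-energy shift for one example
```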
The model-predicted \(\Delta E_{in}\) for that particular day is then used to compute a calibrated RF cavity response curve in simulation. Finally, the calibrated RF cavity response is used to determine what SCL7 RF phase shift needs to be applied to the daily data to return the output energy to the reference 401.5 MeV. The correction mechanism was tested on data collected on Feb 20, 2023. The effect of the RF phase correction on BPM horizontal positions at HPQ3, HPQ4 and HPQ5 is shown in Figure 4. It is clear the correction brings all horizontal positions closer to the reference. However, one can see that the effect is slightly different at the 3 BPM locations. Investigating the raw BPM response vs. cavity phase for that day, we see that the shape of the 3 response curves is slightly different. This can be explained by possible errors on the calculated dispersion factors or horizontal displacements. The former are simulated assuming an ideal beam trajectory and design bending magnet field strengths, and are thus susceptible to fluctuations in magnet currents and beam transverse motion. The latter are affected by the intrinsic noise on BPM positions of \(\approx 0.1\) mm [6]. Nevertheless, the correction resolution is better than the observed daily drift of \(\geq 0.3\,\mathrm{MeV}\).

Figure 2: Beam horizontal position deviation at location HPQ4 as a function of SCL7 RF phase. Light blue trace is data from Feb 20, 2023. Dark blue trace is from Feb 22.

Figure 3: Simulated beam phase and energy relative to the baseline model at entrance (left) and exit (right) of SCL7.

Figure 4: Effect of energy correction on beam horizontal positions at HPQ3, HPQ4 and HPQ5 locations.

### Phase oscillation correction

Changes in beam energy or phase upstream of the Linac result in longitudinal emittance growth and will manifest as a beam phase oscillation propagating through the Linac. The resulting elevated losses degrade beam throughput; thus, correcting them is important for best Linac performance. Figure 5 shows the evolution of beam phase over 10 hours as measured by BPMs, without any changes to Linac RF parameters. From simulation we expect \(\sim 7\) synchrotron oscillations in the DTL; however, there are only 5 BPMs in the DTL, starting at the DTL2 exit. Due to the lack of sufficient instrumentation in the pre-accelerator and DTL, we cannot determine where the oscillation starts. However, we have robust instrumentation coverage in the SCL, which allows us to measure the oscillation there well. Thus, we developed a correction scheme that aims to correct the phase oscillation as seen by the SCL BPMs. To counter the effect in the SCL of a phase or energy error occurring upstream, we need to apply two independent (non-parallel) corrections in the DTL. Figure 6 (left) demonstrates in simulation the effect on beam phase of two independent RF shifts to DTL cavities. The same figure shows on the right the phase-space at the exit of the DTL (\(\approx 77\) m in distance) painted by a 2D RF phase scan of the two cavities. We developed an ML-based phase oscillation correction scheme based on the above principle. First, a 2D RF phase scan of the DTL2 and DTL5 cavities was performed and BPM phase data was recorded. Then a fully-connected DNN was trained to reconstruct the DTL2 and DTL5 RF phases from the phase oscillation pattern observed by 32 BPMs, 5 in the DTL and 27 in the SCL. The network has 10 hidden layers in total: 5 Dense layers with ReLU activation of 32, 64, 128, 64 and 32 nodes, respectively, each followed by a Batch Normalization layer. Adam with an initial learning rate of 0.001 and a custom rate decay scheduler was used for optimization. The network was trained on a sample of 910 data examples for 600 epochs.
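As with the energy model, a minimal Keras sketch of the oscillation-reconstruction network described above (Dense/BatchNorm stacks of 32, 64, 128, 64 and 32 units mapping 32 BPM phases to the DTL2 and DTL5 phase shifts); the exponential-decay schedule again stands in for the custom scheduler, and the arrays are placeholders for the 910 scan examples.

```python
import numpy as np
import tensorflow as tf

def build_phase_model():
    """Map 32 BPM phases to the (DTL2, DTL5) RF phase shifts."""
    layers = [tf.keras.Input(shape=(32,))]
    for width in (32, 64, 128, 64, 32):
        layers.append(tf.keras.layers.Dense(width, activation="relu"))
        layers.append(tf.keras.layers.BatchNormalization())
    layers.append(tf.keras.layers.Dense(2))
    model = tf.keras.Sequential(layers)
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-3, decay_steps=200, decay_rate=0.95)
    model.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss="mse")
    return model

# Placeholder data standing in for the 910 examples from the 2D DTL2/DTL5 phase scan.
X = np.random.uniform(-30.0, 30.0, size=(910, 32)).astype("float32")
y = np.random.uniform(-10.0, 10.0, size=(910, 2)).astype("float32")

model = build_phase_model()
model.fit(X, y, epochs=600, batch_size=64, verbose=0)
```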
The model's ability to reconstruct arbitrary phase oscillation patterns by a combination of DTL2 and DTL5 RF phase shifts was tested on a dataset from Dec 14, 2022, where the beam phase at the Linac entrance was intentionally changed by different magnitudes. Several examples of model performance are shown in Figure 7. It is clear that the model is qualitatively able to reproduce a wide range of upstream oscillations, making oscillation correction possible.

## 5 Conclusion

We are exploring ML applications using diagnostics data for automated Linac RF parameter optimization. We developed ML algorithms for two RF optimization objectives: Linac output energy control, and Linac phase oscillation correction. Preliminary results for both approaches are very promising. Future work includes testing different neural network types to compare performance, and testing model predictions in real time during accelerator operations.

Figure 5: BPM phase evolution along the Linac over 10 hours.

Figure 6: Effect of two independent RF shifts in the DTL on beam phase (left). Beam phase-space at the exit of the DTL resulting from a 2D RF phase scan in the DTL (right).

Figure 7: BPM phase oscillation along the SCL. The horizontal axis is distance (m). Dark blue: effect of beam phase change upstream of the Linac. Light blue: model-predicted DTL2, DTL5 phase shifts.
2303.09597
Residual Physics Learning and System Identification for Sim-to-real Transfer of Policies on Buoyancy Assisted Legged Robots
The light and soft characteristics of Buoyancy Assisted Lightweight Legged Unit (BALLU) robots have a great potential to provide intrinsically safe interactions in environments involving humans, unlike many heavy and rigid robots. However, their unique and sensitive dynamics impose challenges to obtaining robust control policies in the real world. In this work, we demonstrate robust sim-to-real transfer of control policies on the BALLU robots via system identification and our novel residual physics learning method, Environment Mimic (EnvMimic). First, we model the nonlinear dynamics of the actuators by collecting hardware data and optimizing the simulation parameters. Rather than relying on standard supervised learning formulations, we utilize deep reinforcement learning to train an external force policy to match real-world trajectories, which enables us to model residual physics with greater fidelity. We analyze the improved simulation fidelity by comparing the simulation trajectories against the real-world ones. We finally demonstrate that the improved simulator allows us to learn better walking and turning policies that can be successfully deployed on the hardware of BALLU.
Nitish Sontakke, Hosik Chae, Sangjoon Lee, Tianle Huang, Dennis W. Hong, Sehoon Ha
2023-03-16T18:49:05Z
http://arxiv.org/abs/2303.09597v1
Residual Physics Learning and System Identification for Sim-to-real Transfer of Policies on Buoyancy Assisted Legged Robots ###### Abstract The light and soft characteristics of Buoyancy Assisted Lightweight Legged Unit (BALLU) robots have a great potential to provide intrinsically safe interactions in environments involving humans, unlike many heavy and rigid robots. However, their unique and sensitive dynamics impose challenges to obtaining robust control policies in the real world. In this work, we demonstrate robust sim-to-real transfer of control policies on the BALLU robots via system identification and our novel residual physics learning method, Environment Mimic (EnvMimic). First, we model the nonlinear dynamics of the actuators by collecting hardware data and optimizing the simulation parameters. Rather than relying on standard supervised learning formulations, we utilize deep reinforcement learning to train an external force policy to match real-world trajectories, which enables us to model residual physics with greater fidelity. We analyze the improved simulation fidelity by comparing the simulation trajectories against the real-world ones. We finally demonstrate that the improved simulator allows us to learn better walking and turning policies that can be successfully deployed on the hardware of BALLU.

## I Introduction

Buoyancy-assisted or balloon-based robots [1, 2, 3] have great potential to offer fundamental safety in human environments. Traditional mobile robots, while being able to execute a variety of tasks, tend to be rigid and heavy and may cause serious damage to their surroundings or themselves in case of control or perception errors. On the other hand, buoyancy-assisted robots (BARs) [4] are typically designed to be lightweight, compact, and intrinsically safe. Therefore, they can be used for various applications that require close human-robot interaction, such as education, entertainment, and healthcare. For instance, Chae _et al._[1] present the Buoyancy Assisted Lightweight Legged Unit (BALLU), which is a balloon-based robot with two legs (Fig. 1), and showcase that it can be deployed to various indoor and outdoor environments without any safety concerns. However, it is not straightforward to control BARs due to their unique, non-linear, and sensitive dynamics. One popular approach for robot control is model-predictive control (MPC) [5], which plans future trajectories via models and minimizes the provided cost function. The complex dynamics of BARs, however, prevent us from developing concise and effective models and therefore rule out MPC as a control method. In contrast, deep reinforcement learning (deep RL) offers an automated approach to training a control policy from a simple reward function without robot-specific models. On the flip side, policies trained with deep RL often experience severe performance degradation when deployed to the robot due to the difference between the simulated and real-world environments, which is commonly known as the sim-to-real gap or the reality gap [6]. In our experience, this gap is further compounded in the case of BALLU when we employ a vanilla rigid body simulator, such as PyBullet [7] or CoppeliaSim [8], due to the unmodeled aerodynamics and low-fidelity actuators. In this work, we mitigate the sim-to-real gap of the BALLU robot by identifying system parameters and modeling residual dynamics using a novel technique, _EnvMimic_.
First, we iteratively tune the actuator parameters in simulation based on the data collected from hardware experiments. This system identification allows us to better illustrate the nonlinear dynamics of BALLU's cable-driven actuation mechanism. Second, we learn the residual physics of the BALLU robot from the collected real-world trajectories to capture its complex aerodynamics that are difficult to model analytically. To this end, we propose a novel technique, Environment Mimic (EnvMimic), which learns to generate external forces to match the simulation and real-world trajectories via deep RL, which is different from common supervised learning formulations [9, 10, 11, 12]. This is similar to using pseudo-forces like the centrifugal force or the Coriolis force to explain the observed behavior. Our approach can also be viewed as an inside-out flipped version of the recent motion imitation frameworks [13, 14, 15, 16], which learn internal controllers that enable the robot to imitate reference motions. In our case, we treat the real-world trajectories as a reference and learn an external residual force policy to imitate that behavior in simulation. We also observe that EnvMimic exhibits a robust generalization capability, even when we have a small number of trajectories. We demonstrate that the proposed techniques can successfully reduce the sim-to-real gap of the BALLU robot. Firstly, we show that modeling the actuators and capturing the aerodynamics results in a significantly improved and qualitatively richer simulation. Our augmented simulator successfully illustrates asymmetric turning behaviors, which are observed on hardware but are not captured by the vanilla version or the simulator with supervised residual dynamics learning. We also demonstrate that we can improve the sim-to-real transfer performance of the policies for two tasks, walking and turning, on the hardware of the BALLU robot.

Fig. 1: An image of the successful policy for the forward walking task, which is trained in the improved simulation using our method.

## II Related Work

### _Deep Reinforcement Learning_

Deep RL [17, 18, 19] has allowed researchers to make great strides in various fields of robotics, including navigation [20, 21], locomotion [22, 23], and manipulation [24, 25]. However, successfully deploying these controllers on hardware is still an active area of research [26, 27], which is not straightforward due to the discrepancy between the simulation and the real world [6]. One of the most common approaches is domain randomization [28, 29, 30, 23, 24, 25, 26, 27, 28, 29], which exposes an agent to a variety of dynamics during training. Additionally, employing extensions such as privileged learning [21, 22] and adopting structured state [21] and action space [32] representations has also enabled successful deployment of learned policies on hardware. On the other hand, researchers have developed frameworks to learn policies directly from real-world experience, which have been proven effective for both manipulators [25] and legged robots [19, 33]. Drawing inspiration from these previous approaches, this paper discusses a sim-to-real transfer technique for the BALLU robot, which exhibits highly sensitive dynamics. Specifically, we train policies in simulation and enhance the sim-to-real transferability using real-world data.
### _System Identification_

Our approach is also highly inspired by system identification, which aims to identify model parameters from collected experimental data. This is a well-studied problem that has been addressed by a variety of methods involving maximum likelihood estimation [34, 35], optimization-based strategies [36, 37, 38], neural networks [39, 40, 41] with iterative learning [42, 43], actuator dynamics identification [44, 45], adversarial learning [46], and learning residual physics [9, 10, 11, 12, 25]. Combining system identification with other techniques, such as dynamics randomization, latency modeling, and noise injection [47, 48], has also proven to be effective for successful sim-to-real transfer of learned policies. However, in our case, system identification, even in combination with domain randomization, proves to be insufficient, necessitating our residual dynamics learning framework, EnvMimic.

### _Balloon-based Robots_

Balloon-based or buoyancy-assisted robots [2, 49] have been investigated because of their intrinsic stability and low costs. Therefore, they have been studied in various applications, including roof cleaning [50], planet exploration [51], disaster investigation [52, 53, 54], social interactions [55], security [56], and many more. However, control of these balloon robots is not straightforward due to their sensitive non-linear dynamics. One common approach is to develop model-based controllers [3, 56, 57, 58], often with system identification. This paper discusses the control of balloon-based legged robots proposed by Chae _et al._[1] by leveraging deep reinforcement learning and residual physics.

## III Sim-to-real of BALLU

In this section, we will describe our techniques for reducing the 'reality gap' [6] of the BALLU robot [1]. We approach this challenging problem by combining traditional system identification and deep residual dynamics learning. First, we improve the simulation model of the cable-driven actuation by identifying non-linear relationships between motor and joint angles. Next, we use the captured real-world trajectories to model the residual dynamics of the BALLU robot, which arise from various sources such as aerodynamics, joint slackness, and inertial parameter mismatch. Our key invention is to use deep RL for building a residual dynamics model instead of the common choice of supervised learning, which offers effective generalization over a small number of trajectories.

### _Background: BALLU robot_

BALLU (Buoyancy Assisted Lightweight Legged Unit) is a novel buoyancy-assisted bipedal robot with six helium balloons, which provide enough buoyancy to counteract the gravitational force. BALLU's base is connected to the helium balloons and houses a Raspberry Pi Zero W board for computing. The robot has two passive hip joints and two active knee joints, which are actuated by two Dymond D47 servo motors at the feet via cables. The overview of the robot is illustrated in Fig. 2. For more details, please refer to the original paper by Chae _et al._[1]. Due to its unique dynamics, model-free reinforcement learning can be a promising approach for developing effective controllers for BALLU without having to rely on prior knowledge or domain expertise. However, we need to mitigate the large sim-to-real gap first, which is induced by significant drag force effects and low-fidelity hardware.

### _System Identification_

One main source of the sim-to-real gap is BALLU's cable-driven actuation mechanism.
In the simulation, servo motor commands and knee joint angles maintain an ideal relationship. In reality, they are affected by friction, torque saturation, and unmodeled cable dynamics, which make the actuator dynamics noisy and nonlinear. Therefore, we first perform system identification to better capture this nonlinear relationship from real-world data using optimization. Our free variables \(\mathbf{p}\) include knee spring parameters, motor gains, default motor angles, and default knee joint angles in simulation, which are sufficient to model various nonlinear relationships. As a result, we have eight free variables subject to optimization. Our objective function is to minimize the discrepancy of all four joint angles (left and right, motor arm and knee) between simulation and hardware. We sample \(20\) actuation commands, uniformly distributed over the range \([0,1]\), which constitute \(\mathcal{A}\) and correspond to motor arm angles in the range [\(0^{\circ}\), \(90^{\circ}\)], and measure knee and motor joint angles in simulation and on hardware. Then we fit polynomial curves for all the joints and compute the directed Hausdorff distance between the corresponding curves. We use the L-BFGS-B algorithm and optimize the parameters until convergence. The entire process is summarized in Algorithm 1.

```
1:  Input: the initial parameters \(\mathbf{p}_{0}\)
2:  Input: a set of pre-defined actions \(\mathcal{A}\)
3:  Measure joint angles on hardware for all actions \(\mathcal{A}\)
4:  Fit polynomial curves \(\overline{C}_{1}\), \(\overline{C}_{2}\), \(\overline{C}_{3}\), and \(\overline{C}_{4}\)
5:  \(\mathbf{p}\leftarrow\mathbf{p}_{0}\)
6:  while not converged do
7:      Update the simulation with \(\mathbf{p}\)
8:      Measure joint angles for all actions \(\mathcal{A}\)
9:      Fit polynomial curves \(C_{1}\), \(C_{2}\), \(C_{3}\), and \(C_{4}\)
10:     \(\epsilon\leftarrow\) directed Hausdorff distance between \(C_{i}\) and \(\overline{C}_{i}\)
11:     Optimize \(\mathbf{p}\) using L-BFGS-B
12: endwhile
```
**Algorithm 1** System Identification of Cable-driven Actuation

### _Residual Dynamics Learning via Reinforcement Learning_

Our next step is to model the residual dynamics of BALLU. Previous methods for learning residual dynamics have employed supervised learning [9, 10, 11, 12] or a combination of self-supervision with deep RL as part of the learning pipeline [25]. However, off-the-shelf supervised learning, in addition to requiring a large number of real-world trajectories, is plagued by limited exploration. Even a small perturbation to the states the policy has observed during training can cause it to diverge during test time. Moreover, we observe that stochasticity in the real world often leads to multiple different state trajectories arising from the same state even when we apply the same actions. The framework of deep RL lends itself naturally to addressing these issues by augmenting the data with simulated trajectories, making it a suitable choice for this problem. Our key insight is to augment the original simulation framework using a learned residual aerodynamics policy. This policy allows us to capture the complex interaction between BALLU and its environment in greater detail. We will demonstrate in the next section that learning locomotion behaviors with this aerodynamics policy in the loop translates to better transfer of our simulation policies to hardware compared to traditional techniques like domain randomization.
Therefore, we design a framework to learn a policy that generates proper _external_ perturbation forces that can match the simulation behavior to the ground-truth trajectory collected from the hardware. We draw inspiration from motion imitation methods [13, 14, 15, 16], which have demonstrated impressive results for learning dynamics controllers to track reference motions. The fundamental difference is that we learn a policy for external perturbations, while other motion imitation works aim to learn an internal control policy for the robot's actuators. In our experience, this deep RL approach allows us to model robust residual dynamics from a limited set of real-world trajectories compared to supervised learning. **Data Collection.** The first step is to create a set of reference trajectories. We train several locomotion policies in the vanilla simulation and record their action trajectories. Next, we use the recorded actions as open loop control on hardware to collect multiple state trajectories. We use a motion capture system to obtain observation data due to the lack of onboard sensors on BALLU that can estimate its global position and orientation. We note that hand-designed action trajectories may work well for this step. Fig. 2: Illustration of our research platform, BALLU (Buoyancy Assisted Lightweight Legged Unit) with two passive hip joints and two active knee joints. **MDP Formulation.** Once we have the reference dataset, we can cast learning the residual aerodynamics policy as a motion imitation problem using a Markov Decision Process (MDP). The state space consists of the balloon's position, velocity, orientation, the position and velocity of the base, and the position and velocity of the feet, at the current and last two time steps. The action space is three-dimensional and consists of x, y, and z forces that are applied to the center of mass of the balloon. The forces are in the range of \([-1,1]\) N. The reward function is a combination of position and orientation terms and is defined as follows: \[r_{t}=w^{pos}r_{t}^{pos}+w^{orn}r_{t}^{orn}\] where the position and orientation terms respectively are computed as follows: \[r^{pos}=\text{exp}\left[-10\left(||\hat{\mathbf{p}}_{t}-\mathbf{ p}_{t}||^{2}\right)\right]\] \[r^{orn}=\text{exp}\left[-2\left(||\hat{\mathbf{r}}_{t}-\mathbf{ r}_{t}||_{W}^{2}\right)\right],\] where \(\hat{\mathbf{p}}_{t}\), \(\mathbf{p}_{t}\), \(\hat{\mathbf{r}}_{t}\), and \(\mathbf{r}_{t}\) are the desired position, the actual position, the desired orientation, and the actual orientation of the balloons, respectively. The position reward \(r_{t}^{pos}\) encourages the simulated model's balloon to track the reference balloon position as closely as possible while the orientation reward \(r_{t}^{orn}\) encourages it to track the reference balloon orientation. We use the Euler angle representation for orientation, which demonstrates better performance than the quaternion representation. For all experiments, we set \(w^{pos}=0.7\), \(w^{orn}=0.3\), and \(W=diag(0.2,0.4,0.4)\). **Training.** We train the residual dynamics policy using Proximal Policy Optimization [18]. We use a compact network consisting of two layers with \(64\) neurons each. Similar to Peng _et al._[13], we also randomize the initial state for each rollout by sampling a state uniformly at random from the selected reference trajectory. This leads to the policy being exposed to a wider initial state distribution and improves robustness, especially when transferring to hardware. 
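A small NumPy sketch of the imitation reward defined above, using the weights \(w^{pos}=0.7\), \(w^{orn}=0.3\) and \(W=diag(0.2,0.4,0.4)\) from our experiments; the reference and simulated states passed in below are placeholder vectors rather than recorded trajectories.

```python
import numpy as np

W_POS, W_ORN = 0.7, 0.3
W_EULER = np.diag([0.2, 0.4, 0.4])  # weight matrix for the Euler-angle error

def imitation_reward(p_ref, p_sim, r_ref, r_sim):
    """Reward encouraging the simulated balloon to track the reference (hardware)
    balloon position p and Euler-angle orientation r."""
    r_pos = np.exp(-10.0 * np.sum((p_ref - p_sim) ** 2))
    dr = r_ref - r_sim
    r_orn = np.exp(-2.0 * dr @ W_EULER @ dr)   # weighted squared norm ||dr||^2_W
    return W_POS * r_pos + W_ORN * r_orn

# Placeholder states: reference (hardware) vs. simulated balloon pose.
reward = imitation_reward(np.array([0.10, 0.00, 1.20]), np.array([0.12, 0.01, 1.18]),
                          np.array([0.00, 0.05, 0.10]), np.array([0.02, 0.04, 0.12]))
print(reward)
```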
### _Policy Training with Improved Simulation_

Once we improve the simulation using system identification and residual dynamics learning, we can retrain a deep RL policy for better sim-to-real transfer. Once again, we formulate the problem using a Markov Decision Process framework. The state space consists of the balloon's position, velocity, orientation, the position and velocity of the base, and the position and velocity of the feet, all measured at the current time step. Our actions are two actuator commands, which will change the joint angles based on the identified nonlinear relationship in the previous section. We learn two policies - one for forward walking and one for turning left. For the forward walking task, our reward function is \(x_{vel}\), whereas for turning left, it is \(y_{vel}\).

## IV Experiments and Results

We design simulation and hardware experiments to answer the following two research questions.

* Can we improve the fidelity of the vanilla simulator using actuator identification and residual dynamics learning?
* Can we improve the performance on hardware by reducing the sim-to-real gap?

### _Experimental Setup_

We conduct all the simulation experiments in PyBullet [59], an open-source physics-based simulator. We use the stable baselines [60] implementation of Proximal Policy Optimization [18] to learn the residual dynamics (Section III-C) and the policy in the improved simulation (Section III-D). We use the BALLU platform [1] for hardware experiments while capturing all the data using a Vicon motion capture system [61].

### _Improved Simulation Fidelity_

This section illustrates the process for improving the simulation's fidelity. We first highlight the importance of actuator system identification in Section IV-B1 and show the learned residual dynamics using our EnvMimic technique in Section IV-B2.

#### IV-B1 Actuator System Identification

We collect the data and identify the system parameters, such as spring parameters, motor gains, and default joint angles, as described in Section III-B. The identified relationships between the motor commands and joint angles are illustrated in Fig. 3. As shown, the identified relationships exhibit highly nonlinear behaviors compared to the simple idealized curves in simulation, which are essential to model the dynamics of the BALLU robot. We highlight the importance of system identification by comparing trajectories in simulation. We run the same action sequences of periodic bang-bang control signals with and without system identification and compare the final states in Fig. 4. The two generated trajectories show a significant difference in terms of the final COM positions (\(0.23\) m difference) and the joint angles (\(10.15^{\circ}\) difference, average of all joints).

Fig. 3: Identified non-linear, asymmetric relationships of cable-driven mechanisms.

#### IV-B2 EnvMimic: Residual Dynamics Learning

Next, we examine the results of residual dynamics learning using the proposed EnvMimic technique in Section III-C. We hypothesize that EnvMimic can learn compelling residual dynamics from a few trajectories, unlike data-hungry supervised learning approaches. We compare the x-y-yaw trajectories in four different environments: (1) a vanilla simulation, (2) a simulation with the residual dynamics learned with supervised learning, (3) a simulation with the residual dynamics learned with EnvMimic (ours), and (4) the ground-truth trajectory on hardware. For supervised learning, we use a neural network with two hidden layers of size [64, 64].
For all the trajectories, we use the same action sequences that are generated by the initial policy in a vanilla simulation. Also, please note that the testing ground-truth trajectory is unseen during training. The trajectories are compared in Fig. 5. Clearly, our EnvMimic offers much-improved tracking performance compared to the vanilla simulation, which fails to capture the noticeable yaw orientation changes due to the stochasticity of the hardware experiments. In our experience, the trajectory generated with supervised residual dynamics learning tends to turn less and remains on the positive Y side. We hypothesize that the robustness of our RL-based residual dynamics might be obtained due to the mixed use of real-world and simulation trajectories. On the other hand, the supervised learning baseline is trained on the pre-collected hardware trajectories without any data augmentation. We believe the comparison of SL- and RL-based approaches on a wider range of scenarios will be an interesting future research direction. Please refer to the supplemental video for qualitative comparisons, highlighting the obvious benefit of our EnvMimic-based residual dynamics learning.

### _Improved Sim-to-real Transfer_

To complete the story, we investigate whether we can improve the sim-to-real transfer of policies using augmented simulation. We first train (1) a policy in the improved simulation with the system identification, learned residual dynamics, and domain randomization [28] (ours) and compare the performance with the selected baseline policies learned in the following settings: (2) a simulation only with system identification (Vanilla + sys ID), (3) a simulation only with domain randomization (Vanilla + DR), and (4) a simulation with both system identification and domain randomization (Vanilla + sys ID + DR). For domain randomization, we randomly sample parameters for friction and initial states. We decided not to randomize masses or the buoyancy coefficient due to the sensitivity of the policy to these parameters. We evaluate these simulation-learned policies on the hardware and measure their performance. For the forward walking task, the learned policy with our augmented simulation is the only one that can walk forward, while the other baselines turn left significantly.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Experiment** & \(\alpha_{T}^{sim}-\alpha_{T}^{hw}\) & **y-distance traveled (m)** & \(\Delta\alpha\) (hardware) \\ \hline Vanilla + sys ID & \(16.41^{\circ}\) & \(0.20\) & \(16.72^{\circ}\) \\ \hline Vanilla + DR & \(38.18^{\circ}\) & \(-0.04\) & \(-4.28^{\circ}\) \\ \hline Vanilla + sys ID + DR & \(-8.50^{\circ}\) & \(0.11\) & \(\mathbf{39.77^{\circ}}\) \\ \hline EnvMimic (Ours) & \(\mathbf{3.50^{\circ}}\) & \(\mathbf{0.29}\) & \(36.42^{\circ}\) \\ \hline \end{tabular} \end{table} TABLE II: Sim-to-real Comparison for Turning Left.

Fig. 4: Illustration of System Identification Results. We execute the same action trajectory and compare the final state of identified dynamics (blue) to that of the naive simulation (red), which is significantly different.

Fig. 5: Comparison of simulation trajectories to ground truth hardware data for forward walking. Our method, EnvMimic, shows the best tracking performance, particularly in terms of the yaw angle. Note that the ground truth trajectory shown is out-of-distribution.
Therefore, its traveled distance in the forward (x) direction is \(1.12\) m, which is significantly larger than \(0.27\) m, \(0.32\) m, and \(0.56\) m of the others. For the turning task, our approach trains the effective policy that travels the most distance in the left (y) direction, \(0.29\) m, which is our objective function. On the other hand, the other policies cover a shorter distance: \(0.20\) m, \(-0.04\) m, and \(0.11\)m. We note that the change in yaw angle (\(\Delta\alpha\)) is slightly larger in the case of the baseline with system identification and domain randomization compared to our method. This is an artifact that arises from the policy taking a single step and rotating on the battery cover. This is evident from the y-distance traveled and can be observed clearly in the qualitative results in Figure 7 and the supplemental video. For both tasks, our augmented simulation also exhibits the least sim-to-real errors, which are defined as the average center of mass (CoM) error and final yaw angle (\(\alpha\)) error between simulation and hardware. The performance is summarized in Table I and Table II. Please refer to the supplemental video and Fig. 6 for qualitative comparison. ## V Conclusion and Discussion We present a learning-based method for the sim-to-real transfer of locomotion policies for the Buoyancy Assisted Lightweight Legged Unit (BALLU) robot, which has unique and sensitive dynamics. To mitigate a large sim-to-real gap, we first identify nonlinear relationships between motor commands and joint angles. Then we develop a novel residual dynamics learning framework, _EnvMimic_, which trains an external perturbation policy via deep reinforcement learning. Once we improve the simulation accuracy with the identified actuator parameters and the learned residual physics, we re-train a policy for better sim-to-real transfer. We demonstrate that using our methodology, we can train walking and turning policies that are successful on the hardware of the BALLU robot. There exist several interesting future research directions we plan to investigate in the near future. In this work, we develop our residual dynamics model for each individual task, such as walking or turning, which limits generalization over other tasks. Therefore, it will be interesting if we collect a large dataset and train a general residual dynamics model using the proposed method. It will be possible to take some inspiration from the state-of-the-art motion imitation frameworks, which can track a large number of trajectories using a single policy [14]. In addition, we also want to investigate various policy formulations. This paper assumes simple external forces to the center of the balloons to model Fig. 6: Comparison of the learned forward walking policies without (**top**) and with (**bottom**, ours) the proposed residual dynamics learning. Both policies are also trained with domain randomization and actuator system identification. Please note that the baseline (**top**) shows a significant turning, while ours (**bottom**) can walk double the distance of the baseline. Fig. 7: Comparison of the learned turning left policies without (**top**) and with (**bottom**, ours) the proposed residual dynamics learning. Both policies are also trained with domain randomization and actuator system identification. 
Both policies are able to achieve a similar change in yaw angle (\(\Delta\alpha\)) over the entire episode, but the baseline (**top**) takes a single step and only turns in place, rotating over the battery cover. We also observe that our method (**bottom**) covers more than twice the distance in the desired y-direction. aerodynamics, and it was sufficient for the locomotion tasks we tested on. However, we may need multiple forces or torques to model some sophisticated phenomena. Furthermore, the dynamics of the BALLU robot are also sensitive to time owing to the deflation of balloons. In the future, we want to introduce the concept of lifelong learning to model those gradual temporal changes. Finally, we plan to evaluate the proposed residual dynamics learning approach, _EnvMimic_, on different tasks and robotic platforms. While showing promising results, many experiments are limited to the selected walking and turning tasks and the specific hardware of BALLU. However, we believe the algorithm itself is agnostic to the problem formulation, and it has great potential to improve the sim-to-real transferability in general scenarios, even including drones and rigid robots. We intend to explore this topic further in future research. ## Acknowledgement This work is supported by the National Science Foundation under Award #2024768.
2308.02968
Robust estimation of exposure ratios in multi-exposure image stacks
Merging multi-exposure image stacks into a high dynamic range (HDR) image requires knowledge of accurate exposure times. When exposure times are inaccurate, for example, when they are extracted from a camera's EXIF metadata, the reconstructed HDR images reveal banding artifacts at smooth gradients. To remedy this, we propose to estimate exposure ratios directly from the input images. We derive the exposure time estimation as an optimization problem, in which pixels are selected from pairs of exposures to minimize estimation error caused by camera noise. When pixel values are represented in the logarithmic domain, the problem can be solved efficiently using a linear solver. We demonstrate that the estimation can be easily made robust to pixel misalignment caused by camera or object motion by collecting pixels from multiple spatial tiles. The proposed automatic exposure estimation and alignment eliminates banding artifacts in popular datasets and is essential for applications that require physically accurate reconstructions, such as measuring the modulation transfer function of a display. The code for the method is available.
Param Hanji, Rafał K. Mantiuk
2023-08-05T23:42:59Z
http://arxiv.org/abs/2308.02968v2
# Robust estimation of exposure ratios in multi-exposure image stacks ###### Abstract Merging multi-exposure image stacks into a high dynamic range (HDR) image requires knowledge of accurate exposure times. When exposure times are inaccurate, for example, when they are extracted from a camera's EXIF metadata, the reconstructed HDR images reveal banding artifacts at smooth gradients. To remedy this, we propose to estimate exposure ratios directly from the input images. We derive the exposure time estimation as an optimization problem, in which pixels are selected from pairs of exposures to minimize estimation error caused by camera noise. When pixel values are represented in the logarithmic domain, the problem can be solved efficiently using a linear solver. We demonstrate that the estimation can be easily made robust to pixel misalignment caused by camera or object motion by collecting pixels from multiple spatial tiles. The proposed automatic exposure estimation and alignment eliminates banding artifacts in popular datasets and is essential for applications that require physically accurate reconstructions, such as measuring the modulation transfer function of a display. The code for the method is available. High dynamic range imaging; camera noise model, statistical estimation, multi-exposure fusion ## I Introduction When the dynamic range of a scene exceeds the operating range of a standard digital sensor, one can overcome this limitation by capturing a stack of images with different camera settings: modulating exposure times [13], sensor gain [16, 22] or by capturing and averaging a burst of images [20]. Then, the captured exposure stack can be merged into a single image to both expand the dynamic range and reduce noise. Regardless of the approach, regions of the reconstructed high dynamic range (HDR) images that contain smooth intensity gradients often end up with banding artifacts, such as those shown in Figure 1. Banding artifacts are highly visible in both images and videos, and they have been the subject of extensive research, particularly in the field of video streaming. Banding tends to be more noticeable compared to other video compression artifacts [40, 27, 44]. The reason for banding in merged exposure stacks is a mismatch between the actual capture parameters and those reported by the camera, typically in the exchangeable image file format (EXIF) header. Such inaccuracies could be caused by * limited accuracy of the (mechanical) aperture and shutter * wrongly reported EXIF data due to rounding of the reported exposure values. To remedy this, we propose to estimate exposure ratios directly from the input images while accounting for heteroskedastic camera noise with known or unknown noise parameters.
Thus, our exposure estimation can be used to improve ground truth reconstructions of several existing stack-based HDR datasets. Further, algorithms that enhance merged reconstructions by iterative optimization [29], as well as deep neural networks that use HDR images for other related tasks [6, 46] stand to be improved by our more accurate exposures. Correctly estimating exposures is also relevant for HDR merging with deghosting, which involves detecting and registering pixels that belong to moving objects [32, 41]. Erroneous capture parameters make this task more challenging, as pixel differences could be caused by either object movement or incorrect exposure time. Although banding artifacts are visible only in smooth image regions in Figure 1, inaccurate exposures affect all pixels in the image. Incorrect pixel intensities pose a serious complication for applications that utilize cameras in place of expensive light measurement instruments. Multi-image exposure stacks often serve as substitutes for more accurate measurements from instruments such as spectrophotometers. In Section IV-D, we highlight potential discrepancies in the measured modulation transfer function (MTF) of an HDR display when the exposure stack of a slant-edge [36] is merged with incorrectly reported capture parameters. From a practical standpoint, our method is suitable for large images (8k or more) from modern cameras. We achieve this by solving a smaller system of equations by collecting pixel-pairs with the lowest relative variances according to the noise model we consider. Contrary to existing methods that utilize pixel pairs from consecutive exposures in the stack, we introduce a greedy algorithm that provides optimal pixel pairs to ensure that all exposures are well-estimated. Our reduced linear system is a union of the highest weighted spanning trees induced on an _exposure multigraph_, resulting in a balanced linear system. Additionally, our proposed method is spatially balanced because we split the input image into tiles and collect pixels from all tiles. This improves the robustness of our estimator to ghosting caused by camera or object motion. Here is a brief overview of the paper that summarizes our contributions. We highlight that camera metadata may be unreliable and motivate the need to estimate corrective per-exposure ratios to obtain artifact-free HDR reconstruction. We show that such exposure ratios can be estimated by solving a large linear system of equations (Section III-C), where each equation connects pixel intensities in the logarithmic domain. To deal with underexposed pixels in shorter exposures, we model heteroskedastic camera noise with inverse-variance weights (Section III-D). Then, in Section III-F, we show how to reduce the system of equations for faster equations without sacrificing the estimation quality. We finally validate our exposure estimation framework in Section IV, both on synthetic and real image captures. ## II Related work The problem of inaccurate capture parameters was identified in very early works in HDR merging [7, 30]. However, most of these focused on the challenging task of inverting the CRF under the assumption of film or sensor _reciprocity_[38]. Mitsunaga and Nayar [30] used a polynomial model to jointly estimate the CRF and exposure times. Then, Grossberg and Nayar [15] demonstrated how to recover the brightness transfer function (BTF), a function that describes how the brightness transfers from one image to another. 
They recovered the BTF from image histograms and stipulated that it can be used to estimate exposure ratios if the CRF is known. More recently, Rodriguez at al. have shown that the previously mentioned methods rely on incorrect assumptions [35] about the independence of color channels and linearity of exposures, resulting in estimation errors and hue shifts. We avoid those problems by directly operating on demosaiced RAW images. Based on [15], Cerman and Hlavac [3] assumed a linear CRF by relying on RAW pixel values. They computed the brightness-transferred image histograms and used them to weight a system of equations in the linear pixel domain. In contrast to their work, we solve a weighted linear system in the logarithmic domain to estimate exposure ratios. The weights in our system of equations are derived from a popular statistical camera noise model and ensure a noise-optimal solution. Our approach systematically handles heteroskedastic camera measurements and provides accurate estimates even when many pixels are under-exposed or affected by noise. ### _Camera noise model_ A key contribution of our work is the use of a parametric noise model to weigh some pixel correspondences more than others. Our weights are based on the popular Poisson-normal statistical camera noise model [2, 12, 28] that has signal-dependent and static (or signal-independent) components. Many earlier works have used approximations of similar noise models for noise-optimal HDR reconstructions [14, 19, 21]. Although deep generative networks [1, 4] model spatially-varying components of real camera noise better, the Poisson-normal statistical model and its normal approximation are better suited for our problem as they offer a convenient algebraic form under the assumption that noise is independent at each pixel. Fig. 2: Histograms of relative errors for two popular HDR datasets. The computed sample standard deviations of relative errors are \(9\%\) for the Fairchild survey [10] and \(14\%\) for the SI-HDR dataset [18]. ### _HDR datasets_ Merged HDR images of many multi-exposure datasets [10, 17, 18, 24, 25, 32] can be improved with accurate exposure estimation. We observed the banding artifacts depicted in Figure 1 in various images from these datasets. Recent HDR deep learning reconstruction methods for tasks like HDR deghosting [5, 24, 33, 34, 47] and inverse tone-mapping [37, 43, 8] utilize these as well as other multi-exposure datasets for testing and evaluation. All these works are likely to produce better results when trained with accurate ground truth information due to our work on better exposure alignment. ## III Methodology After introducing the camera model and some terminology, we describe how to estimate exposure ratios directly from the input image stack by solving a weighted linear system. We then derive noise-optimal weights for the system based on the widely-used Poisson-normal camera noise model. Finally, we discuss practical considerations for making the system agnostic to sensitive noise parameters and improve robustness to ghosting caused by camera or object motion. ### _Camera model_ To digitally represent an HDR scene, we start by capturing \(N\) images with varying exposure times or gains (ISO). Although earlier works included CRF estimation in their image formation pipelines [7, 15, 30], we skip this step since modern cameras provide access to linear RAW pixel values. 
We model the captured RAW pixels as samples from independent random variables, which are linearly related to the scene radiance, and denote them as: \[Y_{i}(p),\quad i=1\ldots N,\quad p=1\ldots M\,, \tag{1}\] where \(i\) is the exposure index with \(N\) total exposures, and \(p\) is the pixel index with \(M\) total pixels. Throughout this work, we use upper case for random variables and lower case for observed values. Before merging (averaging) the exposure stack, we need to convert them to relative radiance units by compensating for exposure time \(t_{i}\), gain \(g_{i}\), and aperture f-number \(a_{i}\): \[X_{i}(p)=\frac{Y_{i}(p)}{t_{i}\,g_{i}\,\pi\,\left(\frac{f}{2\,a_{i}}\right)^{ 2}}=\frac{Y_{i}(p)}{d_{i}}\,, \tag{2}\] where \(f\) is the focal length. Typically, the scaling constant, \(d_{i}=t_{i}\,g_{i}\,\pi\,\left(\frac{f}{2\,a_{i}}\right)^{2}\), is computed directly from the camera EXIF header. The problem is that \(t_{i}\), \(g_{i}\), and \(a_{i}\) could be incorrect due to the limited accuracy of the mechanical shutter or the rounding of exposure values. Incorrect values of \(d_{i}\) will lead to inaccurate HDR reconstructions Thus, samples from \(X_{i}(p)\), the exposure-compensated or absolute estimates in the captured images, may represent biased measurements of the true scene radiance. Merging images using the inaccurately reported parameters results in the banding artifacts depicted in Figure 1 and Figure 4. In all the images, the artifacts appear when a longer exposure image saturates. ### _Banding due to inaccurate exposure_ To better understand the reason for banding, consider the one-dimensional linear gradient depicted in Figure 3 (left). The noisy measurements (dashed lines), obtained by simulating captures using calibrated noise parameters of the _Sony ILCE-7R_, become misaligned when scaled with inaccurate exposure times. Merging such images results in a jagged reconstruction (solid blue line), causing banding in an otherwise smooth output. Notice that although the reconstruction deviates from the ground truth for almost all pixels, artifacts will be visible only at transition points when an image in the stack saturates as highlighted by the green circles. While Figure 3 demonstrates the problem with simple averaging, banding is further exaggerated when sophisticated algorithms based on physical noise models [14, 19, 21] are used. This is because pixels of longer exposures are more reliable (due to smaller noise variance), but they saturate at lower physical values. Thus, these estimators weigh longer exposure pixels more, and there is a sharp change upon saturation of any image in the stack. If we are able to estimate relative exposure ratios w.r.t one of the images correctly, we can align all exposures to produce the reconstruction in Figure 3 (right). Note that the exposure-aligned reconstruction may still not coincide with the ground truth since we do not have the correct baseline. However, accurately estimating relative ratios is sufficient to align the capture parameters and eliminate banding. ### _Exposure estimation in real images_ To prevent banding and obtain physically accurate pixel values, we align the exposures of all images in the stack by estimating all scaling constants \(d_{i}\). However, computing them directly from input pixels is impossible since the correct absolute measurements representing observations of \(X(p)\) from Eq. (2) are unknown. 
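For concreteness, the exposure compensation of Eq. (2) can be written directly from the metadata fields; the sketch below assumes the exposure time, gain, focal length, and f-number have already been parsed from the EXIF header.

```python
import numpy as np

def scaling_constant(t, g, f, a):
    """d_i = t * g * pi * (f / (2a))^2, following Eq. (2).

    t: exposure time, g: gain, f: focal length, a: aperture f-number.
    """
    return t * g * np.pi * (f / (2.0 * a)) ** 2

def exposure_compensate(raw, t, g, f, a):
    """Convert a linear RAW image to relative radiance, i.e. x = y / d."""
    return raw / scaling_constant(t, g, f, a)
```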
We thus eliminate these unknowns by estimating the ratio of exposures between any two images in the stack instead. For a scene with constant illumination and Fig. 3: A linear gradient spanning \(13\) stops is captured with exposure times \([0.5,2,4]\) (in seconds) by simulating the _Sony ILCE-7R_ at ISO 3200. Each capture is quantized to \(8\) bits for easy visualization. The left plot depicts exposure-compensated pixels that represent samples from \(X(p)\) (according to Eq. (2)) with inaccurate exposures (red, yellow, and brown dashed lines). The reconstructed gradient (blue line) is jagged around \(\phi=2^{11}\) and \(\phi=2^{12}\) (green circles) due to misaligned exposures caused by biased exposure values. We can reconstruct the smooth gradient by correctly aligning exposure ratios, as shown in the right plot. no scene motion, this ratio should be the same for all pixels from a given pair of images. Its expected value is: \[d_{ij}=\mathbb{E}_{p}\left[\frac{Y_{i}(p)}{Y_{j}(p)}\right]\,, \tag{3}\] where \(i\) and \(j\) index different images in the exposure stack, and \(\mathbb{E}_{p}\) indicates that the expected value is computed over all pixels. To allow for fast computation using linear solvers, we operate on logarithmic values. Thus, let \[e_{ij}=\log d_{ij}\quad\text{and}\,L_{i}(p)=\log Y_{i}(p)\,. \tag{4}\] Although we cannot write a closed-form expression for the density function of \(L_{i}(p)\) because of the \(\log\) transformation, it is possible to approximate the expected value of any transformed random variable using its Taylor expansion. Our results, detailed in Eq. (16) in the Appendix, are only applicable to normally distributed random variables. It is thus imperative to use only well-exposed pixels (we show how to do this in Section III-F), since the Poisson photon noise component of such pixels is well-approximated by a normal distribution. We can obtain an approximate expression for the expected value by applying the result for \(\log Y(p)\): \[\begin{split} e_{ij}&=\log\mathbb{E}_{p}\left[ \frac{Y_{i}(p)}{Y_{j}(p)}\right]\approx\mathbb{E}_{p}\left[\log\frac{Y_{i}(p) }{Y_{j}(p)}\right]\\ &=\mathbb{E}_{p}\left[L_{i}(p)-L_{j}(p)\right]\,.\end{split} \tag{5}\] The equality approximately holds for the operating range of pixel values, as we will detail in Section III-D, Eq. (11). This allows us to compute the expected value over all pixels by setting up and solving a linear system. For the most reliable estimate, we should utilize information from all the available exposures (all possible values of \(i\) and \(j\)). Unlike previous work [3], we thus consider not only ratios between neighboring exposures but between all pairs. This results in a large yet sparse linear system: \[\sqrt{\mathbf{W}}\begin{bmatrix}\begin{smallmatrix}1&-1&0&\cdots&0\\ 1&-1&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&0&-1&\cdots&0\\ 0&0&0&\cdots&\cdots\\ \end{smallmatrix}\end{bmatrix}\begin{smallmatrix}\begin{smallmatrix}\epsilon_{1} \\ \epsilon_{2}\\ \epsilon_{3}\\ \vdots\\ \epsilon_{N}\end{smallmatrix}\end{bmatrix}=\sqrt{\mathbf{W}}\begin{bmatrix} \begin{smallmatrix}\begin{smallmatrix}L_{1}(1)-L_{2}(1)\\ L_{1}(2)-L_{2}(2)\\ \vdots\\ L_{1}(2)-L_{2}(1)\\ \vdots\\ L_{N-1}(M)-L_{N}(M)\end{smallmatrix}\end{bmatrix}\,, \tag{6}\] or more compactly, \[\sqrt{\mathbf{W}}\mathbf{O}\mathbf{e}=\sqrt{\mathbf{W}}\mathbf{m}\,, \tag{7}\] where \(\mathbf{W}\) is a diagonal weight matrix denoting the relative importance of each row of the system. 
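Read literally, Eq. (3) suggests the following simple estimator of a single exposure ratio; the validity thresholds are illustrative, and this direct form ignores the noise weighting developed next.

```python
import numpy as np

def naive_exposure_ratio(y_i, y_j, lo=0.05, hi=0.95):
    """Direct estimate of Eq. (3): mean per-pixel ratio y_i / y_j.

    Only pixels well exposed in both images are averaged, assuming the
    sensor range is normalized to [0, 1]; lo and hi are illustrative.
    """
    valid = (y_i > lo) & (y_i < hi) & (y_j > lo) & (y_j < hi)
    return float(np.mean(y_i[valid] / y_j[valid]))
```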
In practice, we found that the weighted system does not always provide a good solution for shorter exposures. Since a rough estimate of exposure values \(\mathbf{e}_{0}\) is available in the image metadata, we introduce a Tikhonov penalty with weight \(\lambda\) and solve to get: \[\begin{split}\mathbf{\hat{e}}_{\rm{WLS}}&=\operatorname*{ arg\,min}_{\mathbf{e}}\left\|\sqrt{\mathbf{W}}(\mathbf{O}\mathbf{e}-\mathbf{m})\right\|_{2}^{2}+ \lambda\|\mathbf{e}-\mathbf{e}_{0}\|_{2}^{2}\\ &=(\mathbf{O}^{T}\mathbf{W}\mathbf{O}+\lambda\mathbf{I})^{-1}(\mathbf{O}^{T}\mathbf{W}\mathbf{m}+ \lambda\mathbf{e}_{0})\,.\end{split} \tag{8}\] To demonstrate the effectiveness of exposure alignment, we captured an exposure stack of a simple HDR scene consisting of a bright light shining at an angle, which produces an inverse-square fall-off in intensity. Since most pixels are well-exposed, as shown in Figure 4, we could assume that they are equally reliable and set \(\mathbf{W}=\mathbf{I}\). This works well for the carefully controlled scene and eliminates banding artifacts that would have appeared when merging with EXIF parameters. However, the constant noise assumption breaks down for real-world HDR stacks since the noise in camera pixels is heteroskedastic [11] and thus, different pixels provide different amounts of information. ### _Heteroskedastic pixels_ To determine noise-optimal weights for images of real cameras, we need to derive an expression for the variance of each row of Eq. (6). Then, we populate \(\mathbf{W}\) with inverse-variance weights (i.e., each weight is given by the reciprocal of variance) to obtain the weighted least-square estimate \(\mathbf{\hat{e}}_{\rm{WLS}}\). This is equivalent to maximum likelihood estimation (MLE) under the assumption that the system of equations models additive, normally distributed noise, where the variance is different for different rows. Since each entry of the output vector \(\mathbf{m}\) is a difference of two random variables, the inverse-variance weight for each row of the system is given by: \[w_{k,k}=\frac{1}{\mathbb{V}[L_{i}(p)-L_{j}(p)]}=\frac{1}{\mathbb{V}[L_{i}(p)]+ \mathbb{V}[L_{j}(p)]}\,. \tag{9}\] Here, \(k\) indexes each row of Eq. (6) since \(\mathbf{W}\) is diagonal. For instance, the second row corresponds to \(k=2,i=1,j=2\). In order to get an expression for the denominator in Eq. (9), we refer to detailed studies of the noise characteristics of cameras [2, 28], which indicate that real camera noise follows a compound Poisson-normal distribution. For the working range of pixels in most images, this can be approximated by zero-mean additive noise that follows a normal distribution. Temporarily dropping the exposure index for brevity of notation, Fig. 4: The left column shows exposures (gamma-encoded for visualization, \(\gamma=2.2\)) of the HDR image reconstructed with EXIF parameters, while the right column shows exposures (encoded with \(\gamma=2.2\)) of the HDR image reconstructed by solving the linear system represented by Eq. (6) with \(\mathbf{W}=\mathbf{I}\). Banding artifacts are visible in dark (top row) and bright (bottom row) regions of the reconstruction using EXIF metadata (a). Aligning exposures according to Eq. (8) fixes the problem (b). 
the variance at each pixel thus consists of a signal-dependent component as well as a static component and is equal to, \[\sigma_{Y}^{2}(p)=\mathbb{V}[Y(p)]=\alpha\,\mathbb{E}[Y(p)]+\beta\,, \tag{10}\] where \(\alpha\) and \(\beta\) are camera-specific noise parameters, which also depend on the sensor's gain. Since \(L(p)\) is a random variable obtained by applying the \(\log\) transformation to \(Y(p)\), its density function does not have an exact expression. This is because the domain of \(Y(p)\) includes negative values for which the \(\log\) function is undefined. As we will show in Section III-F, we operate on a small subset of available pixels, selected for the lowest relative variances, and whose intensities tend to be much greater than \(0\). We show, in Eq. (17) in the Appendix, how to approximate the variance when any random variable is transformed by an invertible function. Here, we apply the result for \(Y(p)\) when it is transformed by the \(\log\) function, \[\mathbb{E}[L(p)] \approx\log\mu_{Y}(p)\,, \tag{11}\] \[\mathbb{V}[L(p)] \approx\frac{\sigma_{Y}^{2}(p)}{\mu_{Y}^{2}(p)}\,,\] where \(\sigma_{Y}^{2}(p)\) is the variance of the noise from Eq. (10) and \(\mu_{Y}^{2}(p)\) is the expected pixel value, which we approximate by \(\mathbb{E}[Y(p)]=\mu_{Y}(p)\approx y(p)\). We reiterate that we are able to use this result because we work with well-exposed pixels, that are approximately normal. After substituting the computed variance in Eq. (9), the diagonal weights become: \[w_{k,k}=\left(\frac{\alpha y_{i}(p)+\beta}{y_{i}^{2}(p)}+\frac{\alpha y_{j}(p )+\beta}{y_{j}^{2}(p)}\right)^{-1}\,. \tag{12}\] ### _Camera noise calibration_ While the inverse-variance weights in Eq. (12) help us compute noise-optimal estimates of the exposure ratios, a fundamental limitation is their dependence on calibrated noise parameters \(\alpha\) and \(\beta\). In the considered noise model [12], these are camera and gain specific and may not be available at the time of HDR merging. Moreover, the quality of HDR reconstructions is highly sensitive to accurate noise parameters [2], motivating the need for methods that do not rely on them. If noise parameters are unavailable or inaccurate, it is still possible to solve the weighted linear system by assuming that the static noise parameter \(\beta\) is 0 as we work with well-exposed pixels (see Section III-F). Since the entries of \(\mathbf{W}\) determine the relative importance of the different rows of Eq. (6), we can eliminate the common signal-dependent constant, \(\alpha\) as well. The new weights which no longer depend on calibration-sensitive parameters are: \[w_{k,k}^{\prime}=\left(\frac{1}{y_{i}(p)}+\frac{1}{y_{j}(p)}\right)^{-1}\,. \tag{13}\] In practice, we can get away with using camera-independent \(\mathbf{W^{\prime}}\) instead of camera-specific \(\mathbf{W}\) from Eq. (12). ### _Reducing the linear system_ The linear system given by Eq. (6) contains up to \(M\binom{N}{2}\) equations corresponding to all pixels (\(M\)) and all pairs of exposures in the stack (\(N\)). Solving such a large system may be impossible for large images or deep exposure stacks due to computation and memory limitations. However, this system is strongly overdetermined as only \(N-1\) exposure ratios need to be estimated. Therefore, we only need a small percentage of equations to solve Eq. (8). Another reason for reducing the system is to eliminate underexposed pixels pairs because they do not satisfy the assumptions made in the derivations of previous sections. 
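Putting Eqs. (6)-(8) together with the calibration-free weights of Eq. (13), the estimator can be sketched as below. This is a simplified dense implementation over all pixel pairs; the paper instead solves a reduced, sparse system built as described next, and the validity mask here is only illustrative.

```python
import numpy as np

def estimate_log_exposures(stack, e0, lam=10.0, lo=0.05, hi=0.95):
    """Solve Eq. (8) for the vector of log exposures e.

    stack: list of N linear RAW images (same shape), values normalized to [0, 1].
    e0   : length-N array of log exposures taken from the EXIF metadata.
    lam  : Tikhonov weight (lambda = 10 in the paper's experiments).
    """
    N = len(stack)
    y = [im.ravel().astype(np.float64) for im in stack]
    rows, rhs, wts = [], [], []
    for i in range(N):
        for j in range(i + 1, N):
            valid = (y[i] > lo) & (y[i] < hi) & (y[j] > lo) & (y[j] < hi)
            yi, yj = y[i][valid], y[j][valid]
            row = np.zeros(N)
            row[i], row[j] = 1.0, -1.0
            rows.append(np.tile(row, (yi.size, 1)))
            rhs.append(np.log(yi) - np.log(yj))
            # Calibration-free inverse-variance weights of Eq. (13).
            wts.append(1.0 / (1.0 / yi + 1.0 / yj))
    O = np.vstack(rows)
    m = np.concatenate(rhs)
    w = np.concatenate(wts)
    # Normal equations of the Tikhonov-regularized weighted least squares.
    lhs = O.T @ (O * w[:, None]) + lam * np.eye(N)
    rhs_vec = O.T @ (w * m) + lam * np.asarray(e0, dtype=np.float64)
    return np.linalg.solve(lhs, rhs_vec)
```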
For example, dark pixels may be negative or poorly approximated by a normal distribution. A logical reduction strategy is to select pixels pairs with the highest weights since they are least affected by noise. However, such a strategy is heavily biased towards longer exposures because pixel pairs that include these images will have the highest weights. The shorter exposures will then be poorly represented and, thus, poorly estimated. Another issue is related to the spatial location of bright objects in the scene, such as the Sun or other light sources. If we select a small fraction of pixels (say 5% of the pixels per image), all of them are likely to be concentrated in one portion of the image, corresponding to these bright objects. If those objects happen to be in motion, the exposure estimation will fail. We propose two orthogonal design choices to balance the system of equations and handle both these biases in estimated exposure ratios. #### Iii-F1 Spatial balance: Tiling To ensure that the linear system contains samples from all parts of a scene, we split the input image stack into \(t\times t\) pixel tiles (\(16\times 16\) in our experiments). We can then select a fixed number of pixel pairs from each tile and pixels from a few bright objects will not dominate the system of equations. The tiled processing provides a convenient way to vectorize the construction of the reduced linear system. Several tiles can be processed in parallel for faster execution with multi-core or multi-threaded systems. Our noise-based solution helps provide a robust estimation even though some tiles may contain noisy pixels corresponding to dimly-lit parts of the scene. Variance-optimal balancing of exposures is crucial for such tiles, as we will show in the next section. #### Iii-F2 Exposures balance: Spanning trees The next objective is guaranteeing that all \(N-1\) exposure ratios are correctly estimated. Within each spatial tile, we need to include pixels from all exposure pairs, including noisy short exposures despite their relatively smaller weights. Consider the _exposure graph_: an undirected weighted multi-graph (a graph that contains more than one edge between two vertices as shown in Figure 4(a)), whose vertices represent the \(N\) exposures and edges link pairs of co-located pixels from two exposures. Each edge corresponds to one row of Eq. (6), with its weight given by Eq. (12) or Eq. (13). The different colored vertices and edges in Figure 4(a) represent different pixel locations. Reducing the linear system from Eq. (6) is then equivalent to removing edges from this dense multigraph. The optimal subset contains \((N-1)k\) edges with the largest weights that connect all exposures in a balanced manner. Such a subset can be found by computing the \(k\) highest-weighted spanning trees of the multigraph, where the weight of a spanning tree is the sum of the weights of its edges. The previous work directly linked pixels from neighboring exposures [3], resulting in pairwise connectivity as shown in Figure (b)b. However, this solution is sub-optimal for some inputs (such as the orange tile in Figure (d)d) because using edges linked to the longest exposures, as shown in Figure (c)c), results in higher weights and, therefore better estimates. An optimal solution would sequentially extract \(k\) MSTs using _Kruskal's_ or _Prim's_ algorithm. Better algorithms [26, 9] extract the \(k\) highest spanning trees more efficiently. 
However, explicitly creating a large multigraph and computing MSTs is both memory and computationally expensive. Below we show that the optimal solution can be found by a simpler greedy algorithm. ``` input :\(y_{1},\ldots,y_{N}\) (Images sorted by exposure time) output : MST (Maximimum spanning Tree) 1FunctiongreedyMST(\(y[\,]\)) 2\(N\gets length(y)\) MST\(\leftarrow[\,]\) 3for\(i\gets 1\)to\(N-1\)do /* Iterate over all exposures starting from the shortest */ \(mask\leftarrow\)isValid(\(y_{i}\)) and isValid(\(y_{i+1}\)) \(p_{*}\leftarrow\)maxWeight(\(y_{i}[mask],y_{i+1}[mask]\)) //Location of the highest weighted edge connecting images \(y_{i}\) and \(y_{i+1}\) for\(j\gets N\)to\(i+1\)do ifisValid(\(y_{j}(p_{*})\))then MST.addEdge(\(i\)\(\bullet\)\(\bullet\)\(j\)) break returnMST ``` **Algorithm 1**Greedy MST algorithm _Greedy MST solution:_ We start by extracting all valid pixels -- those that are unsaturated and sufficiently above the noise floor. We iterate through all exposures from the shortest to the longest. For each exposure \(i\), we identify \(p_{*}\), the pixel location that contains the highest-weighted edge between exposures \(i\) and \(i+1\). This is typically the brightest pixel that is not saturated in exposure \(i+1\). Then, we find the longest exposure \(j>i\) in which pixel \(p_{*}\) is not saturated. We add an edge between \(i\) and \(j\). By selecting the longest exposure, we ensure that the weight of the edge is maximized. This procedure, summarized in Algorithm 1, can be repeated \(k\) times to extract the \(k\) highest-weighted spanning trees without explicitly constructing a graph. If all longer exposures \(y_{N}(p_{*}),y_{N-1}(p_{*}),\ldots y_{i+2}(p_{*})\) are invalid due to saturation, the solution reduces to pairwise connectivity depicted in Figure (b)b. Note that while a pair of pixels forming an edge must have the same position, each edge in the MST can come from a different pixel position. Consider the 1-dimensional blue tile in the tonemapped image in Figure (d)d and its horizontal scanline in Figure (e)e. The MST is given by two edges: an edge between the medium and long exposures and an edge between the medium and short exposures (marked by triangles in the plot). Note that these are the brightest unsaturated pixels in the long and medium exposures. The MST for the blue tile is thus the pairwise connected spanning tree. The limited dynamic range of the orange tile means that a single pixel, marked by the star in Figure (e)e, aligns all three exposures for the orange tile, resulting in the spanning tree in which the longest exposure is linked to all other exposures. ### _Handling outliers and pixel misalignment_ The weighted least-squares solution Eq. (8) using rows of the reduced system will provide an accurate estimate only if there is no movement in the scene and all pixels are well aligned. If there is motion across exposures, for example, due to camera shake or object motion, those pairs of pixels will affect our estimates of exposure ratios, as shown in Figure 6. Notice the artifacts surrounding bright regions such as the windows and red lights in Figure (a)a due to camera motion. Here, we estimated the exposures using the original stack and then registered the images with homography alignment [39] before merging. Thus, any visible artifacts are due to incorrect exposure alignment and not due to ghosting. Fig. 5: Consider the exposure multigraph in (a) where different colored vertices and edges represent different pixels. 
The MST may connect pairwise exposures (b) or connect the longest exposure to all others (c). Algorithm 1 will provide different solutions for different inputs. For example, the MST for the blue horizontal tile near the Sun in (d) is the pairwise-connected spanning tree connected at pixels marked by the two triangles in plot (e). However, for the orange tile, the MST is given by the longest-exposure spanning tree at the pixel marked by the star in plot (e). The spatial tiling described in Section III-F1 ensures that motion in a few bright objects does not adversely affect the estimated exposures, i.e., our procedure is robust to localized object motion. In case of widespread pixel misalignment (e.g., due to camera motion), we can still provide reasonable exposure estimates if the scene contains uniform image patches. We identify _usable_ tiles by collecting \(k\) MSTs within a tile and solving the smaller per-tile system of equations separately. If the predictions of a tile deviate too much from the EXIF values, we treat the tile as an outlier and do not include the corresponding MSTs in the final system. The added overhead of solving per-tile linear systems is negligible because each system contains only \(k\) equations (\(k=50\) in our experiments). This outlier removal is much faster than an iterative algorithm such as the one described in [3]. ## IV Results and applications The primary application of our method is merging exposure stacks with varying exposure times or gains. We first validate our results on synthetically generated stacks, for which we know the ground truth, and then show qualitative comparisons on real captures. Finally, in Section IV-D, we show how our method can improve the estimate of the MTF of a display. All our results use \(\lambda=10\) for the Tikhonov penalty term described in Eq. (8). For merging exposures stacks after exposure estimation, we used a noise-aware HDR estimator [19]. However, our results hold for other simpler methods too. ### _Validation on synthetic dataset_ First, we rely on synthetic exposure stacks to compare the accuracy of different methods for exposure estimation. We simulated HDR exposure stacks using the noise parameters of the _Canon PowerShot S100_ with exposure times [%4, %8, 1, 8] (in seconds). All images were quantized to 14 bits to match the bit-depth of the camera, and the experiment was repeated for ISO settings between 100 and 800. We simulated noise according to Eq. (10), with noise parameters listed in Table I. Source HDR images were taken from the Fairchild photographic survey [10] containing \(105\) scenes at a resolution of \(4312\times 2868\). All simulated captures are linearly related to the HDR image, i.e., we do not apply a CRF since it is likely to introduce artifacts. Before merging the images, we corrupted the exposure times by introducing a small amount of normally distributed noise: \[e_{i}^{\prime}=e_{i}+\eta_{i}\quad\mathrm{where}\ \eta_{i}\sim\mathcal{N}(0,0. 15\,e_{i})\,. \tag{14}\] The relative standard deviation factor \(0.15\) was selected to match real camera EXIF errors plotted in Figure 2. Figure 7 plots the relative root-mean-squared error (RMSE) (in percent) of the exposure ratios, for different ISO levels. Fig. 6: For an exposure stack containing some spatially misaligned pixels, solving the weighted linear system Eq. (7) results in inaccurate estimates. 
In this scene, camera motion causes pixels of the building to be misaligned due to camera motion, producing artifacts near the windows in Figure 5(a). However, pixels from other areas, such as the sky, can still be utilized for exposure estimation. Figure 5(b) demonstrates the advantage of using the outlier-removal procedure described in Section III-G to improve robustness. Fig. 7: Accuracies of exposure estimation methods with increasing ISO computed by simulating captures on an existing HDR dataset [10]. Error bars indicate \(95\%\) confidence intervals computed across all images. The blue line represents corrupted exposures according to Eq. (14), while the baseline, in green, was obtained by directly estimating exposures by solving a system based on Eq. (3). Note that we can compute such relative errors only for synthetic datasets. We include results of a simple baseline by solving a linear system that realises Eq. (3) without relying on a noise model. Note that the baseline includes other components described in Section III, such as spatial balance via tiling and exposure balance via MSTs. The blue line shows the error of noisy exposures according to Eq. (14). The synthetic exposure stacks represent a challenging estimation problem with exposures three stops apart. As a result, the simple baseline (green) and the histogram-based BTF method [3] (orange) are unable to provide good results due to the adverse impact of pixels in shorter exposures. A key limitation of both methods is not explicitly modeling camera noise, which we address in our formulation. Figure 7 depicts results for the two versions of our weighted least-squares solution (Eq. (7)), showing both pairwise connectivity (in red) and the greedy MST heuristic (in purple). Both our solutions use weights that model the noise characteristics of the camera. Thus, they result in better estimates with smaller errors. Overall, the greedy MST heuristic is the best-performing estimator of exposure ratios. **Execution times:** Histogram matching used in [3] is computationally expensive for high-resolution images, resulting in an average execution time of 2.11 seconds. Further, we observed that the expensive iterative procedure for removing outliers is needed for good estimates increasing time to 8.54 seconds. Our tiled reduction (Section III-F1) results in a significantly faster average execution time of 0.265 seconds for pairwise connected exposures and 0.29 seconds for the greedy MST solution. The reported times were computed on an _Intel i7-8700 CPU_ for images of resolution \(4312\times 2868\). ### _Performance on real captures_ Next, we show the advantage of our proposed exposure estimation over naive merging using EXIF data and compare it with the histogram-based BTF approach [3]. The RAW images have a bit-depth of 14 and are linearly related to the scene radiance. We did not apply CRF or tone-mapping for any image shown in this section. Thus, all artifacts visible are due to incorrect exposures. For all results in this section, we use calibration-free weights given by Eq. (13). Banding artifacts are visible for scenes containing a smooth gradient at the transition point when one of the input exposures saturates. This frequently occurs very close to bright light sources such as the Sun during the day or street lights at night, as indicated by red arrows in Figure 8 (a). Pixels close to light sources tend to be unsaturated only in shorter exposures where other parts of the image are strongly affected by noise. 
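A sketch of this simulation protocol is given below: one RAW exposure is rendered with the normal approximation of the Poisson-normal model in Eq. (10), and the log-exposure values are perturbed as in Eq. (14). The mapping from relative radiance to digital counts is simplified and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_capture(radiance, t, alpha, beta, bits=14):
    """Render one RAW exposure with noise variance alpha * mean + beta (Eq. (10))."""
    mean = radiance * t  # simplified radiance-to-counts mapping
    noisy = mean + rng.normal(0.0, np.sqrt(alpha * mean + beta))
    return np.clip(np.round(noisy), 0, 2 ** bits - 1)  # quantize and saturate

def corrupt_exposures(e, rel_std=0.15):
    """Perturb log-exposure values as in Eq. (14): e' = e + N(0, rel_std * e)."""
    e = np.asarray(e, dtype=np.float64)
    return e + rng.normal(0.0, rel_std * np.abs(e))  # abs() guards negative logs
```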
Aligning exposures under these conditions is challenging, resulting in the poor performance of the baseline as well as the histogram-based BTF weights [3]. The baseline method obtained by directly solving Eq. (3) completely fails to recover exposure ratios. When using the BTF weights, the banding artifacts also persist at the same locations or are introduced in other locations as shown in Figure 8 (c). By accounting for heteroskedastic noise with inverse-variance weights in Figure 8 (d), we can correctly align all the exposures to produce banding-free results. Even in the absence of point light sources, banding can appear at natural smooth gradients or due to defocus blur, as shown in Figure 9. Large regions of the images are well-exposed in such scenes, and most methods work reasonably well. However, the baseline and BTF weights can still sometimes fail to recover the exposure ratio for shorter exposures (for example, see the red patch in the third column and green and blue patches in the second column of the first scene). Our noise-model motivated approach consistently aligns all exposures to produce banding-free reconstructions across the scenes. ### _Deghosting for scenes with motion_ When multi-image stacks with misaligned exposures are fused, the resulting HDR reconstructions often exhibit accentuated ghosting artifacts. Even recent deep learning-based methods [42, 45, 24] are unable to account for inaccurate exposure ratios. Figure 10b illustrates the artifacts that arise when exposures are corrupted according to Eq. (14). It is important to note that these artifacts are in addition to the ghosting artifacts visible in Figure 10a. Our proposed algorithm, which includes tile-based outlier detection to account for scene motion, effectively aligns exposures. Fig. 8: Zoomed in patches of light sources, appropriately exposed to highlight banding close to the source due to inaccurately reported EXIF exposures (first row). The baseline method (second row) that does not model camera noise is unable to estimate shorter exposures, resulting in artifact-rider reconstructions. Similarly, only some exposures can be aligned with histogram weights based on BTF (third row). By modeling camera noise, we simultaneously align all exposures (last row). ### _Measuring display MTF_ Digital cameras are often used as inexpensive light measuring instruments, for example, for measuring the spatial characteristic of an electronic display, such as spatial uniformity of a modulation transfer function (MTF) [48]. Inaccuracies in pixel values due to exposure misalignment may lead to errors Fig. 11: Edge profile of an HDR display recovered from a slant edge captured with an exposure stack. Errors in EXIF metadata lead to deviation from the actual edge spread function, affecting MTF estimation. On inspecting the captured images, we observed the banding artifacts close to the captured slant-edge when the image was merged with EXIF exposures. Fig. 10: Comparison of HDR reconstructions of an attention-based deep network [42] with aligned (first column) and corrupted (middle column) exposures. These insets are from the HDR deghosting dataset [24], which the deghosting network was trained on. Fig. 9: Fixing banding in smooth image gradients when images contain sufficient well-exposed pixels to populate the linear system. Our inverse-variance solution (last column) is more robust and produces better reconstructions. in the estimated display characteristics. 
Here, we demonstrate the need for exposure alignment for measuring MTF of HDR displays. The MTF measurement involves taking an image of a slanted edge [36], where one side has its pixels set to 0 and the other side to the maximum pixel value. Modern displays can reach very high contrast levels, so such a slanted-edge image can be faithfully captured only using an exposure stack. The inaccuracies of exposure times can easily introduce bias to our measurements. Figure 11 shows the edge profile computed with (blue) and without (orange) exposure estimation. The difference in reconstructions is due to inaccurate EXIF exposure values, visible as banding in the merged image. This translates to the bias at low-radiance pixels in Figure 11 and shows that errors in exposure times can easily lead to biases in the final MTF estimates. ## V Conclusions We propose to estimate relative exposures directly from images of an HDR exposure stack instead of relying on inaccurate camera EXIF metadata. The problem can be formulated as a weighted linear system of equations with Tikhonov regularization. Our formulation considers the noise characteristics of a camera to derive weights. We show that good performance on a large number of scenes is possible without the need for camera- and gain-specific noise parameters. We also describe an efficient approach based on the union of the highest weighted spanning trees of the exposure multigraph to reduce the size of the system for large images. When applied to multi-image HDR stacks, our method eliminates banding artifacts at smooth image gradients close to light sources or due to defocus blur. The exposure-aligned images are closer to physical quantities, making our work essential for using a camera as a light-measurement instrument. ## Acknowledgments This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement N\({}^{\circ}\) 725253-EyeCode). ## Moments of a transformed random variable Here, we use the Taylor expansion to derive expressions for the first and second moments (expected value and variance respectively) of a random variable under an arbitrary invertible function. In Section III-D we apply these results to obtain expressions for the \(\log\) of a normally distributed random variable. To get an approximate variance for a nonlinear function of a random variable, we describe the Delta method [23, 31] that uses a Taylor expansion. Let \(Z\) be an asymptotically normal random variable with a known mean \(\mu_{Z}\) and variance \(\sigma_{Z}^{2}\), and let \(f\) be an invertible and differentiable function. 
To get expressions for the expected value and variance of \(f(Z)\) we use its first order Taylor expansion about the mean \(\mu_{Z}\), \[\begin{split} f(Z)=& f(\mu_{Z})+f^{\prime}(\mu_{Z})(Z -\mu_{Z})\\ &+\text{higher-order terms}\,.\end{split} \tag{15}\] Ignoring the diminishing contributions of higher-order terms, the expected value is, \[\begin{split}\mathbb{E}[f(Z)]&\approx\mathbb{E}[f( \mu_{Z})]+\mathbb{E}[f^{\prime}(\mu_{Z})(Z-\mu_{Z})]\\ &=\mathbb{E}[f(\mu_{Z})]+f^{\prime}(\mu_{Z})(\mathbb{E}[Z]- \mathbb{E}[\mu_{Z}])\\ &=f(\mu_{Z})+f^{\prime}(\mu_{Z})(\mu_{Z}-\mu_{Z})\\ &=f(\mu_{Z})\,,\end{split} \tag{16}\] and the variance is \[\begin{split}\mathbb{V}[f(Z)]&\approx\mathbb{V}[f( \mu_{Z})]+\mathbb{V}[f^{\prime}(\mu_{Z})(Z-\mu_{Z})]\\ &=\mathbb{V}[f(\mu_{Z})]+f^{\prime}(\mu_{Z})^{2}\,\mathbb{V}[Z- \mu_{Z}]\\ &=\mathbb{V}[f(\mu_{Z})]+f^{\prime}(\mu_{Z})^{2}\,(\mathbb{V}[Z]+ \mathbb{V}[\mu_{Z}])\\ &=0+f^{\prime}(\mu_{Z})^{2}\,(\sigma_{Z}^{2}+0)\\ &=f^{\prime}(\mu_{Z})^{2}\,\sigma_{Z}^{2}\,.\end{split} \tag{17}\]
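As a quick numerical sanity check of Eqs. (16) and (17) for \(f=\log\), the following snippet compares Monte Carlo moments of \(\log Y\) with the delta-method approximations; the mean and standard deviation are arbitrary but chosen so that \(\sigma\ll\mu\), as for a well-exposed pixel.

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma = 500.0, 25.0                     # illustrative well-exposed pixel
y = rng.normal(mu, sigma, size=1_000_000)   # sigma << mu keeps samples positive
log_y = np.log(y)

print(np.mean(log_y), np.log(mu))           # close to log(mu) = 6.2146
print(np.var(log_y), (sigma / mu) ** 2)     # close to (sigma/mu)^2 = 0.0025
```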
2305.13899
Sequence-Level Knowledge Distillation for Class-Incremental End-to-End Spoken Language Understanding
The ability to learn new concepts sequentially is a major weakness for modern neural networks, which hinders their use in non-stationary environments. Their propensity to fit the current data distribution to the detriment of the past acquired knowledge leads to the catastrophic forgetting issue. In this work we tackle the problem of Spoken Language Understanding applied to a continual learning setting. We first define a class-incremental scenario for the SLURP dataset. Then, we propose three knowledge distillation (KD) approaches to mitigate forgetting for a sequence-to-sequence transformer model: the first KD method is applied to the encoder output (audio-KD), and the other two work on the decoder output, either directly on the token-level (tok-KD) or on the sequence-level (seq-KD) distributions. We show that the seq-KD substantially improves all the performance metrics, and its combination with the audio-KD further decreases the average WER and enhances the entity prediction metric.
Umberto Cappellazzo, Muqiao Yang, Daniele Falavigna, Alessio Brutti
2023-05-23T10:24:07Z
http://arxiv.org/abs/2305.13899v2
Sequence-Level Knowledge Distillation for Class-Incremental End-to-End Spoken Language Understanding ###### Abstract The ability to learn new concepts sequentially is a major weakness for modern neural networks, which hinders their use in non-stationary environments. Their propensity to fit the current data distribution to the detriment of the past acquired knowledge leads to the catastrophic forgetting issue. In this work we tackle the problem of Spoken Language Understanding applied to a continual learning setting. We first define a class-incremental scenario for the SLURP dataset. Then, we propose three knowledge distillation (KD) approaches to mitigate forgetting for a sequence-to-sequence transformer model: the first KD method is applied to the encoder output (audio-KD), and the other two work on the decoder output, either directly on the token-level (tok-KD) or on the sequence-level (seq-KD) distributions. We show that the seq-KD substantially improves all the performance metrics, and its combination with the audio-KD further decreases the average WER and enhances the entity prediction metric. Umberto Cappellazzo\({}^{1}\), Muqiao Yang\({}^{2}\), Daniele Falavigna\({}^{3}\), Alessio Brutti\({}^{3}\)\({}^{1}\)University of Trento, Trento, Italy \({}^{2}\)Carnegie Mellon University, Pittsburgh, PA, USA \({}^{3}\)Fondazione Bruno Kessler, Trento, Italy [email protected], [email protected], {falavi,brutti}@fbk.eu **Index Terms**: continual learning, spoken language understanding, knowledge distillation, transformer ## 1 Introduction Spoken Language Understanding (SLU) is an essential component of any system that interacts with humans through speech, such as voice assistants and smart home devices [1]. It is in charge of extrapolating the salient information from a spoken utterance so that proper actions can be taken to satisfy the user's requests. We can individuate two relevant tasks for SLU [2]: 1) Intent Classification, where we map the sentence to its corresponding intent, and 2) Entity Classification, or Slot Filling, by which we fill some fields of pre-defined semantic forms with content. Traditional SLU systems [3] employ a cascade of an automatic speech recognition (ASR) module followed by a natural language understanding module. Recently, end-to-end (E2E) strategies have garnered much attention [4, 5] because they directly output semantic information from the audio, therefore reducing the impact of error propagation. Most of the previous works on SLU have focused on the mainstream i.i.d. setting in which the entire dataset is available at once to the model [6, 7]. However, this is in stark contrast with practical scenarios where models incur severe shifts in the data distribution or need to adapt to new domains without retraining from scratch. In such conditions, deep models tend to disrupt previous knowledge in favor of the fresh task, leading to catastrophic forgetting [8]. This issue is tackled by the field of Continual Learning (CL) which endeavors to adapt a single model to learn a sequence of tasks such that the model performs properly both on new and prior tasks [9]. Recently, multiple CL methods have been proposed, based on three main strategies [10, 11]: _rehearsal-based_ methods abate forgetting by retaining a portion of the old data [12]; _regularization-based_ approaches preserve the weights' importance through ad-hoc regularization loss terms [13, 14], and _architectural methods_ modify the architecture of the model over time [15, 16]. 
Recently, the challenging SLURP dataset [17] has been released to address complex E2E SLU problems. In this paper, we propose to combine CL and SLU by defining a Class-Incremental Learning (CIL) setting for SLURP. Since each SLURP utterance is characterized by a domain scenario, we split the dataset into several tasks using the scenarios as a splitting criterion. Due to its lexical richness, the problems of intent and entity classification for SLURP are treated as a sequence-to-sequence (seq2seq) problem, where the intents and entities are generated along with the transcriptions. So, unlike the mainstream CL architecture composed of a feature extractor and a classifier, we exploit a transformer-based seq2seq architecture, whose decoder is affected by forgetting just like the encoder. Our proposed approach combines rehearsal with regularization via knowledge distillation (KD) to combat forgetting at both the encoder and decoder levels. We investigate three KD approaches: one is applied to the encoder's output (audio-KD), whereas the other two distill the knowledge at the decoder side, either at a local (token-KD) or global (seq-KD) level. We show that the seq-KD stands out as the best approach, and we conduct a study where we integrate multiple KDs at once. Our contributions can be summarized as follows: 1) We define a CIL scenario for the SLURP dataset, 2) we study how to moderate forgetting in a seq2seq model, thus moving away from the classical CL pipeline, and 3) we propose three KD losses that effectively reduce forgetting and discuss their individual and combined contributions. ## 2 Related work KD is a popular technique for model compression that allows transferring knowledge from a large, strong network, coined the teacher, to a considerably smaller network, the student [18, 19]. Besides the computer vision field, KD has also proved effective in NLP and speech-related tasks, where the student model mimics the teacher's distribution at a frame level. [20] and [21], for neural machine translation and CTC-based ASR, respectively, propose to apply the KD at a sequence level such that the student matches the probability distribution of the teacher's sequence obtained running beam search. More recently, [22] advance an attention distillation method to transfer knowledge from a large transformer-based teacher by aligning its attention maps with those of the student through a Kullback-Leibler divergence loss. The KD concept has also been exploited in CL. In this case, the teacher is the model trained in the previous tasks, whereas the student needs to be trained in the current task [13]. The goal now is to transfer the knowledge of the old classes to the student model, which no longer has access to data related to them. In addition to the KD-based strategy, other kinds of CL strategies have been proposed in the speech domain. [23] adapts the Gradient Episodic Memory method [12] to an online scenario for ASR. [24] explores the use of the prompt-learning paradigm for class-incremental event detection. [25] studies the use of self-supervised (SS) methods for continual representation learning for sound event classification, showing that SS learning is more resilient to forgetting than the supervised counterpart. To the best of our knowledge, we are the first to explore how to attenuate forgetting in a seq2seq model for joint ASR/SLU. We define a CIL setting over the SLURP dataset, and we investigate the use of different KD methods applied to the encoder and decoder side.
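As background for the decoder-side distillation mentioned above, a generic token-level KD term can be written as a KL divergence between the teacher's and student's output distributions; this is a standard formulation and not necessarily the exact loss adopted in this work.

```python
import torch.nn.functional as F

def token_level_kd(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between teacher and student token distributions.

    Both logit tensors have shape (batch, seq_len, vocab_size).
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```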
Our empirical evaluation shows the superiority of the sequence-level KD, and we elaborate on the entanglement between various KD combinations. ## 3 Class-Incremental learning for SLURP In this section, we describe how we have defined the CIL setting for the SLURP dataset [17]. SLURP is a multi-domain dataset for E2E SLU comprising around 56 hours of audio of people interacting with a home assistant (_slurp_real_), with the addition of 43.5 hours of synthetic recordings (_slurp_synth_). At present, this makes SLURP the biggest and the most diverse dataset in terms of lexical complexity for SLU. Each utterance is annotated with three semantics: Scenario, Action, and Entities. The pair (scenario, action) is defined as Intent. Overall, there are 18 unique scenarios, 46 actions (56 if we consider both _slurp_real_ and _slurp_synth_), 55 entity types, and 69 intents. Figure 1 provides an example of an annotated utterance. We have used the scenarios as a splitting criterion to define the tasks of the CIL setting. The complete list of scenarios is: ["**alarm**", "**audio**", "**calendar**", "**cooking**", "**datetime**", "**email**", "**general**", "**iot**", "**lists**", "**music**", "**news**", "**play**", "**qa**", "**recommendation**", "**social**", "**takeaway**", "**transport**", "**weather**"]. Since the number of scenarios is limited and each scenario provides a high-level concept associated with each utterance, we think that this split closely resembles a practical application that must adapt to new general domains. Additionally, since intent classification is the chief metric used to assess our model, using the scenarios as the splitting criterion ensures that only the intents related to the scenarios of the current task are available. Finally, although some actions and entities can be included in multiple scenarios, the overlap is very limited because the majority of the entities and actions are specific to a single scenario. For example, the action "**taxi**" is only associated with the scenario "**transport**", and the entity "**weather_descriptor**" with the scenario "**weather**". Figure 2 shows two consecutive tasks, each introducing 3 new scenarios. Another critical aspect is the order in which the scenarios are available to the model. In our implementation, the order depends on the cardinality, so that the scenarios with the highest cardinality appear first. In this way, we simulate a practical situation in which we endow the model with sufficient general knowledge, learned from the largest scenarios first, that will be useful for learning the more specific scenarios later. ## 4 Proposed approach As discussed in the previous section, we consider a CIL setting in which we want to adapt a single model to perform well on all seen tasks. Specifically, the training dataset is divided into \(T\) distinct tasks, \(\mathcal{D}=\{\mathcal{D}_{0},\ldots,\mathcal{D}_{T-1}\}\), based on the scenario labels, so that a scenario is included in one and only one task. The dataset \(\mathcal{D}_{t}\) of the \(t^{th}\) task comprises audio signals \(\mathcal{X}_{t}\) with associated transcriptions \(\mathcal{Y}_{t}\), i.e. \(\mathcal{D}_{t}=(\mathcal{X}_{t},\mathcal{Y}_{t})\). The CIL setting is challenging as the model must be able to distinguish all classes up to task \(t\), and at test time the task labels are not accessible (unlike task-incremental learning) [26]. We employ a transformer-based seq2seq ASR architecture, constituted by a Wav2vec 2.0 encoder (WavEnc) [27] followed by a transformer decoder.
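To make the scenario-based partition \(\mathcal{D}=\{\mathcal{D}_{0},\ldots,\mathcal{D}_{T-1}\}\) concrete, the following sketch builds the tasks from the scenario labels, ordering scenarios by decreasing cardinality as described in Section 3; the data structure and field names are illustrative assumptions, not the authors' code.

```python
from collections import Counter

def build_cil_tasks(dataset, scenarios_per_task=3):
    """Partition a list of {"audio", "transcription", "scenario"} examples into CIL tasks.

    Scenarios are sorted by decreasing cardinality and grouped so that each
    scenario appears in exactly one task; tasks[t] plays the role of D_t.
    """
    counts = Counter(example["scenario"] for example in dataset)
    ordered = [scenario for scenario, _ in counts.most_common()]   # largest scenarios first
    groups = [ordered[i:i + scenarios_per_task]
              for i in range(0, len(ordered), scenarios_per_task)]
    tasks = []
    for group in groups:
        members = set(group)
        tasks.append([ex for ex in dataset if ex["scenario"] in members])
    return tasks

# With the 18 SLURP scenarios, scenarios_per_task=3 yields the 6 tasks of SLURP-6,
# while scenarios_per_task=6 yields the 3 tasks of SLURP-3.
```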
Let \(\mathbf{x}=[x_{1},\ldots,x_{I}]\) be an audio input sequence of length \(I\), and \(\mathbf{y}=[y_{1},\ldots,y_{J}]\) be the corresponding output sequence of length \(J\), with \(y_{j}\in\mathcal{V}\), where \(\mathcal{V}\) is the set of all possible output subword tokens. The goal of the ASR model is to find the most probable output sequence \(\mathbf{\hat{y}}\) given the input sequence \(\mathbf{x}\): \[\mathbf{\hat{y}}=\operatorname*{argmax}_{\mathbf{y}\in\mathcal{Y}^{\star}}p( \mathbf{y}|\mathbf{x};\theta), \tag{1}\] where \(\mathcal{Y}^{\star}\) is the set of all possible token sequences and \(\theta\) represents the parameters of the seq2seq model. Suppose that \(p(\mathbf{y}|\mathbf{x};\theta_{t})\) and \(p(\mathbf{y}|\mathbf{x};\theta_{t-1})\) are the output probability distributions of the transformer decoder at task \(t\) and \(t-1\) parameterized by \(\theta_{t}\) and \(\theta_{t-1}\), respectively. The model at task \(t-1\) can be seen as the teacher model. Let also \(\mathcal{R}_{t}\) be the set of rehearsal data at the beginning of task \(t\). In the following equations, we use \(\mathbf{x}\in\mathcal{D}_{t}\) in place of \((\mathbf{x},\mathbf{y})\in\mathcal{D}_{t}\) for brevity. The standard training criterion of rehearsal-based CL methods consists of minimizing the cross-entropy loss over \(\mathcal{D}_{t}\cup\mathcal{R}_{t}\): \[\mathcal{L}_{\text{CE}}^{t}=-\sum_{\mathbf{x}\in\mathcal{D}_{t}\cup\mathcal{ R}_{t}}\log(p(\mathbf{y}|\mathbf{x};\theta_{t})). \tag{2}\] The main idea of KD is to transfer knowledge from the teacher network \(p(\mathbf{y}|\mathbf{x};\theta_{t-1})\) to a student model, such that the latter mimics the former's behavior. Basically, the KD is used to force the current model to not deviate too much from the teacher, which retains the knowledge of the previous tasks. We point out that the KD, unless otherwise stated, is applied to the sole rehearsal data since the teacher can effectively predict only the data seen in the previous tasks. We propose three different types of KDs: audio-KD, token-KD, and seq-KD. The audio-KD works at the encoder's output level, whereas the other two KDs are applied to the output of the decoder. In this way, we contrast forgetting either at the encoder or at the decoder side (or both, if we combine multiple KDs). The **audio-KD** forces the encoder's audio embeddings of the current task \(t\) to resemble those from the previous task \(t-1\). Let \(\text{WavEnc}(\mathbf{x})\in\mathbb{R}^{h}\) be the Wav2vec 2.0 encoder output followed by a mean operation to squeeze the temporal dimension, where \(h\) is the hidden size. We define the audio-KD loss as: \[\mathcal{L}_{\text{radio-KD}}^{t}=\sum_{\mathbf{x}\in\mathcal{R}_{t}}\left\| \text{WavEnc}_{\theta_{t-1}}(\mathbf{x})-\text{WavEnc}_{\theta_{t}}(\mathbf{x })\right\|^{2}, \tag{3}\] Figure 1: _Example of annotated utterance from the SLURP dataset. The intent in this case is the couple (music,likeness)._ Figure 2: _The class-incremental learning setting for the SLURP dataset, where 3 new scenarios are introduced in each task._ where \(\|\cdot\|\) is the Euclidean distance operator. Eq. 3 acts as a regularization term for the encoder. We can apply such similar reasoning to the decoder, which predicts each word of the transcription in an autoregressive way (in our case we use Byte-Pair Encoding [28], so we will use the term token rather than word to refer to the output units). The **token-KD** forces the current decoder to match the token-level distribution of the teacher. 
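Before detailing the decoder-side losses, the audio-KD term of Eq. (3) can be sketched in PyTorch as follows; the temporal mean pooling mirrors the definition of \(\text{WavEnc}(\mathbf{x})\) above, while the function names and the batch-mean reduction (instead of a plain sum over the rehearsal set) are illustrative assumptions.

```python
import torch

def audio_kd_loss(student_encoder, teacher_encoder, rehearsal_waveforms):
    """Audio-KD of Eq. (3): squared Euclidean distance between mean-pooled encoder outputs.

    Both encoders map a batch of waveforms to frame-level features of shape
    (batch, frames, hidden); the teacher is the frozen model from task t-1 and
    the loss is computed on rehearsal samples only.
    """
    with torch.no_grad():
        teacher_emb = teacher_encoder(rehearsal_waveforms).mean(dim=1)   # (batch, h)
    student_emb = student_encoder(rehearsal_waveforms).mean(dim=1)        # (batch, h)
    # Squared L2 distance per sample, averaged over the mini-batch
    # (Eq. (3) sums over the whole rehearsal set; the mean is a normalization choice).
    return ((student_emb - teacher_emb) ** 2).sum(dim=-1).mean()
```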
The token-KD is a kind of "local" distillation in that the student mimics the teacher for each token of the transcription. The corresponding CE criterion is defined as: \[\mathcal{L}^{t}_{\text{tok-KD}}=-\sum_{\mathbf{x}\in\mathcal{R}_{t}}\sum_{j=1}^{J}p(y_{j}|\mathbf{x},\mathbf{y}_{<j};\theta_{t-1})\log(p(y_{j}|\mathbf{x},\mathbf{y}_{<j};\theta_{t})), \tag{4}\] where \(\mathbf{y}_{<j}\) is the output sequence up to token \(j-1\). A potential flaw of this method is that if some initial token distributions are poorly estimated, their bias will be propagated until the end of the sequence. Indeed, a predicted token might be optimal at the current position in the sequence, but as we proceed through the rest of the sentence, it might turn out not to be the optimal one, given that later positions are not yet available when it is predicted. **Seq-KD** is an alternative approach that trains the student to generate the same output sequence as the teacher, thus working at the sequence level. In practice, we generate a new set of automatic transcriptions with the teacher model using beam search at the end of each task ("soft transcriptions"), and then we use them to train the student network with the CE criterion in the next task. Formally, we add the following CE loss: \[\mathcal{L}^{t}_{\text{seq-KD}}=-\sum_{\mathbf{x}\in\mathcal{R}_{t}}\log(p(\tilde{\mathbf{y}}|\mathbf{x};\theta_{t})), \tag{5}\] where \(\tilde{\mathbf{y}}\) is the output sequence generated with beam search using the teacher model. Overall, the total loss to be optimized at task \(t\) is: \[\mathcal{L}^{t}_{\text{TOT}}=(1-\lambda_{\text{KD}})\mathcal{L}^{t}_{\text{CE}}+\sum_{k\in\mathcal{K}}\lambda_{\text{KD}}\mathcal{L}^{t}_{k}, \tag{6}\] where \(\mathcal{K}=\{\text{audio-KD, tok-KD, seq-KD}\}\) and \(\lambda_{\text{KD}}\) is a weighting parameter. Depending on whether we employ a single KD or multiple ones, Eq. 6 changes accordingly. Figure 3 shows the learning process with the three KD losses applied to the transformer architecture. ## 5 Experiments ### Experimental settings **Dataset and CIL setting**. We conduct experiments on the SLURP dataset [17] (see section 3) using the official train, validation, and test splits, with a ratio of 70:10:20. In all experiments, _slurp_synth_ is used only for training. Since very long audio data are harmful for efficient training, we remove the training samples longer than 7 seconds (around 0.004% of the total training dataset). Concerning the definition of the CIL setting, we experiment on two configurations: 1) the dataset is partitioned into 3 tasks, each comprising 6 scenarios (denoted as SLURP-3); 2) a more challenging configuration wherein the 18 scenarios are distributed across 6 tasks (denoted as SLURP-6). **Pre-processing and model configurations.** As proposed in [29], the intent and entity classification problems are treated as a sequence-to-sequence ASR task, where both intent and entities associated with an utterance are predicted alongside its transcription. In a sense, we build an "augmented" transcription that will be fed to the transformer decoder, prepending the intent to the original transcription, followed by the entities and the corresponding lexical values. The special token _SEP_ is used to separate the intent from the entities and the entities from the original transcription, whereas the token _FILL_ is used to separate each entity from its value. If the original transcription is the one in Fig.
1, then the augmented transcription becomes: _music_likeness_SEP music_genre_FILL jazz_SEP I like jazz_. **Model.** The encoder is the base Wav2vec 2.0 model pre-trained and fine-tuned on 960 hours of Librispeech (a CNN-based feature extractor followed by 12 Transformer blocks with hidden size = 768, 8 attention heads, 2048 FFN hidden states). The feature extractor is kept frozen during the training, whereas the transformer blocks are fine-tuned. Then, the transformer decoder includes 6 layers with the same parameters as the encoder. We apply layer normalization to the input raw waveforms. The total number of parameters of the model is around 148M. **Training**. We tokenize the transcriptions using Byte-Pair Encoding (BPE) [28], with a vocabulary size of 1k and BPE dropout = 0.1. Both at inference time and for computing the soft labels for the KD-seq we run beam search with beam width = 20. The number of epochs for each task is {40,25,15} for SLURP-3, whereas {40,25,15,15,15} for SLURP-6. The batch size is \(32\). We use AdamW optimizer with learning rate = \(5e^{-5}\) and weight decay = \(0.1\). We use the validation set for hyperparameters tuning, and for selecting the best model for each task that is used for testing. Each experiment took approximately 1 day and a half on a Tesla V100 and a day on an Ampere A40. The code will be made public upon acceptance. **CL baselines and strategies**. Our upper bound is the _offline_ method consisting of a single macro-task with all the scenarios (i.e. no incremental learning), while the naive _fine-tuning_ approach, which retrains the same model task by task, is our lower bound. We consider two different sampling strategies for the rehearsal approach: 1) a random selection of the samples to retain, and 2) iCaRL [30], which selects the samples closest to their moving barycenter. We provide an example with a memory buffer of size equal to around 5% of the training dataset, and the rest of the experiments use 1%. Finally, we show the result for each KD strategy, as well as their various combinations. The KD weight in Eq. 6 is proportional to the fraction of rehearsal data in the mini-batch and is defined as: \[\lambda_{KD}=\sqrt{\frac{b_{rehe}}{b_{all}}}, \tag{7}\] where \(b_{rehe}\) is the number of rehearsal data in the current mini-batch, and \(b_{all}\) is the current mini-batch size. We refer the reader to [31] for a detailed description of this weight's choice. Figure 3: Illustration of the learning process in the proposed CIL setting. The model from the current task (student) mimics the behavior of the teacher model through **audio**, **token**, and **sequence KD** losses to counter forgetting._ **Metrics.** We evaluate the proposed methods using 4 metrics: the average intent accuracy, **Avg Acc**, after each task; the intent accuracy after the last task **Last Acc**; the average **SLU F1** metric for entity classification [17]; the average word error rate, **Avg WER**, after each task. ### Results The performance for both CIL settings, SLURP-3 and SLURP-6, are reported in Table 1. First and foremost, we note that, as expected, the fine-tuning approach struggles in both settings, thus incurring catastrophic forgetting. The use of a rehearsal memory (rows Rehe-5% and Rehe-1%) proves to be very effective, even with only 1% of retained data. Therefore, in the following experiments we consider 1% of data in the rehearsal memory. 
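For concreteness, a minimal sketch of how the distillation objective of Eqs. (4) and (6) and the rehearsal-proportional weight of Eq. (7) can be assembled is given below; tensor shapes and reduction choices are illustrative assumptions. The seq-KD term of Eq. (5) is simply the standard cross-entropy computed against the beam-search "soft transcriptions" \(\tilde{\mathbf{y}}\), so it reuses the usual CE code with the targets swapped.

```python
import math
import torch
import torch.nn.functional as F

def kd_weight(num_rehearsal, batch_size):
    """Eq. (7): lambda_KD = sqrt(b_rehe / b_all), recomputed for every mini-batch."""
    return math.sqrt(num_rehearsal / batch_size)

def token_kd_loss(student_logits, teacher_logits, rehearsal_mask):
    """Eq. (4): cross-entropy between the teacher and student token-level distributions.

    Logits have shape (batch, seq_len, vocab); `rehearsal_mask` is a boolean
    (batch,) tensor marking which samples of the mini-batch belong to R_t
    (the mask is assumed to select at least one sample).
    """
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    student_logp = F.log_softmax(student_logits, dim=-1)
    per_token = -(teacher_probs * student_logp).sum(dim=-1)       # (batch, seq_len)
    return per_token.sum(dim=-1)[rehearsal_mask].mean()

def total_loss(ce_loss, kd_losses, lam):
    """Eq. (6): (1 - lambda_KD) * L_CE + lambda_KD * sum of the selected KD terms."""
    return (1.0 - lam) * ce_loss + lam * sum(kd_losses)
```

The teacher logits come from the frozen model of task \(t-1\), so, as noted at the end of Section 5.2, the only extra cost is one additional forward pass on the rehearsal samples.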
We also experiment with a more sophisticated sampling strategy, iCaRL [30], which achieves noteworthy improvements, in particular for SLURP-6 (+1.44% of Avg Acc). When we focus on the proposed KDs, it is quite evident that the seq-KD leads to the most substantial improvement for both Avg and Last Acc metrics (+4.63% and +7.28% on SLURP-3). Instead, for WER and SLU F1, all three KDs behave similarly. Note that in our setting, previous intents are not seen anymore, and indeed the KDs help the model remember past scenarios. Conversely, though we expect the utterances to have some scenario-specific words, general speech tokens are spread among the tasks, making forgetting less critical for WER. Nevertheless, for the more challenging SLURP-6, KDs bring a notable enhancement also in terms of WER and SLU F1. **Combination of multiple KDs.** In this final section, we investigate whether combining multiple KD approaches results in additional improvement. We try to combine two KDs at a time, and all three KDs together. The results for SLURP-6 are reported in Table 2. As expected, the best combinations involve the use of seq-KD. Indeed, when the seq-KD is not included (audio + token), the results are even worse than using the KDs individually. Instead, the best combination is given by audio and seq KDs, the two approaches that yield the best improvement if taken singularly. We guess that forcing the encoder output of the current task to be similar to that of the previous task (audio-KD) favors the cross-attention layer of the decoder to attend to the most relevant part of the audio signals. We also mention that using all three KDs leads to satisfactory results, yet slightly worse than seq + audio. We think that, for this last case, since four KDs are involved, the design of the KD weights is more cumbersome, and more experiments are necessary. Finally, in Figure 4 we show the trend of the intent accuracy task by task for SLURP-6. We observe that the seq-KD outperforms both audio and token KDs by a large margin on all steps, and its combination with the audio KD leads to the best results. As a last analysis, we point out that the additional computational burden brought by the proposed KDs is limited for two reasons: 1) the KD losses take into account only the rehearsal samples, which are just a fraction of the entire dataset (i.e., 1%); 2) they just involve an additional forward pass through the teacher model, which is kept frozen during the current task. ## 6 Conclusions In this paper, we define a CIL setting for a challenging SLU dataset, SLURP. To mitigate forgetting, we propose three different KD-based strategies working at different levels in the seq2seq model. Our extensive experiments reveal the superior performance of the seq-KD, and that combining multiple KDs results in additional improvements. In future work we will focus on refining the seq-KD by exploiting multiple beam search hypotheses with their corresponding scores, and we will carry out more experiments to find the optimal weights as multiple KDs are used. 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**SLURP-3**} & \multicolumn{4}{c}{**SLURP-6**} \\ \cline{2-9} & **Avg Acc** & **Last Acc** & **Avg WER** & **SLU F1** & **Avg Acc** & **Last Acc** & **Avg WER** & **SLU F1** \\ \hline **Offline** & 85.84 & - & 20.46 & 70.59 & 85.84 & - & 20.46 & 70.59 \\ **Fine-tuning** & 46.27 & 18.36 & 35.82 & 49.25 & 33.56 & 12.42 & 46.26 & 37.88 \\ \hline **Rehe-5\% rand** & 79.79 & 74.82 & 25.79 & 65.85 & 77.12 & 73.11 & 28.87 & 63.22 \\ **Rehe-1\% rand** & 71.30 & 61.47 & 29.13 & 60.05 & 66.11 & 59.37 & 34.77 & 55.33 \\ \hline **Rehe-1\% iCaRL** & 71.49 & 61.66 & 28.62 & 60.23 & 67.55 & 62.55 & 33.82 & 56.09 \\ **+ audio-KD** & 72.14 & 63.03 & 28.68 & 61.08 & 68.40 & 62.83 & **32.04** & 58.15 \\ **+ token-KD** & 71.79 & 61.54 & 28.82 & **61.88** & 68.36 & 62.53 & 32.47 & 58.20 \\ \hline **+ seq-KD** & **76.12** & **68.94** & **28.56** & 61.50 & **71.56** & **64.82** & **32.50** & **58.29** \\ \hline \hline \end{tabular} \end{table} Table 1: Results in terms of Average Accuracy (\(\uparrow\)), Last Accuracy (\(\uparrow\)), Average WER (\(\downarrow\)), and SLU F1 (\(\uparrow\)) for different strategies. Figure 4: The trend of the intent accuracy on the observed tasks for the SLURP-6 setting.
2307.02135
Differentially Private Adversarial Auto-Encoder to Protect Gender in Voice Biometrics
Over the last decade, the use of Automatic Speaker Verification (ASV) systems has become increasingly widespread in response to the growing need for secure and efficient identity verification methods. The voice data encompasses a wealth of personal information, which includes but is not limited to gender, age, health condition, stress levels, and geographical and socio-cultural origins. These attributes, known as soft biometrics, are private and the user may wish to keep them confidential. However, with the advancement of machine learning algorithms, soft biometrics can be inferred automatically, creating the potential for unauthorized use. As such, it is crucial to ensure the protection of these personal data that are inherent within the voice while retaining the utility of identity recognition. In this paper, we present an adversarial Auto-Encoder--based approach to hide gender-related information in speaker embeddings, while preserving their effectiveness for speaker verification. We use an adversarial procedure against a gender classifier and incorporate a layer based on the Laplace mechanism into the Auto-Encoder architecture. This layer adds Laplace noise for more robust gender concealment and ensures differential privacy guarantees during inference for the output speaker embeddings. Experiments conducted on the VoxCeleb dataset demonstrate that speaker verification tasks can be effectively carried out while concealing speaker gender and ensuring differential privacy guarantees; moreover, the intensity of the Laplace noise can be tuned to select the desired trade-off between privacy and utility.
Oubaïda Chouchane, Michele Panariello, Oualid Zari, Ismet Kerenciler, Imen Chihaoui, Massimiliano Todisco, Melek Önen
2023-07-05T09:24:48Z
http://arxiv.org/abs/2307.02135v1
# Differentially Private Adversarial Auto-Encoder to Protect Gender in Voice Biometrics ###### Abstract. Over the last decade, the use of Automatic Speaker Verification (ASV) systems has become increasingly widespread in response to the growing need for secure and efficient identity verification methods. The voice data encompasses a wealth of personal information, which includes but is not limited to gender, age, health condition, stress levels, and geographical and socio-cultural origins. These attributes, known as soft biometrics, are private and the user may wish to keep them confidential. However, with the advancement of machine learning algorithms, soft biometrics can be inferred automatically, creating the potential for unauthorized use. As such, it is crucial to ensure the protection of these personal data that are inherent within the voice while retaining the utility of identity recognition. In this paper, we present an adversarial Auto-Encoder-based approach to hide gender-related information in speaker embeddings, while preserving their effectiveness for speaker verification. We use an adversarial procedure against a gender classifier and incorporate a layer based on the Laplace mechanism into the Auto-Encoder architecture. This layer adds Laplace noise for more robust gender concealment and ensures differential privacy guarantees during inference for the output speaker embeddings. Experiments conducted on the VoxCeleb dataset demonstrate that speaker verification tasks can be effectively carried out while concealing speaker gender and ensuring differential privacy guarantees; moreover, the intensity of the Laplace noise can be tuned to select the desired trade-off between privacy and utility. speaker verification; gender recognition; privacy preservation; differential privacy
## 1. Introduction accuracy (Han et al., 2017).
Studies in (Kang et al., 2018) also show that speakers' short recordings can be used to reconstruct their average-looking facial images that embody their physical characteristics such as age, gender, and ethnicity. However, despite their potential use for legitimate processing purposes, soft biometrics are susceptible to malicious utilization. This can occur through unauthorized data processing that puts individuals at risk of privacy concerns such as discrimination, invasive advertising, extortion, and other forms of abuse. As a specific illustration, the finance sector has been shown to exhibit gender-based biases in loan provision (Borda et al., 2017). This raises concerns regarding the potential existence of discriminatory lending practices that pose greater barriers to women than to men in the pursuit of starting a business enterprise (Borda et al., 2017). Solutions based on cryptographic primitives (Borda et al., 2017; Kiefer et al., 2017), while effective, produce completely garbled messages. Data obfuscation techniques, on the other hand, provide a more balanced approach to privacy preservation, protecting sensitive information without rendering the entire message content unrecognizable. Moreover, the voice is recognized as personal and sensitive and is therefore subject to protection under the General Data Protection Regulation (GDPR or Regulation 2016/679)1 together with numerous other data protection legislation, worldwide. The GDPR considers gender as well as a form of personal data and imposes an obligation to safeguard its protection. In light of the increasing concerns surrounding privacy, there has been a growing effort to protect private information like soft biometrics. This effort has led to multiple research initiatives aimed at developing and implementing effective techniques for protecting the privacy of soft biometric attributes (Borda et al., 2017; Kiefer et al., 2017). Among these, techniques based on the differential privacy (DP) notion (Kiefer et al., 2017) have received significant attention. Differentially private solutions (also referred to as global or centralized DP) were proposed for more than a decade and regarded as a privacy protection tool for different areas (Borda et al., 2017; Kiefer et al., 2017). While global DP mechanisms consist of a trusted central party/data curator collecting the users' data, aggregating them, and further protecting the aggregated information by adding some calibrated noise before releasing it to the public, local DP (LDP) solutions (Kiefer et al., 2017) protect the input data immediately to prevent the data curator from discovering the real, individual data. The noise is derived from a DP mechanism (e.g. Laplace mechanism). In this paper, we aim to address the challenge of protecting gender information while preserving the efficiency of speaker verification. Our approach is based on adding a calibrated noise drawn from the Laplace distribution during the training of an Adversarial Auto-Encoder (AAE) architecture. The noise is injected into the latent space (i.e. the output of the encoder) in order to assure that the model is \(\epsilon\)-differentially private and to enhance the capability of the adversary in obscuring gender information. The speaker makes use of the private AAE locally to conceal their gender prior to the dissemination of their speaker features for the purpose of authentication. 
Our experiments conducted on the VoxCeleb 1 and VoxCeleb 2 datasets demonstrate the feasibility of executing speaker verification tasks effectively while disrupting adversarial attempts of gender recognition. To the best of our knowledge, this is the first work that uses differentially private solutions to protect gender information while preserving identity in biometrics. Footnote 1: [https://gdpr-info.eu/](https://gdpr-info.eu/) ## 2. Related Work In recent years, there has been a proliferation of academic literature pertaining to the topic of soft biometrics protection in biometric recognition systems. A significant number of researchers have centered their efforts on developing technical solutions that are capable of preventing the extraction of soft biometric attributes and are either directly applied to the collected biometric data like face images and voice signals (i.e. at sample level) (Borda et al., 2017; Borda et al., 2017; Kiefer et al., 2017) or to the extracted features (i.e. at feature level) (Borda et al., 2017; Kiefer et al., 2017; Kiefer et al., 2017; Kiefer et al., 2017) or Mirjalili et al. (Mirjalili et al., 2017) proposed a Semi-Adversarial Network (SAN) based on an adversarial Convolutional Auto-Encoder (CAE) in order to hide the gender information from face images while retaining the biometric matching utility. In a follow-up work (Mirjalili et al., 2017), the same authors introduced an ensemble of SANs that are constituted of multiple auxiliary gender classifiers and face matches that generates diverse perturbations for an input face image. The idea behind this approach is that at least one of the perturbed images succeeds in fooling an arbitrary gender classifier. In (Mirjalili et al., 2017), Mirjalili et al. also attempted to combine a variety of face perturbations in an effort to improve the generalization capability of SAN models. Despite the successful privacy preservation of gender attributes by the aforementioned techniques, their robustness to arbitrary classifiers is limited. In a more recent study, Tang et al. (Tang et al., 2017) presented an alternative gender adversarial network model that effectively masks gender attributes while preserving both image quality and matching performance. Besides, this model demonstrates the ability to generalize to previously unseen gender classifiers. Further work was proposed by Bortolato et al. (Borda et al., 2017) to leverage the privacy-preservation of face images on the template level also using the AE technique. The authors suggested an AE-based solution that effectively separates gender attribute information from identity, resulting in good generalization performance across a variety of datasets. Additionally, Terhost et al. (Terhost et al., 2017) introduced an Incremental Variable Eliminations (IVE) algorithm that trains a set of decision trees to determine the importance of the variables that are crucial for predicting sensitive attributes. These variables were then incrementally removed from the facial templates to suppress gender and age features while maintaining high face-matching performance. In (Mirjalili et al., 2017) Melzi et. al. extended this approach to protect multiple soft biometrics (i.e. gender, age, and ethnicity) present in facial images. In speech-related literature, Aloufi et al. (Aloufi et al., 2017) built a Voice Conversion (VC) system that can conceal the emotional state of the users while maintaining speech recognition utility for voice-controlled IoT. 
The model is based on a Cycle-Generative Adversarial Network (GAN) architecture. Similarly, in (Borda et al., 2017), the authors introduced a neural VC architecture that can manipulate gender attributes present in the voice signal. This proposed VC architecture involves multiple Auto-Encoders that transform speech into independent linguistic and extra-linguistic representations. These representations are learned through an adversarial process and can be adjusted during VC. On a template level, Noe et. al. (Noeoe et al., 2017) proposed an adversarial Auto-Encoder architecture that disentangles gender attributes from x-vector speaker embeddings (Tang et al., 2017). The AE is combined with an external gender classifier that attempts to predict the attribute class from the encoded representations. The proposed solution succeeds in concealing gender-related information in the embedding while maintaining good ASV performance. Nonetheless, our experimental findings indicate that using speaker embeddings other than x-vectors, such as those generated by the ECAPA-TDNN model (Kang et al., 2018), yields inconsistent performance, implying potential challenges in achieving generalization. We hypothesize that this may be attributed to the superior representational capabilities of ECAPA-TDNN embeddings, which have largely superseded x-vectors in recent speaker modeling. ## 3. Gender Concealment In this section, we present the building blocks of the proposed gender concealment technique. First, we describe the architecture of the AAE and highlight its limitations in the concealment task. Second, we briefly introduce local differential privacy, a concept that is instrumental in improving the gender concealment capabilities of the model. Lastly, we illustrate how to combine the AAE and LDP to obtain a more effective technique for suppressing gender information in speaker embeddings, with a tunable privacy-utility trade-off and sound theoretical guarantees. ### Gender-Adversarial Auto-Encoder Let \(\mathbf{x}\) be an embedding representing a speaker identity. The goal of a Gender-Adversarial Auto-Encoder is to process \(\mathbf{x}\) so as to produce a new embedding \(\tilde{\mathbf{x}}\) that still encodes the identity of that same speaker, but is devoid of any information about their gender. In this section, we describe our implementation of this system, which mostly follows the one proposed in (Kang et al., 2018). Given an input embedding \(\mathbf{x}\in\mathbb{R}^{d}\), we create a compressed representation of it by means of \(e_{\phi_{1}}\left(\mathbf{x}\right)=\mathbf{z}\in\mathbb{R}^{l}\), where \(e_{\phi_{1}}\left(\cdot\right)\) is a feed-forward neural network parameterized by \(\phi_{1}\) and \(l<d\). The disentanglement of gender-related information from \(\mathbf{z}\) depends on an adversarial "discriminator" module \(a_{\theta}\left(\cdot\right)\) (also a feed-forward neural network) that attempts to infer the gender of the speaker associated with \(\mathbf{z}\). During training, we optimize \(\theta\) to minimize the objective: \[\mathcal{L}_{disc}\left(\mathbf{x},y,\theta\mid\phi_{1}\right)=-y\log\left(a _{\theta}\left(\mathbf{z}\right)\right)-\left(1-y\right)\log\left(1-a_{ \theta}\left(\mathbf{z}\right)\right) \tag{1}\] where \(y\in\left\{0,1\right\}\) is the ground-truth gender label (0 for male, 1 for female) and \(a_{\theta}\left(\mathbf{z}\right)\in\left[0,1\right]\) represents the predicted probability of \(\mathbf{z}\) having been produced by a female speaker. 
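A minimal PyTorch sketch of the modules involved, consistent with the layer sizes and activations reported in Section 4 (input dimension \(d=192\), latent dimension \(l=64\), single-layer encoder and decoder, two-layer discriminator), is given below; the single-unit sigmoid output of the discriminator is an assumption made so that it returns the probability \(a_{\theta}(\mathbf{z})\).

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Single fully-connected layer followed by ReLU and batch normalization, d=192 -> l=64."""
    def __init__(self, d=192, l=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, l), nn.ReLU(), nn.BatchNorm1d(l))

    def forward(self, x):
        return self.net(x)          # latent code z

class Decoder(nn.Module):
    """Single fully-connected layer followed by tanh, mapping the latent code back to d=192."""
    def __init__(self, d=192, l=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(l, d), nn.Tanh())

    def forward(self, z):
        return self.net(z)          # reconstructed embedding x_tilde

class Discriminator(nn.Module):
    """Two fully-connected layers (64 -> 32 with ReLU, 32 -> 1 with sigmoid)."""
    def __init__(self, l=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(l, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).squeeze(-1)   # predicted probability that z comes from a female speaker
```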
The suppression of the gender-related information is performed by adversarially training the encoder to "fool" the discriminator, i.e. to make it so that it is not capable of accurately predicting the speaker's gender from \(\mathbf{z}\). In practice, this is achieved by optimizing the same objective as (1), except that the probability predicted by the discriminator is inverted: \[\mathcal{L}_{ado}\left(\mathbf{x},y,\phi_{1}\mid\theta\right)=-y\log\left(1- a_{\theta}\left(\mathbf{z}\right)\right)-\left(1-y\right)\log\left(a_{ \theta}\left(\mathbf{z}\right)\right) \tag{2}\] A decoder feed-forward module \(d_{\phi_{2}}\left(\cdot\right)\) attempts to reconstruct the original input embedding from \(\mathbf{z}\). The role of the decoder is to guarantee that the reconstructed embedding can still be used for other tasks, e.g. speaker verification, despite the suppression of gender-related attributes. Thus, the Auto-Encoder is optimized end-to-end according to a further "reconstruction" objective: the cosine distance between the original input embedding and the reconstructed one. \[\mathcal{L}_{rec}\left(\mathbf{x},\phi_{1},\phi_{2}\right)=1-\cos\left( \mathbf{x},d_{\phi_{2}}\left(\mathbf{z}\right)\right) \tag{3}\] Overall, we aim to strike a balance between privacy protection (optimizing \(\mathcal{L}_{disc}\), \(\mathcal{L}_{ado}\)) and utility (optimizing \(\mathcal{L}_{rec}\)) of the processed embeddings. The overall system is trained by alternating gradient descent steps on the parameters of the Auto-Encoder \(\phi=\left\{\phi_{1},\phi_{2}\right\}\) and the parameters of the discriminator \(\theta\): \[\begin{split}\phi&\leftarrow\nabla_{\phi}\left( \mathcal{L}_{adv}+\mathcal{L}_{rec}\right)\\ \theta&\leftarrow\nabla_{\theta}\mathcal{L}_{disc} \end{split} \tag{4}\] At test time, we produce a protected embedding \(\tilde{\mathbf{x}}\) by passing \(\mathbf{x}\) through the Auto-Encoder: \[\tilde{\mathbf{x}}=d_{\phi_{2}}\left(e_{\phi_{1}}\left(\mathbf{x}\right)\right) \tag{5}\] The privacy preservation capability of the Auto-Encoder is evaluated upon the ability of an attacker to infer the gender of the original speaker from the protected utterance \(\tilde{\mathbf{x}}\). To measure it, we train an external gender classifier \(c\left(\cdot\right)\) on a separate set of clean embeddings, then report the gender classification performance of \(c\left(\cdot\right)\) on the original test embeddings and their privacy-protected version: the difference between the two represents the effectiveness of gender concealment. The utility preservation is evaluated by comparing the performance of the same ASV system on the original and protected speaker embeddings. We perform a preliminary evaluation of the reconstructed speaker embeddings of the Gender-AAE and obtain Area Under the ROC Curve (AUC) for gender recognition = 98.45 (\(10^{-2}\)) and Equal Error Rate (EER) = 1.86% for ASV performance. In order to ensure that the predictions of the gender classifier are truly random, the AUC must be close to 50%. Therefore, it is necessary to strengthen the adversarial performance to conceal gender information. In this work, we investigate the impact of adding noise derived from a Laplace mechanism which is well-studied for noise addition and calibration and also provides DP guarantees. The latent vectors \(\mathbf{z}\) are locally differentially private thanks to the Laplace mechanism and subsequently, the reconstructed vectors are differentially private by the post-processing property of DP (Song et al., 2018). 
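One possible arrangement of the alternating updates of Eq. (4), including the Laplace-noise layer applied to the latent code, is sketched below. The way the noise scale is derived from the clipping bound and \(\epsilon\) is a conservative illustrative assumption (a bound on the L1 sensitivity of L2-clipped vectors); the paper calibrates its own scale. The default values \(C=18.35\) and \(\epsilon=15\) are those reported in Section 4, and \(y\) is assumed to be a float tensor of gender labels (0 = male, 1 = female).

```python
import torch
import torch.nn.functional as F

def laplace_noise_layer(z, clip_norm, epsilon):
    """Clip each latent vector to L2 norm `clip_norm`, then add element-wise Laplace noise.

    The scale 2 * clip_norm * sqrt(dim) / epsilon bounds the L1 sensitivity of the
    clipped vectors and is an illustrative, conservative choice, not the paper's exact calibration.
    """
    norms = z.norm(p=2, dim=1, keepdim=True).clamp(min=1e-12)
    z = z * torch.clamp(clip_norm / norms, max=1.0)
    scale = 2.0 * clip_norm * (z.shape[1] ** 0.5) / epsilon
    noise = torch.distributions.Laplace(torch.zeros_like(z), torch.full_like(z, scale)).sample()
    return z + noise

def training_step(x, y, encoder, decoder, discriminator, opt_ae, opt_disc,
                  clip_norm=18.35, epsilon=15.0):
    """One alternating update of Eq. (4): AE step on L_adv + L_rec, then discriminator step on L_disc."""
    # Auto-Encoder update: fool the discriminator (Eq. 2) and reconstruct the embedding (Eq. 3).
    z = laplace_noise_layer(encoder(x), clip_norm, epsilon)
    x_rec = decoder(z)
    loss_adv = F.binary_cross_entropy(discriminator(z), 1.0 - y)            # Eq. (2): inverted labels
    loss_rec = (1.0 - F.cosine_similarity(x, x_rec, dim=1)).mean()          # Eq. (3)
    opt_ae.zero_grad()
    (loss_adv + loss_rec).backward()
    opt_ae.step()

    # Discriminator update on detached latent codes (Eq. 1).
    z = laplace_noise_layer(encoder(x), clip_norm, epsilon).detach()
    loss_disc = F.binary_cross_entropy(discriminator(z), y)
    opt_disc.zero_grad()
    loss_disc.backward()
    opt_disc.step()
    return loss_adv.item(), loss_rec.item(), loss_disc.item()
```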
Figure 1. Illustration of the proposed system at training time. Solid and dashed arrows represent forward and backward propagation respectively. Modules are colored based on which gradient signal they are optimized by. ### Local Differential Privacy Local differential privacy plays a crucial role in protecting personal data like soft biometrics and assessing the privacy risks. In this section, we provide a brief description of the underlying concepts of local differential privacy and the Laplace mechanism. _Definition._ Local differential privacy is a state-of-the-art privacy model and consists in protecting individual input data before its collection. LDP ensures privacy for each user locally (i.e. each individual record is protected rather than the entire dataset as a whole) by adding noise without the necessity of trusting a central authority. Formally, \(\epsilon\)-local differential privacy is defined as follows. Definition 3.1 (Local Differential Privacy). A randomized mechanism \(\mathcal{M}\) satisfies \(\epsilon\)-local differential privacy if, for any two inputs \(x\), \(x^{\prime}\) and any possible output \(y\), it holds that \(\Pr[\mathcal{M}(x)=y]\leq e^{\epsilon}\cdot\Pr[\mathcal{M}(x^{\prime})=y]\). The Laplace mechanism is a standard way to achieve \(\epsilon\)-DP: given a function \(f\), it adds noise drawn from a Laplace distribution with scale \(\Delta f/\epsilon\), where the sensitivity \(\Delta f\) is the maximum change of \(f\) over any pair of inputs. A key property of differential privacy is its robustness to post-processing: let \(\mathcal{M}\) be an \(\epsilon\)-differentially private mechanism and \(g\) an arbitrary mapping from the set of possible outputs to an arbitrary set. Then, \(g\circ\mathcal{M}\) is \(\epsilon\)-differentially private. Similarly to the work in (Kang et al., 2017), we add noise to the latent space of the Auto-Encoder during the training, and use the same privacy proof, thanks to the post-processing property: \(d_{\phi_{2}}\circ dp\) satisfies \(\epsilon\)-DP, and so does the Auto-Encoder \(d_{\phi_{2}}\circ dp\circ e_{\phi_{1}}\), where \(dp\) denotes the Laplace-noise layer applied to the latent code. ## 4. Experimental Setup and Results In this section, we discuss the experimental configurations and results. The feature extractor used to produce the speaker embeddings is the ECAPA-TDNN, whose output feature size is \(d=192\). The modules of the proposed encoder and decoder models are single-layer fully-connected neural networks and the gender classifiers (i.e. discriminator and external) are two-layer fully-connected neural networks. The encoder is followed by a ReLU activation and batch normalization, and the decoder is followed by a tanh activation function. We set the latent space to be of size \(l=64\). The adversarial classifier is composed of two fully-connected layers: the first one has 64 input units with a ReLU activation function, and the second one has 32 input units with a sigmoid activation function. An external gender classifier, which plays the role of an attacker trying to infer gender, is used to assess privacy protection; it has the same architecture as the discriminator, with 192 input units in the first layer and 100 input units in the second layer. The ASV assessment is done by first creating a model for each speaker; trial scores are then obtained by comparing trial embeddings with the respective speaker models by means of cosine similarity. The training process is carried out with the Adam optimizer using a learning rate of \(1\cdot 10^{-3}\) and a minibatch size of 128. The training dataset of the AAE is a subset of VoxCeleb2 (Krizhevsky et al., 2012) development partition (397032 segments per class). The testing is conducted using a subset of the VoxCeleb1 (Krizhevsky et al., 2012) test partition (2900 segments per class). The external gender classifier is trained using a subset of the VoxCeleb1 development partition (61616 segments per class). To select the clipping threshold \(C\), we compute the median of the norm of all unclipped \(z\) vectors during the training, which gives \(C=18.35\). We initially explore the behavior of the system by setting the test-time budget \(\epsilon_{ts}=\infty\) (i.e. no DP protection at inference) and for increasing values of the training-time budget \(\epsilon_{tr}\): Figure 2 shows the achieved ASV EER and gender classification AUC.
We experimentally determine the noise scale and prioritize higher \(\epsilon_{tr}\) resolution for the region with significant privacy/utility changes, while lower resolution suffices for regions with minor variations. As expected, privacy and utility scores inversely mirror one another. Specifically, \(\epsilon_{tr}=15\) seems to strike a satisfactory balance between the two, resulting in a 0.55 gender classification AUC while achieving an ASV EER of 8.1%. For comparison, the same gender classifier and ASV system obtain an AUC of nearly 1 and an EER of 1.1% on the original ECAPA embeddings, respectively. We pick the model weights trained with \(\epsilon_{tr}=15\) and \(\epsilon_{tr}=20\) and experiment with values of \(\epsilon_{ts}<\infty\) to add DP protection to the speaker embeddings. Setting \(\epsilon_{ts}=\epsilon_{tr}\) further enhances the level of gender concealment: AUC scores drop from 0.55 to 0.50 (from 0.76 to 0.55 respectively) for \(\epsilon_{tr}=\epsilon_{ts}=15\) (\(\epsilon_{tr}=\epsilon_{ts}=20\) respectively). However, ASV EER degrades by around 20 percentage points in both scenarios. By increasing \(\epsilon_{ts}\) by 20 units, it is possible to restore the ASV EER to around 10% (for both model versions) while achieving satisfactory AUC values of 0.55 and 0.68 for \(\epsilon_{tr}=15\) and \(\epsilon_{tr}=20\), respectively. In general, these results show the level of flexibility that the system can achieve even after training, all while providing DP guarantees over the produced embeddings. Informal experiments run with \(\epsilon_{tr}=\infty\) have resulted in rapid erasure of all meaningful information from the speaker embeddings even for high values of \(\epsilon_{ts}\): this is indicative of the relevance of including the Laplace noise during training for the DP protection to be applicable at test time. ## 5. Conclusions We have presented an AE-based system to conceal gender-related information in speaker embeddings while retaining their utility for a speaker verification task. We perform the concealment by means of an adversarial game between an Auto-Encoder and an external gender classifier, and we improve upon previous work by introducing a Laplace-noise-addition layer within the architecture. The Laplace noise regularizes the training and allows for more Figure 3. ASV EER and gender classification AUC achieved by the system for increasing values of \(\epsilon_{ts}\), for the cases of \(\epsilon_{tr}=15\) and \(\epsilon_{tr}=20\). Figure 2. ASV EER and gender classification AUC achieved by the system for increasing values of \(\epsilon_{tr}\). robust gender concealment, while also endowing the output speaker embedding with DP guarantees at inference time. The tuning of the \(\epsilon\) parameter of the Laplace layer allows selecting the desired balance of privacy protection and utility, even after the training process has finished. Experimental results show that the proposed solution is effective in preserving gender privacy while maintaining utility for speaker verification tasks. Furthermore, the flexible trade-off between privacy and utility provided by our approach can be adapted to individual needs, making it a promising solution for privacy-preserving applications. ## Acknowledgments This work is supported by the TRESPAS-ETN project funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 860813. It is also supported by the ANR-DFG RESPECT project.
2302.07693
Fine-tuning of sign language recognition models: a technical report
Sign Language Recognition (SLR) is an essential yet challenging task since sign language is performed with the fast and complex movement of hand gestures, body posture, and even facial expressions. In this work, we focused on investigating two questions: how fine-tuning on datasets from other sign languages helps improve sign recognition quality, and whether sign recognition is possible in real time without using a GPU. Datasets of three different sign languages (American sign language - WLASL, Turkish - AUTSL, Russian - RSL) have been used to validate the models. The average speed of this system has reached 3 predictions per second, which meets the requirements for the real-time scenario. This prototype will help speech- or hearing-impaired people talk with others through the internet. We also investigated how the additional training of the model in another sign language affects the quality of recognition. The results show that further training of the model on the data of another sign language almost always leads to an improvement in the quality of gesture recognition. We also provide code for reproducing model training experiments, converting models to ONNX format, and inference for real-time gesture recognition.
Maxim Novopoltsev, Leonid Verkhovtsev, Ruslan Murtazin, Dmitriy Milevich, Iuliia Zemtsova
2023-02-15T14:36:18Z
http://arxiv.org/abs/2302.07693v2
# Fine-tuning of sign language recognition models: a technical report ###### Abstract Sign Language Recognition (SLR) is an essential yet challenging task since sign language is performed with the fast and complex movement of hand gestures, body posture, and even facial expressions. In this work, we focused on investigating two questions: how fine-tuning on datasets from other sign languages helps improve sign recognition quality, and whether sign recognition is possible in real-time without using GPU. Datasets from three different sign languages (American sign language WLASL, Turkish - AUTSL, Russian - RSL) have been used to validate the models. The average speed of this system has reached 3 predictions per second, which meets the requirements for the real-time scenario. This model (prototype) will help speech- or hearing-impaired people communicate with others over the internet. We also investigated how the additional training of the model in another sign language affects the quality of recognition. The results show that further training of the model on the data of another sign language almost always leads to an improvement in the quality of gesture recognition. We also provide code for reproducing model training experiments, converting models to ONNX format, and inference for real-time gesture recognition. Sign Language Recognition, Russian Sign Language, action classification, action recognition Further author information: Maxim Novopoltsev.: E-mail: [email protected] Leonid Verkhovtsev.: E-mail: [email protected] Ruslan Murtazin.: E-mail: [email protected] Dmitriy Milevich.: E-mail: [email protected] Julia Zemtsova.: E-mail: [email protected] ## 1 Introduction Sign language recognition (SLR) is the task of recognizing individual signs or tokens called glosses from a given segment of a signing video clip. There are two types of sign language recognition systems: sensor-based and vision-based. The disadvantage of the first method is that it is expensive, requires wearing sensors to recognize gestures, and is also unstable in some environments. Much research has endeavored to develop high-performance SLR, but most of these systems require large computational power, including GPU usage. We present an SLR system that runs on CPU and performs about 3 predictions per second on an Apple MacBook Pro 16 (2021) M1 Pro 16GB. Our code is available at [12]. Considering that existing word-level Russian sign language datasets do not provide a large-scale vocabulary of signs, we first collected a large-scale set of word-level signs in RSL as well as their corresponding annotations. Further, we introduce a new large-scale Russian Sign Language (RSL) video dataset containing more than 240,000 gloss samples performed by 5 signers. We select 4 signers for training and the remaining 1 signer for testing. ## 2 Related Works ### Sign Language Datasets Sign Language Recognition (SLR) has achieved significant progress and obtained high recognition accuracy in recent years due to the development of practical deep learning architectures and the surge of computational power. In summary, the current publicly available datasets are constrained by one or more of the following: limited vocabulary size, short video or total duration, limited domain. Several benchmarks have been proposed for American (WLASL, MS-ASL, How2Sign, Boston ASL LVD, ASLLVD), German (DGS Kinect 40), Chinese (Isolated SLR500, NMFs-CSL), and Turkish (AUTSL) sign languages. RSL datasets, on the other hand, are scarce.
Table 1 provides an overview of the large-scale isolated and continuous sign language datasets. Many existing sign language datasets contain isolated signs, but most real-world usage involves continuous sign language. There are no Russian continuous sign language datasets. Our dataset consists of 2644 signs performed by 5 different signers and 244101 isolated sign video samples in total. The RSL dataset can be used both for the sign language recognition task and for the sign language translation task. ### Sign language recognition The early sign language automation tasks were mainly for sign language recognition. Initially, due to technical limitations, research on sign language recognition was focused on hand-crafted features computed for hand shape and motion [11, 13, 27]. Pose [4, 6, 24, 25, 24], face [11, 16, 23] and mouth [2, 16, 17] have then been widely used as part of the recognition pipelines. For real-life communication between hearing and deaf people, continuous sign sentence recognition emerged later. Koller et al. [18] present a hybrid approach based on CNN-RNN-HMM. More recently 3D CNNs have been adopted due to their representation capacity for spatio-temporal data [1, 3, 5, 14, 20]. There have been efforts to use sequence-to-sequence translation models for sign language translation [7], though this has been limited to the weather discourse of RWTH-Phoenix, and the method is limited by the size of the training set. The recent work of [21] localises signs in continuous news footage to improve an isolated sign classifier. Some authors use additional modalities like RGB-D data [26]. Two recent concurrent works [1, 20] showed that I3D models significantly outperform their pose-based counterparts. The Video Transformer Network (VTN), originally proposed by Kozlov et al. [19], was used for isolated sign recognition on the corpus of Flemish sign language and achieved promising results (74.7% accuracy on 100 classes), which were mainly limited by the size of the labeled dataset. Recent work [9] applies a VTN model with hand cropping and pose flow (VTN-PF) and achieves 92.92% accuracy on the balanced test set of AUTSL. The authors of [15] propose a Sign Language Graph Convolution Network (SL-GCN) to model the embedded dynamics and a novel Separable Spatial Temporal Convolution Network (SSTCN) to exploit skeleton features. ## 3 Using pretrained models and fine tuning Sign language recognition (SLR) involves extracting features from videos and classifying them. The main challenge in working with sign languages is the lack of large datasets. To address this issue, large models trained on more general data are commonly used and then fine-tuned for a specific (downstream) task. In this work, we investigate the impact of incorporating data from other sign languages on the performance of a model. We tested the most widely used models for the Action Recognition task - VideoSWIN Transformer [22] and MViT [10], pre-trained on the Kinetics600 dataset. Then, we fine-tuned the models on other sign language datasets. In Table 2, the entry "RSL \(\rightarrow\) AUTSL \(\rightarrow\) WLASL" implies that we took the Kinetics pre-trained network, fine-tuned it on the RSL dataset, then on the AUTSL dataset, and finally on the WLASL dataset. The obtained metrics show that the use of datasets from other sign languages leads to a significant improvement in the recognition of sign gestures.
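As a rough illustration of such a sequential fine-tuning chain (e.g. RSL \(\rightarrow\) AUTSL \(\rightarrow\) WLASL), the sketch below swaps the classification head for each new vocabulary and continues training from the previous weights. It is not the authors' released code: the backbone constructor, data loaders, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch of sequential fine-tuning across sign language datasets.
# `build_videoswin_tiny`, the loaders and the epoch counts are assumptions.
import torch
import torch.nn as nn


def replace_head(model: nn.Module, num_classes: int) -> nn.Module:
    # Swap the classification head for the vocabulary of the next dataset.
    in_features = model.head.in_features
    model.head = nn.Linear(in_features, num_classes)
    return model


def train_one_dataset(model, loader, epochs, lr=1e-4):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for clips, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(clips), labels)
            loss.backward()
            opt.step()
    return model


# model = build_videoswin_tiny(pretrained="kinetics600")   # assumed constructor
# for loader, n_cls in [(rsl_loader, 2644), (autsl_loader, 226), (wlasl_loader, 2000)]:
#     model = replace_head(model, n_cls)
#     model = train_one_dataset(model, loader, epochs=30)
```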
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Datasets & Sign Language & Task & Duration (h) & Vocab. glosses & Glosses \\ \hline WLASL & American & Recognition & 14 & 2000 & 21000 \\ Boston ASLLVD & American & Recognition & - & 3300 & 9800 \\ MS-ASL & American & Recognition & - & 1000 & 25513 \\ AUTSL & Turkish & Recognition & - & 226 & 38336 \\ Phoenix 14t & German & Translation & 11 & 1066 & 76000 \\ How2sign & American & Translation & 79 & 16000 & - \\ RSL & Russian & Translation & 69 & 2644 & 244101 \\ \hline \end{tabular} \end{table} Table 1: Summary of sign language datasets. ## 4 Real-Time Inference After the model has been trained, it is necessary to convert it to the ONNX format to use it in real time. The system takes frames from a web camera as input and produces predicted values that are displayed on the screen. The real-time operation of the system differs from the training mode. In training mode, there is usually one gloss for each video fragment. But in inference mode, there may be one gloss, multiple glosses, or no glosses in the fragment. To avoid excessive false triggers, especially when there are no glosses on the video, we selected the confidence threshold of the neural network, averaged adjacent predictions, and selected how often to send sets of frames to the neural network for prediction. In inference mode, the input to the neural network is not individual gestures but continuous signing, and the signed phrase on the video is used as ground truth. Therefore, while we focused on average accuracy during training, WER (Word Error Rate) became the main metric during inference. We present the WER values for different thresholds, strides, and numbers of forecasts for averaging (Table 3). ## 5 Conclusion In this work, we focused on investigating two questions: how fine-tuning on datasets from other sign languages helps improve sign recognition quality, and whether sign recognition is possible in real-time without using GPU. For experiments, we used well-established architectures VideoSWIN Transformer and MViT. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{avg size} & \multicolumn{11}{c|}{stride} \\ \cline{2-12} & 0.0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 \\ \hline \multicolumn{12}{|c|}{threshold=0.5} \\ \hline 1 & 2.21 & 1.25 & 0.889 & 0.762 & 0.714 & 0.726 & 0.726 & **0.71** & 0.839 & 0.806 & 0.823 \\ 2 & 1.19 & 1.05 & 0.794 & 0.714 & 0.714 & **0.71** & 0.79 & 0.774 & 0.855 & 0.79 & 0.814 \\ 3 & 1.08 & 0.921 & 0.726 & 0.742 & 0.726 & 0.762 & 0.742 & 0.794 & 0.839 & 0.79 & 0.823 \\ \hline \multicolumn{12}{|c|}{threshold=0.9} \\ \hline 1 & **0.746** & **0.746** & 0.762 & 0.746 & 0.81 & 0.79 & 0.823 & 0.806 & 0.903 & 0.903 & 0.871 \\ 2 & **0.746** & **0.746** & **0.746** & 0.794 & 0.778 & 0.823 & 0.823 & 0.839 & 0.903 & 0.871 & 0.915 \\ 3 & **0.746** & **0.746** & 0.774 & 0.806 & 0.80 & 0.841 & 0.839 & 0.825 & 0.919 & 0.871 & 0.887 \\ \hline \multicolumn{12}{|c|}{threshold=0.99} \\ \hline 1 & **0.81** & 0.825 & 0.825 & 0.857 & 0.889 & 0.887 & 0.919 & 0.871 & 0.952 & 0.935 & 0.968 \\ 2 & 0.825 & 0.825 & 0.857 & 0.889 & 0.887 & 0.935 & 0.952 & 0.952 & 0.952 & 0.949 \\ 3 & 0.825 & 0.825 & 0.825 & 0.855 & 0.919 & 0.952 & 0.919 & 0.937 & 0.984 & 0.903 & 0.984 \\ \hline \end{tabular} \end{table} Table 3: Inference time WER with different parameters on test set.
Avg size is the number of predictions we averaged, stride determines after how many frames (relative to the window width) the next prediction is made, and threshold is how confident the model must be in a prediction before it is emitted. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Train map & Model & TOP 1 & TOP 5 & Mean Class Acc \\ \hline \multicolumn{5}{|c|}{WLASL} \\ \hline WLASL & Swin tiny & 44.58 & 80.37 & 41.50 \\ RSL\(\rightarrow\) WLASL & Swin tiny & 53.54 & 85.72 & 51.03 \\ RSL \(\rightarrow\) AUTSL \(\rightarrow\) WLASL & Swin tiny & **58.51** & 88.36 & **56.00** \\ RSL\(\rightarrow\) WLASL & MViT small & 56.88 & **88.57** & 54.55 \\ \hline \multicolumn{5}{|c|}{AUTSL} \\ \hline AUTSL & Swin tiny & 94.33 & 99.41 & 94.29 \\ RSL \(\rightarrow\) AUTSL & Swin tiny & 95.38 & **99.65** & 95.33 \\ RSL \(\rightarrow\) WLASL\(\rightarrow\) AUTSL & Swin tiny & 95.62 & 99.63 & 95.59 \\ RSL \(\rightarrow\) AUTSL & MViT small & **95.72** & 99.41 & **95.74** \\ \hline \end{tabular} \end{table} Table 2: Sign language recognition accuracy (%) of fine-tuned models on test sets. The results of the experiments show a significant improvement in sign recognition quality when models are fine-tuned on other sign languages. As seen in Table 4, our methods can achieve relatively high classification accuracy on WLASL and AUTSL validation subsets. We achieved sign recognition quality comparable to the SAM-SLR model, which is an ensemble of 6 models, while our models can work on CPU in real time, providing 2-3 predictions per second on an Apple MacBook Pro 16 (2021) M1 Pro 16GB. In this article, we provide the code for reproducing the training experiments as well as for converting the models into the ONNX format [12].
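To make the real-time procedure of Section 4 concrete, the following is a rough sketch of such a sliding-window inference loop with a confidence threshold, a stride, and averaging of adjacent predictions. It is not the released code [12]: the ONNX file name, window length, frame preprocessing, and parameter values are illustrative assumptions.

```python
# Rough sketch of the real-time loop: a sliding window of frames is sent to the
# ONNX model every `stride * window` frames, adjacent predictions are averaged,
# and a gloss is emitted only above a confidence threshold.
import collections
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("sign_model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

WINDOW, STRIDE, THRESHOLD, AVG_SIZE = 32, 0.5, 0.9, 2
frames = collections.deque(maxlen=WINDOW)
recent = collections.deque(maxlen=AVG_SIZE)
since_last = 0


def predict(clip: np.ndarray) -> np.ndarray:
    logits = session.run(None, {input_name: clip[None].astype(np.float32)})[0][0]
    e = np.exp(logits - logits.max())
    return e / e.sum()


def on_new_frame(frame: np.ndarray, glosses: list):
    global since_last
    frames.append(frame)
    since_last += 1
    if len(frames) == WINDOW and since_last >= int(STRIDE * WINDOW):
        since_last = 0
        recent.append(predict(np.stack(frames)))
        probs = np.stack(recent).mean(axis=0)
        if probs.max() >= THRESHOLD:
            print("gloss:", glosses[int(probs.argmax())], f"({probs.max():.2f})")
```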
2305.03452
A technical note on bilinear layers for interpretability
The ability of neural networks to represent more features than neurons makes interpreting them challenging. This phenomenon, known as superposition, has spurred efforts to find architectures that are more interpretable than standard multilayer perceptrons (MLPs) with elementwise activation functions. In this note, I examine bilinear layers, which are a type of MLP layer that are mathematically much easier to analyze while simultaneously performing better than standard MLPs. Although they are nonlinear functions of their input, I demonstrate that bilinear layers can be expressed using only linear operations and third order tensors. We can integrate this expression for bilinear layers into a mathematical framework for transformer circuits, which was previously limited to attention-only transformers. These results suggest that bilinear layers are easier to analyze mathematically than current architectures and thus may lend themselves to deeper safety insights by allowing us to talk more formally about circuits in neural networks. Additionally, bilinear layers may offer an alternative path for mechanistic interpretability through understanding the mechanisms of feature construction instead of enumerating a (potentially exponentially) large number of features in large models.
Lee Sharkey
2023-05-05T11:56:26Z
http://arxiv.org/abs/2305.03452v1
# A technical note on bilinear layers for interpretability ###### Abstract The ability of neural networks to represent more features than neurons makes interpreting them challenging. This phenomenon, known as superposition (Olah et al., 2020; Elhage et al., 2022), has spurred efforts to find architectures that are more interpretable than standard multilayer perceptrons (MLPs) with elementwise activation functions. In this note, I examine bilinear layers (Shazeer, 2020), which are a type of MLP layer that are mathematically much easier to analyze while simultaneously performing better than standard MLPs. Although they are nonlinear functions of their input, I demonstrate that bilinear layers can be expressed using only linear operations and third order tensors. We can integrate this expression for bilinear layers into a mathematical framework for transformer circuits (Elhage et al., 2021), which was previously limited to attention-only transformers. These results suggest that bilinear layers are easier to analyze mathematically than current architectures and thus may lend themselves to deeper safety insights by allowing us to talk more formally about circuits in neural networks. Additionally, bilinear layers may offer an alternative path for mechanistic interpretability through understanding the _mechanisms of feature construction_ instead of enumerating a (potentially exponentially) large number of features in large models. ## 1 Introduction Neural networks can learn to compute interesting and complicated functions. To a first approximation, these functions appear to be structured such that particular computational roles or representations are assigned to particular directions in neural activation space (Olah et al., 2020). We call these representations _features_. Somewhat surprisingly, neural networks are believed to be able to represent more features than they have neurons (Elhage et al., 2022; Gurnee et al., 2023). This phenomenon is known as _superposition_, since they assign features to non-orthogonal directions which 'overlap' in high-dimensional space. We are particularly interested in mechanistically understanding large language models that use the transformer architecture (Vaswani et al., 2017). This architecture mostly consists of a series of alternating attention layers (which let activations at different points in a sequence interact with each other) and MLP layers (which, at each point in the sequence, construct useful output features that are nonlinear transformations of the input features). About two thirds of the parameters in these models are in the MLP layers, which are thought to make prodigious use of superposition (Elhage et al., 2022; Gurnee et al., 2023). Nonlinear elementwise activation functions (such as ReLU (Nair and Hinton, 2010) or GeLU (Hendrycks and Gimpel, 2020)) in MLP layers can remove small amounts of interference between non-orthogonal features (Elhage et al., 2022), thus making it possible for layers to represent features in superposition without increasing the loss. Unfortunately, while the activation function is very useful for the performance of neural networks, it makes it quite difficult to analyze MLPs mathematically because the powerful tools of linear algebra can no longer be readily applied. However, it turns out that another kind of MLP layer, the bilinear layer (Shazeer, 2020; Dauphin et al., 2016; Mnih and Hinton, 2007), is much easier to analyze than MLPs with elementwise activation functions. 
Even though bilinear layers are nonlinear functions of the input vector, **bilinear layers can be described using only linear operations and third order tensors**! This nice property lets us **extend 'A Mathematical Framework for Transformer Circuits' (Elhage et al., 2021) to transformers with MLP layers as well as attention**, not just attention-only transformers. We hope that this simple change will give us a firmer analytical footing to understand large models on a deep, mechanistic level. This might eventually let us make deeper claims about their safety, since it could permit us to describe classes of circuits as mathematical objects with certain properties (as induction heads were in Elhage et al. (2021)) and to analyze learning dynamics and predict the emergence of particular kinds of circuits. It has been hypothesized (though not yet observed) that neural networks might represent a number of features that is exponential in the number of neurons in a layer (Elhage et al., 2022). If this is true, it would not bode well for our ability to mechanistically understand large neural networks, which in a sense relies on our being able to enumerate all their features. However, as discussed in the last section of this work, bilinear layers may offer **a potential alternative path to 'enumerative safety'**(Elhage et al., 2022). Instead of attempting to understand each of a large number of features, with bilinear networks we may be able to understand a smaller number of primitive features that bilinear layers use to 'construct' their (potentially exponentially) larger number of features. Thus, in the same way that we might be able to understand an exponentially large number of executed programs by understanding their code, we might be able to understand an exponentially large number of features by understanding the process by which features with certain properties are constructed. Here, we make some preliminary steps toward understanding the mechanisms of feature construction in bilinear layers; we show that in bilinear layers, **output features are constructed through sums of pairwise interactions between input features**, whereas, in standard MLPs, output features are constructed using all-to-all interactions between input features that appear not to be decomposable. ## 2 Bilinear layers ### Introducing bilinear layers A standard MLP layer consists of an input vector \(x\), a weight matrix \(W\) and an elementwise nonlinear activation function, \(\sigma\) such as the ReLU function (and an optional bias term which is omitted for notational simplicity). The input vector is linearly transformed by the weight matrix to yield the pre-activation vector \(Wx\), to which the activation function is applied elementwise: \[MLP_{ReLU}(x)=\sigma(Wx)\] Bilinear layers are slightly different. They take the form \[MLP_{Bilinear}(x)=(W_{1}x)\odot(W_{2}x),\] where \(\odot\) denotes elementwise multiplication. They have two weight matrices, which each separately transform the input vector. They were introduced in different forms by Dauphin et al. (2016) and Mnih and Hinton (2007). They were later studied by Shazeer (2020), who showed that bilinear layers, when used as the MLP layer in transformer language models, are surprisingly competitive1: They are at least as performant per parameter as standard MLPs with ReLU or GELU activation functions and only slightly less performant than state-of-the-art SwiGLU layers2.
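A minimal numerical sketch of the two layer types just defined is given below (not from the original note; dimensions are arbitrary and biases are omitted, as in the text). The final check also previews the third-order-tensor form \(B_{ijk}=W_{1(ij)}W_{2(ik)}\) derived in the next subsection.

```python
# Minimal sketch of a ReLU MLP layer vs. a bilinear layer, plus a numerical
# check that the bilinear output equals a double contraction of x with a
# third-order tensor B with entries B_ijk = W1[i, j] * W2[i, k].
import torch

d_in, d_hidden = 16, 64
W  = torch.randn(d_hidden, d_in)    # standard MLP weight
W1 = torch.randn(d_hidden, d_in)    # bilinear weights
W2 = torch.randn(d_hidden, d_in)
x  = torch.randn(d_in)

mlp_relu     = torch.relu(W @ x)    # sigma(Wx)
mlp_bilinear = (W1 @ x) * (W2 @ x)  # (W1 x) elementwise-times (W2 x)

B = torch.einsum("ij,ik->ijk", W1, W2)       # B_ijk = W1_ij W2_ik
via_tensor = torch.einsum("ijk,j,k->i", B, x, x)
assert torch.allclose(mlp_bilinear, via_tensor, atol=1e-4)
```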
Footnote 1: At least for the model size they explored, which was approximately 120M parameters, a similar size to GPT2-small (Radford et al., 2019). To my knowledge, it remains to be determined whether bilinear layers continue to perform competitively at larger scales. Footnote 2: A SwiGLU layer is equivalent to a bilinear layer but where an elementwise Swish activation function (Ramachandran et al., 2017) is applied to \(W_{1}x\). ### Describing bilinear layers using only linear operations and third order tensors The lack of an elementwise activation function in bilinear layers makes them mathematically very simple. In fact, despite being nonlinear functions of \(x\), they can be expressed using only linear operations and third order tensors. First, we'll define the _tensor inner product_ (See appendix A for some examples of tensor inner products which may help build intuitions). Unlike the inner product between vectors, the tensor inner product needs to define the axes along which the inner product is taken. The tensor inner product is thus defined as \[\mathbf{U}^{(n)}\cdot_{jk}\mathbf{V}^{(m)}=\mathbf{T}^{(n+m-2)}\] where \[\mathbf{T}_{\gamma_{1}\cdots\gamma_{j-1}\gamma_{j+1}\cdots\gamma_{n}\gamma_{1}^{\prime}\cdots\gamma_{k-1}^{\prime}\gamma_{k+1}^{\prime}\cdots\gamma_{m}^{\prime}}\quad=\quad\sum_{\beta}\mathbf{U}_{\gamma_{1}\cdots\gamma_{j-1}\beta\gamma_{j+1}\cdots\gamma_{n}}\mathbf{V}_{\gamma_{1}^{\prime}\cdots\gamma_{k-1}^{\prime}\beta\gamma_{k+1}^{\prime}\cdots\gamma_{m}^{\prime}} \tag{1}\] For the tensor inner product between \(n^{\text{th}}\) order tensor \(\mathbf{U}\) and \(m^{\text{th}}\) order \(\mathbf{V}\) to be defined, the dimension of axis \(j\) of tensor \(\mathbf{U}\) must be the same dimension as axis \(k\) of tensor \(\mathbf{V}\). Now we show how bilinear layers can be expressed using linear operations and third order tensors. Suppose we want to find the third order tensor \(B\) such that \[(W_{1}x)\odot(W_{2}x)=x\cdot_{12}B\cdot_{21}x,\] if it exists. We'll first identify the terms in the vector on the left hand side, \[\begin{split}((W_{1}x)\odot(W_{2}x))_{i}&=(\sum_{j}W_{1(ij)}x_{j})(\sum_{k}W_{2(ik)}x_{k})\\ &=\sum_{j}\sum_{k}W_{1(ij)}x_{j}W_{2(ik)}x_{k}\end{split} \tag{2}\] Now let's express the terms of the third order tensor \(B\) using tensor inner products.
We have, \[\begin{split}(x\cdot_{12}B\cdot_{21}x)_{i}&=\sum_{j}x_{j}\sum_{k}x_{k}B_{ijk}\\ &=\sum_{k}x_{k}\sum_{j}x_{j}B_{ijk}\\ &=\sum_{j}\sum_{k}x_{j}x_{k}B_{ijk}.\end{split} \tag{3}\] Note that it doesn't matter whether we take the tensor inner product between \(B\) and \(x\) on the \(2\)nd or \(3\)rd axis first, which is why \(x\cdot_{12}B\cdot_{21}x\) is associative, i.e. \((x\cdot_{12}B)\cdot_{21}x=x\cdot_{12}(B\cdot_{21}x)\). We'll use this property when extending a Mathematical Framework for Transformer Circuits (Elhage et al., 2021) (Section 2.3). Comparing the terms from equations 2 and 3, we can see they are equal if \(B_{ijk}=W_{1(ij)}W_{2(ik)}\). Thus, we can construct the tensor \(B\) using the bilinear layer weights \(W_{1},W_{2}\in\mathbb{R}^{m\times n}\) and a third order tensor \(Z\) such that \(Z_{ijk}=1\) where \(i=j=k\) and \(0\) otherwise, because \(B=W_{1}\cdot_{12}Z\cdot_{21}W_{2}\). One helpful way to think about the \(m\times n\times n\) tensor \(B\) is that the column vector \(B_{:jk}\) consists of the elementwise multiplication of the \(j^{\text{th}}\) column of \(W_{1}\) with the \(k^{\text{th}}\) column of \(W_{2}\). ### Extending a Mathematical Framework for Transformer Circuits When Elhage et al. (2021) analyzed the equations for 1- and 2-layer attention-only transformers, it offered interesting insights on the structure of these models.
It helped to reveal QK- and OV-circuits, induction heads, and virtual attention heads, which formed the basis of much interesting follow-up work in interpretability (Olsson et al., 2022; Wang et al., 2022). However, one of the biggest shortcomings of Elhage et al. (2021) was that the transformers they analyzed had no MLP layers. MLPs comprise around two thirds of all parameters in standard transformer language models and are thought to be necessary for a great deal of interesting behaviour (Geva et al., 2021). The reason MLPs were excluded was that they could not be linearised, which made their analysis intractable. But, as we've seen, it is possible to describe bilinear layers using only linear operations. This means we can write linearized expressions for transformers with both attention and MLP layers! It's important to stress that the MLPs we achieve this with are close to state of the art (Shazeer, 2020). This opens up the possibility that we may be able to formally analyze some very capable language models. In this section, we'll identify the expression for a one-layer transformer with attention and (bilinear) MLPs. The expressions for two- and N-layer transformers are left as lengthy exercises for the reader. We'll update our notation in order to be consistent with Elhage et al. (2021), with which we expect readers to be familiar. The input to the language model is a sequence of tokens \(t\) of length \(n_{\text{context}}\). These are embedded by the \(d_{\text{model}}\times n_{vocab}\) embedding matrix \(W_{E}\). The token embeddings \(x_{0}=W_{E}t\) (which have shape \(n_{\text{context}}\times d_{\text{model}}\)) become the residual stream, which is passed through multiple residual blocks, each consisting of a multihead attention layer and an MLP layer, and each added back into the residual stream. Finally, the residual stream is unembedded by the unembedding matrix \(W_{U}\) to make the token logits. In Elhage et al. (2021), they assumed MLPs that had an elementwise GeLU activation function, which are very difficult to analyze. Here, we'll instead use bilinear layers. Define the bilinear MLP layer as \[F(x)=W_{O}^{m}(x\cdot_{12}W_{I_{1}}^{m}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}\cdot_{21}x) \tag{4}\] where \(W_{O}^{m}\) is the \(d_{\text{model}}\times d_{\text{mlp}}\) output weight matrix for the MLP layer and \(W_{I_{1}}^{m},W_{I_{2}}^{m}\) are the two \(d_{\text{mlp}}\times d_{\text{model}}\) input weight matrices for the bilinear layer. Using the path expansion trick described by Elhage et al. (2021), the input to the MLP in a one layer transformer can be described as \[\begin{split} x_{1}&=(Id+\sum_{h\in H}A^{h}\otimes W_{OV}^{h})\cdot W_{E}t\\ &=(W_{E}+\sum_{h\in H}A^{h}\otimes W_{OV}^{h}W_{E})t\end{split} \tag{5}\] where \(W_{OV}^{h}=W_{O}^{h}W_{V}^{h}\) and \(A^{h}=\text{softmax*}(t^{T}\cdot W_{E}^{T}W_{QK}W_{E}\cdot t)\) in which softmax* is the softmax function with autoregressive masking and \(W_{QK}=W_{Q}^{h\top}W_{K}^{h}\). Putting our definition of \(x_{1}\) into our definition of \(F(\cdot)\) we get \[\begin{split}F(x_{1})=W_{O}^{m}(&((W_{E}+\sum_{h\in H}A^{h}\otimes W_{OV}^{h}W_{E})t)\cdot_{12}W_{I_{1}}^{m}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}\cdot_{21}\\ &((W_{E}+\sum_{h\in H}A^{h}\otimes W_{OV}^{h}W_{E})t))\end{split} \tag{6}\] Note that for arbitrary matrices \(M\), \(M^{\prime}\), it's true that \(M\cdot_{12}M^{\prime}=M^{\top}M^{\prime\top}\).
So we transpose the left hand bracket and \(W_{I_{1}}^{m}\) and move the weight matrix into the brackets: \[\begin{split}=W_{O}^{m}(&(t^{\top}(W_{E}^{\top}W_{I_{1}}^{m\top}+\sum_{h\in H}A^{h}\otimes W_{E}^{\top}W_{OV}^{h\top}W_{I_{1}}^{m\top}))\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}\cdot_{21}\\ &((W_{E}+\sum_{h\in H}A^{h}\otimes W_{OV}^{h}W_{E})t))\end{split} \tag{7}\] And next, noting that \(M\cdot_{21}M^{\prime}=MM^{\prime}\), we move \(W_{I_{2}}^{m}\) into the right hand brackets: \[\begin{split}=W_{O}^{m}(&(t^{\top}(W_{E}^{\top}W_{I_{1}}^{m\top}+\sum_{h\in H}A^{h}\otimes W_{E}^{\top}W_{OV}^{h\top}W_{I_{1}}^{m\top}))\cdot_{12}Z\cdot_{21}\\ &((W_{I_{2}}^{m}W_{E}+\sum_{h\in H}A^{h}\otimes W_{I_{2}}^{m}W_{OV}^{h}W_{E})t))\end{split} \tag{8}\] Next, we move the \(Z\) tensor into the left hand brackets \[\begin{split}=W_{O}^{m}(&(t^{\top}(W_{E}^{\top}W_{I_{1}}^{m\top}\cdot_{12}Z+\sum_{h\in H}A^{h}\otimes W_{E}^{\top}W_{OV}^{h\top}W_{I_{1}}^{m\top}\cdot_{12}Z))\cdot_{21}\\ &((W_{I_{2}}^{m}W_{E}+\sum_{h\in H}A^{h}\otimes W_{I_{2}}^{m}W_{OV}^{h}W_{E})t))\end{split} \tag{9}\] And combining both the left hand and right hand brackets, we get the expression for a bilinear feedforward layer \[\begin{split}=\ &W_{O}^{m}(t^{\top}(\\ &\quad W_{E}^{\top}W_{I_{1}}^{m\top}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}W_{E}+\\ &\quad\sum_{h\in H}A^{h}\otimes(W_{E}^{\top}W_{OV}^{h\top}W_{I_{1}}^{m\top}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}W_{E})+\\ &\quad\sum_{h\in H}A^{h}\otimes(W_{E}^{\top}W_{I_{1}}^{m\top}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}W_{OV}^{h\top}W_{E})+\\ &\quad\sum_{h\in H}\sum_{h^{\prime}\in H}A^{h}A^{h^{\prime}}\otimes(W_{E}^{\top}W_{OV}^{h\top}W_{I_{1}}^{m\top}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}W_{OV}^{h^{\prime}\top}W_{E})\\ &)t)\end{split} \tag{10}\] We can analyze each of the terms in this equation. The first summand expresses a direct path from the token embedding matrix straight to the MLP without passing through any attention heads. The second summand expresses the components of the token embeddings that pass through the attention head and then pass into only the first MLP input matrix. The third summand is similar, but the embeddings pass through the attention heads and into the second MLP input matrix. The last summand corresponds to token embeddings that pass through the attention heads and then into both the first and second MLP input matrices. With this expression for the MLP layer, we can now express the path expansion for the full one layer transformer, which is simply the above expression for \(F(x)\) added to the token embedding-unembedding pathway (the 'direct pathway') and the pathways through the attention heads: \[\begin{split}T(t)=\ &(\mathit{Id}\otimes W_{U}W_{E})t+\\ &\sum_{h\in H}(A^{h}\otimes W_{U}W_{OV}^{h}W_{E})t+\\ &W_{O}^{m}(t^{\top}(\\ &\quad W_{E}^{\top}W_{I_{1}}^{m\top}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}W_{E}+\\ &\quad\sum_{h\in H}A^{h}\otimes(W_{E}^{\top}W_{OV}^{h\top}W_{I_{1}}^{m\top}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}W_{E})+\\ &\quad\sum_{h\in H}A^{h}\otimes(W_{E}^{\top}W_{I_{1}}^{m\top}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}W_{OV}^{h\top}W_{E})+\\ &\quad\sum_{h\in H}\sum_{h^{\prime}\in H}A^{h}A^{h^{\prime}}\otimes(W_{E}^{\top}W_{OV}^{h\top}W_{I_{1}}^{m\top}\cdot_{12}Z\cdot_{21}W_{I_{2}}^{m}W_{OV}^{h^{\prime}\top}W_{E})\\ &)t)\end{split} \tag{11}\] ## 3 Understanding feature construction in bilinear layers One of the problems we may face when trying to mechanistically understand neural networks is that they may be able to represent an exponential number of features. If this hypothesis turns out to be true, then enumerating all the features in large networks may become computationally intractable.
One analogy that gives us hope is discussed by Olah (2022): Even though the input space to a particular computer program might be exponentially large, we can still say that we understand that exponentially large space of executed programs if we understand its code. In the same way, if we can understand the process by which features with certain properties are constructed from simpler primitives, we may be able to overcome the issue of having to understand an exponential number of features. In this section, which is more speculative than earlier sections, I outline why this hopeful vision seems very hard to realise in standard MLPs, but seems quite possible in bilinear layers. ### Feature construction in standard MLPs is non-decomposable Suppose we have a standard MLP layer \(MLP_{ReLU}(x)=\sigma(Wx)\) with a ReLU activation \(\sigma\) (where the bias term is omitted). Also suppose that the input vector \(x\in X\) consists of sparse linear combinations of input features \(x=D^{I\top}a^{I}\), where \(D^{I}\) is a dictionary of input features represented as an \(n_{\text{features}}\times d_{\text{input}}\) matrix and \(a^{I}\in A^{I}\) is a sparse vector of coefficients (with values in \([0,\infty)\) of size \(n_{\text{features}}\)) such that the dataset \(X\) can be reconstructed from the features and their coefficients, \(X=D^{I\top}A^{I}\). Similarly suppose there is a dictionary of output features for this layer \(D^{O}\) and that sparse linear combinations of those output features describe the activations observed in a large representative sample from \(p_{x}(MLP_{ReLU}(x))\), i.e. \[MLP_{ReLU}(x)=\sigma(Wx)=\sigma(W(D^{I\top}a^{I}))=D^{O\top}a^{O} \tag{12}\] Therefore \(D^{I}\) and \(D^{O}\) are overcomplete bases3 for the input space \(X\) and output space \(MLP_{ReLU}(X)\) respectively. Footnote 3: In linear algebra, a basis of a vector space is a set of vectors from which every vector in that space can be expressed as a linear combination. An _overcomplete_ basis is a basis where at least one element of the basis set can be removed yet the set remains a basis. One way to view the process of feature construction is to say that output features \(D^{O}\) are all implicitly represented in superposition in the weight matrix \(W\) and that the nonlinearity, when applied elementwise to the preactivation vector \(Wx\), modifies a set of _default output features_ in order to select particular output features. One candidate for the default output features is the left singular vectors of \(W\), i.e. the columns of a matrix \(U\) (We'll discuss other candidates in the next section). We can thus introduce a _modifier vector_ \(m(x)\) that is a function of \(x\) such that \[MLP_{ReLU}(x)=m(x)\odot Wx=(m(x)\odot U)\Sigma V^{\top}x=D^{O\top}a^{O}.\] Therefore we can view linear combinations of the output features (namely \(D^{O\top}a^{O}\)) as consisting of linear combinations of modified default output features (namely \((m(x)\odot U)\Sigma V^{\top}x\)). With a ReLU activation function, \(m(x)\) is a binary vector of ones and zeros: \(m(x)_{i}=1\) where \(\sigma(Wx)_{i}>0\) and \(m(x)_{i}=0\) otherwise. In general, for vanilla MLPs with any elementwise activation function \(\sigma\): \[m(x)_{i}=\frac{\sigma(Wx)_{i}}{(Wx)_{i}} \tag{13}\] It is the modifier vector that 'selects' from the features represented in superposition in \(W\), or, equivalently, 'contracts' them by modifying the default output features.
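A toy numerical illustration of Equation 13 is given below (a sketch with arbitrary sizes, not from the original note). It makes concrete the point discussed next: for a ReLU MLP, the modifier vector is just the ratio of post- to pre-activations, so obtaining it already requires running the nonlinear layer.

```python
# Toy illustration of Eq. (13): m(x)_i = sigma(Wx)_i / (Wx)_i for a ReLU MLP,
# computed by actually running the layer. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W, x = rng.normal(size=(8, 5)), rng.normal(size=5)

pre  = W @ x                   # pre-activations Wx
post = np.maximum(pre, 0.0)    # ReLU(Wx)
m = np.divide(post, pre, out=np.zeros_like(pre), where=pre != 0)  # Eq. (13), guarding 0/0

assert np.allclose(m * pre, post)  # m(x) * Wx reproduces the layer output
```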
If we could understand how \(m(x)\) is computed in terms of input features \(D^{I}\), then we could begin to understand why particular output features \(D^{O}\) are constructed and not others. Unfortunately, in vanilla MLPs, the only way to calculate the value of \(m(x)\) in general is Equation 13. In other words, to get the value of the modifier vector, we first have to pass the input through the network to observe what the post-activations (the numerator) and pre-activations are (the denominator) to get \(m(x)\). But this is circular: We would have to already understand the nonlinear computation in the numerator in order to understand how output features are constructed. This framing doesn't simplify anything at all! Feature construction in standard MLPs can thus be considered 'non-decomposable'. ### Feature construction in bilinear layers In mechanistic interpretability, one of the major assumptions that we need to make is that we can interpret linear transformations of almost arbitrary dimensionality. They may still be large objects, but linear transformations are as simple as transformations get. For large linear transformations with non-sparse coefficients, we may have to spend more time studying them or prioritize analysis of the largest coefficients. But overall we assume that we can understand them to a satisfying extent. If we can't, then the whole business of mechanistic interpretability would be doomed even for large linear regressions, never mind deep neural networks. Granting this assumption, if we could describe the modifier vector \(m(x)\) in the previous section as a linear function of input features (instead of a nonlinear one), then we could begin to understand how a layer constructs output features. Fortunately, in bilinear layers the modifier vector is a linear function of the input! \[MLP_{Bilinear}(x)=m(x)\odot(W_{2}x)\qquad\text{where}\qquad m(x)=W_{1}x,\] We'll say that the modifier vector modifies the default output features represented in \(W_{2}\) to construct output features. We still need to define what the default output feature directions and the modifier feature directions are concretely. Ultimately this choice will always be somewhat arbitrary because linear transformations do not imply any particular privileged basis. As before, perhaps the most obvious candidates for the default output feature directions are the left singular vectors of \(W_{2}\). But the largest directions in the output activations may not necessarily have a strong relationship with the weights because the output directions depend on both the weights and the input directions. Therefore, we may be able to do better than the left singular vectors of \(W_{2}\) by incorporating the data distribution into the choice of bases. One way might be to use the right singular vectors of \(MLP_{Bilinear}(X)\) or \(W_{2}X\). Another - perhaps better - way is to identify default output features that are maximally statistically independent. This may be better because statistically independent directions tend to be activated somewhat sparsely and therefore might be better default output features than singular vectors, since fewer will be significantly 'activated' at any one time. We could achieve this by performing linear ICA (Hyvärinen and Oja, 2000) on the preactivations \(W_{2}X\). This would yield a matrix \(U^{(2)}\), which is the set of vectors that are maximally statistically independent directions of the output dataset while still being a basis of it.
We can then use multiple linear regression to find the corresponding matrix \(V^{(2)\top}\) such that \(W_{2}=U^{(2)}V^{(2)\top}\). Slightly abusing terminology, we'll call \(U^{(2)}\) and \(V^{(2)\top}\) the left and right independent components of \(W_{2}\) respectively. We can define the modifier features using the same procedure, identifying the left and right independent components of \(W_{1}=U^{(1)}V^{(1)\top}\). Armed with such features, **we may be able to describe feature construction in bilinear networks in terms of interactions between two relatively small, relatively sparse sets of vectors (the default output features and the modifier features)**. We hope we can use this approach to tell mechanistic stories for how features with certain properties are constructed by the network. We might be able to do this by understanding the functional properties of the default output features and how modifier features tend to modify them. Optimistically, mechanistic stories like these may let us understand an exponentially large space of features. Whether or not such an approach will work is ultimately an empirical question, which we leave for future work. In the next section, we explore the mathematical simplicity of feature construction in bilinear layers, which gives us some reason to suspect that feature construction may be understandable. 5 Footnote 5: We can make further modifications to the modifier features and default output features that assist either the intuitiveness or interpretability of bilinear networks. I'll note them here but won't explore them further in this work. **Improving intuitiveness:** If, during training, we constrain \(W_{1}x\) to be low \(L_{2}\) norm and add the one vector as bias, the modifier vector would always be close to the one vector. In other words: \(m(x)=W_{1}x+\mathbf{1}\) where \(||W_{1}x||\approx 0\). This would mean that modifier features simply cause slight modifications of default output features. This addition would also help us make analysis prioritization decisions later (see section 3.4), but fundamentally the modification isn't necessary. This addition also opens up an experimental avenue (which we won't explore here): By imposing more or less regularization on the norm, it allows us to control the amount of superposition a network is able to do. This would be an interesting experimental lever to pull, since it would allow us to directly test how much a network's performance is due to superposition. **Improving interpretability:** We could choose an \(L_{1}\) penalty for the norm constraint on the modifier vector (instead of the \(L_{2}\) norm); or we could constrain \(W_{1}\) to be low rank; alternatively, we could quantize the output of \(W_{1}x\) in order to put hard limits on the amount of superposition a network can do. ### Feature construction in bilinear layers decomposes into a sum of pairwise interactions between input features Not all layers have the same 'amount' of nonlinearity. Some are more nonlinear than others. Here we characterize the amount of nonlinearity layers can have, which sheds light on how bilinear layers differ from standard MLPs. Let \(C(d_{i}^{I},d_{j}^{O},a^{I})\) quantify the contribution of input feature \(d_{i}^{I}\in D^{I}\) to the activation (or 'selection') of output feature \(d_{j}^{O}\in D^{O}\). We then have the following (non-comprehensive) set of degrees of nonlinearity. * **Linear**: Fully linear layers have no nonlinearity.
There are therefore no interactions between input features during feature construction (since there is no modifier vector). The amount that input feature \(d_{i}^{I}\) contributes to the selection of output feature \(d_{j}^{O}\) is quantified simply as \(C(d_{i}^{I},d_{j}^{O},a^{I})=[Wd_{i}^{I}a_{i}^{I}]^{\top}d_{j}^{O}\), which is just the inner product between the preactivation caused by that input feature and the output feature. * **Additively pairwise nonlinear**: In this case, output features are determined by a sum of pairwise interactions between features. For example, if input features \(d_{1}^{I},d_{2}^{I},d_{3}^{I}\) are active in the input, the contribution of \(d_{i}^{I}\) (where \(i\in\{1,2,3\}\)) to each output feature can be described as a sum of pairwise nonlinear interactions, \(C(d_{i}^{I},d_{j}^{O},a^{I})=[f(d_{i}^{I};d_{1}^{I},a_{1}^{I})+f(d_{i}^{I};d_{2}^{I},a_{2}^{I})+f(d_{i}^{I};d_{3}^{I},a_{3}^{I})]^{\top}d_{j}^{O}\), where \(f(\cdot)\) is some nonlinear function of the two interacting features. * **Fully nonlinear**: The contribution an input feature makes to the selection of an output feature depends on every other feature in a way that can't be decomposed into a sum. The contribution of \(d_{i}^{I}\) to each output feature can only be described as an all-to-all nonlinear interaction between input features that cannot be broken down into linear components: \(C(d_{i}^{I},d_{j}^{O},a^{I})=g(d_{i}^{I};d_{1}^{I},d_{2}^{I},d_{3}^{I},a^{I})^{\top}d_{j}^{O}\), where \(g(\cdot)\) is some (non-additively-pairwise) nonlinear function. The task of understanding additively pairwise nonlinearity is easier than full nonlinearity because we can study each pairwise interaction between features and sum them up. Understanding full nonlinearity is significantly harder because there is no way to linearly decompose the function \(g\). Sadly, standard MLPs are fully nonlinear. However, we show that bilinear layers are additively pairwise nonlinear, making them significantly easier to analyze. Suppose the input to a bilinear layer \(x^{\prime}\) consists of a linear combination of two input features \(d_{1}^{I}\) and \(d_{2}^{I}\), i.e. \(x^{\prime}=a_{1}d_{1}^{I}+a_{2}d_{2}^{I}\). Using the re-expression of the bilinear layer, inputting \(x^{\prime}\) into equation 2 yields \[\begin{split}(a_{1}d_{1}+a_{2}d_{2})\cdot_{12}B\cdot_{21}(a_{1}d_{1}+a_{2}d_{2})=\\ a_{1}d_{1}\cdot_{12}B\cdot_{21}a_{1}d_{1}+\\ a_{1}d_{1}\cdot_{12}B\cdot_{21}a_{2}d_{2}+\\ a_{2}d_{2}\cdot_{12}B\cdot_{21}a_{1}d_{1}+\\ a_{2}d_{2}\cdot_{12}B\cdot_{21}a_{2}d_{2}\end{split} \tag{14}\] More generally, for arbitrary linear combinations of input features: \[(W_{1}x)\ \odot\ (W_{2}x)\ \ =\ \ (\sum_{i\in R}a_{i}d_{i})\ \cdot_{12}B\ \cdot_{21}(\sum_{i\in R}a_{i}d_{i})\ \ =\ \ \sum_{i\in R}\sum_{j\in R}a_{i}a_{j}d_{i}\ \cdot_{12}B\ \cdot_{21}d_{j} \tag{15}\] where \(R\) is the set of indices of nonzero feature coefficients. Equation 15 shows that, although all features interact to determine the output features, these interactions can be understood as a sum of pairwise interactions between features. Hence bilinear layers are only additively pairwise nonlinear. We hope that this simplicity can be leveraged to tell simple stories about how particular input features (hopefully sparsely) activate particular default output features and modifier features.
Then, if we understand the functional properties of those default output features and the kinds of functional modifications that those modifier features make, then we may be able to understand the properties of the output features. ### How should we study feature construction? At this early stage, it's not totally clear how best to analyze the structure of bilinear networks. What is clear is that doing so will be easier than analyzing fully nonlinear computations, since we're simply studying the structure of tensors, which is a relatively well understood domain in mathematics. In advance of empirical results, I speculate on a few non-mutually exclusive ways to proceed in this section. 1. **Large coefficients of \(B\)**: As discussed at the beginning of section 2, when interpreting any linear transformation, there may be so many coefficients that it may be necessary to prioritize our analyses by studying only the largest coefficients. One way to leverage this is simply to study the largest coefficients of \(B\) and how they would influence interactions between commonly observed pairs or groups of input features. 2. **Tensor decomposition**: Building on (1), we could perform Higher Order Singular Value Decomposition (HOSVD) and study the structure of the most influential ranks of the tensor. 3. **Maximally modified default output features**: Recall that one way to view the bilinear network is that one side of the elementwise multiplication modifies the linear transformation on the other side. This suggests a way to prioritize the analysis of how particular features are constructed: For each input feature, we should prioritize analysis of the most modified default output features. Concretely, define \[U^{(2,d_{i})}:=d_{i}^{\top}W_{1}\cdot_{12}Z\cdot_{21}U^{(2)}.\] This is the set of output features caused by the modifications that input feature \(d_{i}\) makes to default output feature \(U^{(2)}\). Then, for each input feature \(d_{i}\) we should study the top k most modified default output features, i.e. \[\mbox{arg top-k}(||U^{(2)}_{:,l}-U^{(2,d_{i})}_{:,l}||) \tag{16}\] This would let us focus on the most significant modifications that a given input feature makes to the default output features. But we can prioritize our analyses further than that. The modifications that an input feature makes to the default output features don't matter unless the default output feature is actually activated by that feature or some other feature that is simultaneously present in \(x\). Therefore we can identify pairs of features, \((d_{l},d_{m})\) that are correlated (or that have often appeared at the same time) and where \(U^{(2,d_{l})}\) is both one of the default output features that is most modified by \(d_{m}\) and simultaneously one of the default output features that is most activated by \(d_{m}\). ## 4 Conclusion The simplicity of bilinear layers makes formal analysis much easier than for standard MLPs. One of the most important things bilinear layers give us is analysable expressions for performant transformers with both attention heads and MLP layers. I hope that this will eventually let us formally analyze the structure of the representations of large language models in this class. This might reveal interesting features and circuits in a similar way to how the mathematical framework for attention-only transformers introduced by Elhage et al. (2021) helped to reveal QK- and OV-circuits, induction heads, and virtual attention heads.
Curiosity aside, an expression for models with bilinear layers may let us make stronger claims about safety. For instance, it may let us more directly compare circuit structure in different models, and enable us to make inferences about model behaviour without necessarily running the model. Another potential research direction is analyzing learning dynamics. Models with bilinear layers seem like they might lend themselves to mathematical analysis in a similar fashion to the deep linear layers studied by Saxe et al. (2013). Learning dynamics may be important for safety, since understanding them may be necessary to be able to predict dangerous model behaviors before they emerge. Lastly, and most speculatively, bilinear layers offer the potential to understand the mechanisms of feature construction, which may be necessary for understanding a potentially exponentially large number of features represented in language models. There is still much empirical work to do to evaluate whether intuiting the mechanisms of feature construction is possible. Overall, I hope that this note might pique the interest of the interpretability community by highlighting an architecture that is much gentler on the intuitions than standard MLPs. ## Acknowledgements I thank Trenton Bricken for helpful discussions that initiated my search for layers that could be described in terms of higher order tensors. I thank Beren Millidge, Sid Black, and Dan Braun for helpful discussions and detailed feedback on this work.
2304.14207
Entropy from entangled parton states and high-energy scattering behavior
The relation between the gluon density in a hadron and entanglement entropy can shed a new light on the high energy scattering behavior of hadrons. Using the holographic light-front QCD framework the growth above the classical geometric cross section is directly related to the increase of the internal quantum entropy from the entangled parton distribution in hadrons. A rather consistent picture emerges from the scale dependence of the Pomeron from the QCD evolution of the gluon distribution function $g(x, \mu)$, the rising of the integrated cross section in photoproduction of vector mesons, the deep inelastic scattering (DIS) experiments at HERA, hadron multiplicity and quantum entropy. We also point out a possible analogy between parton entanglement entropy and the black-hole entropy of Bekenstein and Hawking.
Hans Gunter Dosch, Guy F. de Teramond, Stanley J. Brodsky
2023-04-27T14:12:57Z
http://arxiv.org/abs/2304.14207v4
# Entropy from entangled parton states ###### Abstract The relation between the gluon density in a hadron and entanglement entropy can shed a new light on the high energy scattering behavior of hadrons: The growth above the classical geometric cross section is directly related to the increase of the internal quantum entropy from the entangled parton distribution in hadrons. A rather consistent picture emerges from the scale dependence of the Pomeron from the QCD evolution of the gluon distribution function \(g(x,\mu)\), the rising of the integrated cross section in photoproduction of vector mesons, the deep inelastic scattering (DIS) experiments at HERA, hadron multiplicity and entropy. We also point out a possible analogy between parton entanglement entropy and the black-hole entropy of Bekenstein and Hawking. + Footnote †: preprint: SLAC-PUB-17727 ## I Introduction In their analysis of entanglement at the subnuclear level, Kharzeev and Levin [1; 2] have drawn attention to entropy. The quantum entropy of any bound state is zero, but Kharzeev and Levin considered the entropy of the partons resolved in a deep inelastic scattering (DIS) experiment \[S_{DIS}=\ln N(x,Q^{2}), \tag{1}\] where \(N(x,Q^{2})\) is the number of partons in a hadron with longitudinal light-front momentum fraction \(x\) of the struck parton in the target hadron and \(Q^{2}=-q^{2}>0\) is the momentum transfer. The DIS entropy \(S_{DIS}\) is the logarithm of the number of degrees of freedom in the DIS measurement. It represents the entropy of entanglement between the proton components probed by deep inelastic scattering and the rest of the proton: the number of produced final-state spectator quark and gluon partons. Since the partons cannot be isolated as asymptotic states, the number of partons is not a directly observable quantity, but depends on the virtuality scale of the process \(Q^{2}\), and therefore the quantum DIS entropy is not directly observable. The quantum (von-Neumann) entropy of a state described by the density operator \(\rho\) is given in analogy to the classical (Boltzmann) entropy by the expectation value of the trace of the statistical operator \[S_{Q}=-tr[\rho\,\ln\rho]=-\sum_{i}p_{i}\ln p_{i}, \tag{2}\] where the \(p_{i}\) are the eigenvalues of \(\rho\); they give the probability to find the system in the state \(|i\rangle\). A pure state \(|\psi\rangle\), like an elementary particle, has therefore the quantum entropy 0. Under the assumption that a hadron state \(|\psi\rangle\) consists of \(N\) interacting constituents and that these partons do not factorize into hadronic substates, the partons in the hadron are entangled. In deep inelastic scattering (DIS) a measurement projects only on a single parton \(|j\rangle\). The pure state before the measurement is thus, after a measurement, a mixture described by the statistical operator \(\rho=\sum_{j=1}^{N}\,p_{j}\,|j\rangle\langle j|\), where \(p_{j}\) is the probability of hitting the state \(|j\rangle\); that is, after measurement, which traces out the unobserved components of the state, the entropy is given by the expression (2). For very slow partons the gluons are dominant, and we restrict ourselves to this region in order to treat all partons on equal footing. Therefore, we have equal probability for each parton to be hit and thus one expects \(p_{j}=1/N(x)\), hence \(S_{Q}=\ln N(x)\), where \(N(x)\) is the number of partons (gluons) with longitudinal momentum fraction \(x\), and the result is (1).
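A quick numerical illustration of Eqs. (1)-(2) is given below (a sketch for intuition only, not part of the original analysis): for \(N\) equally probable parton states, \(p_{j}=1/N\), the von Neumann entropy \(-\sum_{j}p_{j}\ln p_{j}\) reduces to \(\ln N\).

```python
# Check that the von Neumann entropy of an equiprobable N-state mixture is ln N.
import numpy as np

for N in (2, 10, 100):
    p = np.full(N, 1.0 / N)
    S_Q = -np.sum(p * np.log(p))   # Eq. (2) with p_j = 1/N
    assert np.isclose(S_Q, np.log(N))
    print(f"N = {N:4d}:  S_Q = {S_Q:.4f},  ln N = {np.log(N):.4f}")
```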
In this letter we discuss the consequences of the relation of the DIS entropy introduced in Ref. [1] with the determination of the gluon distribution given in Ref. [3] from the high energy scattering of hadrons and holographic light front QCD [4; 5; 6]. This specific connection allows us to relate the increase above the proton-proton geometric cross section to the number of entangled partons in the proton probed in the DIS process. Furthermore, using the light-front holographic framework and the QCD evolution results from Ref. [7], we show that the scale dependence of the gluon distribution produces the Pomeron-dominated energy dependence of the DIS cross section. It also accounts for the logarithmic dependence on the observational scale \(\mu\), which, for our purposes, can be identified with the photon virtuality \(\mu^{2}=Q^{2}\) in the DIS measurement \(\gamma^{*}(q^{2})+p\to X\); this is an essential prediction of the scale-dependent Pomeron intercept discussed in Ref. [7]. In Refs. [1; 2; 8; 9] it was further assumed that the entropy of the final hadronic state can, under certain kinematical conditions, be equated to the DIS entropy. This assumption created delicate experimental and theoretical questions, since parton states are not asymptotic states and the final state, after a DIS event, consists of hadrons which are supposed to be formed after the hadronization of the partons. By examining the hadron multiplicity in the final state, we will briefly discuss how this problem can also be addressed using the concept of a unique scale-dependent Pomeron introduced in Ref. [10], and recently discussed in Ref. [7] in the framework of holographic light-front QCD (HLFQCD). ## II High energy scattering and entanglement entropy ### High energy behavior of hadron cross sections One of the most important concepts and tools in high energy scattering is the notion of Regge poles. Originally introduced for scattering in quantum mechanical potential theory by T. Regge [11], it was applied to particle physics by Chew and Frautschi [12]. The guiding principle of the latter authors was the notion of duality. This principle states, roughly speaking, that the same fields which create the interaction between hadrons are the quantum fields which show up as particles or resonances. It follows from rather general principles of quantum field theory and is therefore independent of the specific dynamics. In lowest order Regge theory of particle scattering (one pole exchange) the dependence of the total cross section on the squared center of mass energy \(s\) of the two scattered particles is obtained from the forward scattering amplitude using the optical theorem. Therefore it is determined at \(t=0\) by the intercept of the trajectory in a Chew-Frautschi plot [12] and the cross section is proportional to \(s^{\alpha_{P}(0)-1}\). In Fig. 1 a double logarithmic plot of the proton-proton total cross section _vs_ the squared center of mass energy \(s\) is displayed. One sees that the Pomeron contribution \(s^{0.086}\) gives an excellent description of the cross section over 6 orders of magnitude of \(s\). For values of \(s\lesssim 10^{3}\) GeV\({}^{2}\) another Regge pole exchange, the \(\rho\) trajectory with an intercept 1/2 [14], gives an additional contribution which vanishes as \(s^{-1/2}\) at large \(s\).
By adding both components \[\sigma_{pp}=a(s/s_{0})^{0.086}+b(s/s_{0})^{-0.05}, \tag{3}\] with \(a=24\,\)mb, \(b=28\,\)mb and \(s_{0}\equiv 4m_{p}^{2}\), we find a very good description of the total proton-proton cross section over 9 orders of magnitude in terms of the Pomeron and the \(\rho-\omega\) trajectories only. The Froissart-Martin bound [15; 16] states, however, that the total cross section of two colliding hadrons cannot grow faster than \(\ln^{2}\left(s/s_{0}\right)\). This result is based on very general assumptions of quantum field theory, especially on the unitarity of the scattering matrix. Therefore, the very high energy behavior described by single Pomeron exchange has to be modified by unitarity corrections (two and more Pomeron exchanges), but as can be seen from Fig. 1 these corrections would affect only a region well above \(10^{10}\) GeV\({}^{2}\). For later discussion we add here a general remark on the high energy behavior: From classical, even wave mechanical, considerations one expects a constant cross section determined by the geometrical extension of the scattered objects. However, in the quantum field theoretical shock-wave model put forward by Heisenberg [17], the total cross section increases as \(\ln^{2}s\) due to the possibility of particle creation. It is thus equal to the upper limit later predicted by the Froissart-Martin bound based on principles of quantum theory. Figure 1: The total proton-proton scattering cross section as a function of \(s\), the CM energy squared. The straight grey line in the figure shows the energy dependence \(s^{0.086}\), as predicted by a Pomeron with intercept \(\alpha_{P}(0)=1.086\). The pale blue data points are results from cosmic ray experiments and the red points are recent LHC data. Figure adapted from [13], Fig. 52.6. ### DIS entropy and the Pomeron intercept In QCD the gluon distribution at a given virtuality scale \(\mu\) is obtained via DGLAP evolution [18; 19; 20] from the behavior of the DIS cross sections \(\gamma^{*}+p\to X\). The gluon distribution \(g(x,\mu)\) is not an observable and depends also on the scale \(\mu\); consequently, the same holds for the DIS entropy, which we now write as \[S_{DIS}=\ln\big{(}x\,g(x,\mu)\big{)}. \tag{4}\] This is in agreement with the result mentioned in the introduction, namely, that the DIS entropy is not a directly observable quantity. Based on AdS/CFT duality concepts, the correspondence between a gravity theory in a higher-dimensional anti-de Sitter (AdS) space and conformal field theories (CFT) in physical space-time [21; 22; 23], a semiclassical model has been developed [4; 5; 6] which reproduces the same hadron spectra as observed in a specific dual model, the Veneziano model [24]. Furthermore, the analytic expressions of form factors obtained in this model have the same structure as those derived in a generalized Veneziano model [25; 26]. This model includes external currents in order to describe hadron form factors. This combined holographic and dual framework also provides nontrivial connections between the dynamics of form factors and quark and gluon distributions by incorporating Regge behavior at small longitudinal momentum fraction, \(x\), and the inclusive-exclusive connection at large \(x\) [3; 14; 27].
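The two-component parametrization in Eq. (3) above is simple to evaluate numerically. The sketch below is an illustration only; the coefficients and exponents are taken verbatim from Eq. (3), and no comparison with data is implied.

```python
import numpy as np

m_p = 0.938                       # proton mass in GeV
s0  = 4 * m_p**2                  # s_0 = 4 m_p^2 from Eq. (3), in GeV^2
a, b = 24.0, 28.0                 # coefficients quoted with Eq. (3), in mb

def sigma_pp(s):
    """Total pp cross section of Eq. (3): Pomeron plus rho-omega exchange (mb)."""
    return a * (s / s0)**0.086 + b * (s / s0)**(-0.05)

for sqrt_s in (10.0, 100.0, 7000.0, 13000.0):     # sqrt(s) in GeV
    s = sqrt_s**2
    print(f"sqrt(s) = {sqrt_s:8.0f} GeV:  sigma_pp = {sigma_pp(s):6.1f} mb")
```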
In the light-front holographic model the intrinsic gluon distribution in a hadron is the sum of contributions from all Fock states which contain a gluon component, \[xg(x)=\sum_{\tau}c_{\tau}xg_{\tau}(x), \tag{5}\] where the twist \(\tau\) is the number of components of a given Fock state and the \(c_{\tau}\)'s are expansion coefficients of the corresponding Fock component. The specific form for the \(\tau\)-component of the gluon distribution is given by [3] \[x\,g_{\tau}(x)=\frac{1}{N_{\tau}}w^{\prime}(x)\,w(x)^{1-\alpha_{P}(0)}[1-w(x)]^{\tau-2}, \tag{6}\] with normalization \(N_{\tau}=B\big{(}\tau-1,2-\alpha_{P}(0)\big{)}\). The universal function \(w(x)\) is independent of the twist \(\tau\) and satisfies the boundary conditions \(w(0)=0\), \(w(1)=1\) and \(w^{\prime}(1)=0\). Physical constraints are imposed on \(w(x)\) at small and large \(x\): At \(x\to 0\), \(w(x)\sim x\) from Regge theory, and at \(x\to 1\) one can apply the inclusive-exclusive counting rules \(g_{\tau}(x)\sim(1-x)^{2\tau-3}\), which gives the additional constraint \(w^{\prime}(1)=0\). These constraints largely determine \(w(x)\), which can be fixed in the quark sector [14]. If we limit the expansion in (5) to twist 4, the gluon distribution in the proton can be directly calculated without the introduction of arbitrary parameters [3]. An essential ingredient for the determination of the DIS entropy in this model is the relation between the Pomeron intercept and the intrinsic gluon distribution at small \(x\) values [3] \[x\,g(x,\mu)\sim\Big{(}\frac{1}{x}\Big{)}^{\alpha_{P}(0)-1},\qquad x\ll 1, \tag{7}\] in the hadronic domain, \(\mu\approx 1\) GeV, which sets the initial scale for the DGLAP evolution equations. This result is a consequence of the general relation of form factors and trajectories in the generalized Veneziano model [25; 26] and the structure of form factors in the light-front holographic framework [14; 28]. From (4) and (7) we then obtain \[S_{DIS}=\ln\big{(}x\,g(x,\mu)\big{)}\sim(\alpha_{P}(0)-1)\ln\Big{(}\frac{1}{x}\Big{)},\qquad x\ll 1. \tag{8}\] It shows that a Pomeron intercept larger than 1 (hypercritical Pomeron), which leads to the rising of the total proton-proton cross section at large energies (Fig. 1), is uniquely related to a positive von Neumann entropy since the rapidity \(Y=\ln(1/x)\) is always positive for \(x<1\). It is, in turn, a consequence of the separation of entangled parton states in a DIS experiment which gives rise to the entanglement entropy \(S_{DIS}\). The longitudinal light-front momentum fraction \(x\) can be identified with the Bjorken variable \(x_{bj}\) measured in the deep inelastic lepton-hadron scattering experiment \[x_{bj}=\frac{Q^{2}}{2p\cdot q}=\frac{Q^{2}}{W^{2}+Q^{2}-m_{p}^{2}}, \tag{9}\] where \(W^{2}=(q+p)^{2}\) represents the total photon-hadron energy squared and \(Q^{2}=-q^{2}>0\) the photon virtuality. In the high-energy kinematic domain, \(W^{2}\gg Q^{2}\), Eq. (9) reduces to \(x=\frac{Q^{2}}{W^{2}}\) and, for fixed \(Q^{2}\), we obtain \[S_{DIS}\sim(\alpha_{P}(0)-1)\ln\left(\frac{W^{2}}{Q^{2}}\right),\qquad W^{2}\gg Q^{2}. \tag{10}\] It leads to the remarkable result that one can identify the value of \(\alpha_{P}(0)\), which determines the gluon distribution \(xg(x,\mu)\) at small \(x\) in Eq. (7), with the value of \(\alpha_{P}(0)\) obtained from the high energy Pomeron contribution to the \(W\) dependence of the DIS cross section.
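A minimal numerical check of the single-twist form (6) and its small-\(x\) limit (7) can be made in a few lines. The profile \(w(x)=1-(1-x)^{2}\) used below is only an illustrative stand-in that satisfies the stated constraints (\(w(0)=0\), \(w\sim x\) at small \(x\), \(w(1)=1\), \(w^{\prime}(1)=0\)); it is not the \(w(x)\) fixed in the quark sector in Refs. [3; 14].

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

alpha_P = 1.086                     # soft-Pomeron intercept used in the text

# Illustrative profile obeying w(0)=0, w ~ x at small x, w(1)=1, w'(1)=0.
w  = lambda x: 1.0 - (1.0 - x)**2
dw = lambda x: 2.0 * (1.0 - x)

def xg_tau(x, tau):
    """Single-twist gluon distribution x*g_tau(x), Eq. (6)."""
    N_tau = beta(tau - 1, 2 - alpha_P)
    return dw(x) * w(x)**(1 - alpha_P) * (1 - w(x))**(tau - 2) / N_tau

# With N_tau = B(tau-1, 2-alpha_P), the integral of x*g_tau over [0,1] equals 1
# for any monotone profile with w(0)=0 and w(1)=1.
for tau in (3, 4):
    norm, _ = quad(xg_tau, 0.0, 1.0, args=(tau,))
    print(f"tau = {tau}:  integral of x*g_tau = {norm:.6f}")

# Small-x behaviour: x*g_tau ~ x^(1-alpha_P), i.e. (1/x)^(alpha_P-1) as in Eq. (7).
x1, x2 = 1e-4, 1e-5
slope = np.log(xg_tau(x2, 4) / xg_tau(x1, 4)) / np.log(x2 / x1)
print(f"effective small-x power: {slope:+.3f}  (expected {1 - alpha_P:+.3f})")
```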
But, as we shall discuss in the next section, it also implies that the Pomeron intercept should also be dependent on the scale. ### The scale dependent Pomeron Results from quasi-elastic photo and electroproduction of vector mesons show, for all processes at center of mass energies \(W\) larger than 10 GeV above threshold, an energy dependence of the integrated cross sections that can be fitted reasonably well by a power law \(\sigma_{\gamma^{*}+p\to V+p}\approx W^{\delta}\). The value of \(\delta\) depends strongly on the process and the virtuality of the photon as shown in Table 1. For integrated exclusive cross sections the whole trajectory contributes. In order to relate the exclusive cross sections, integrated over the whole range of momentum transfer \(t\) to the Pomeron intercept at \(t=0\) one has to take into account the shrinkage corrections due to the slope of the trajectory. The latter can be estimated, and from this, one can reconstruct the energy dependence of the squared forward amplitude which, in Regge theory, is given by \(\delta_{Regge}=4(\alpha_{P}(0)-1)\). The values for this quantity are shown in Table 1. The situation is similar for the inclusive cross section for the DIS process \(\gamma^{*}+p\to X\). The proton structure function \(F_{2}(x,Q^{2})\) has been studied over a large range of values for \(x\) (see (9)) and \(Q^{2}\) and the results for the \(x\)-dependence of the structure function could be fitted by the power behaviour [29] \[F_{2}\sim xg\left(x,Q^{2}\right)\sim x^{-\lambda(Q^{2})},\ \ \mbox{with}\ \ \lambda(Q^{2})=0.048\ln\left(\frac{Q^{2}}{\Lambda^{2}}\right), \tag{11}\] and \(\Lambda\simeq 290\) MeV. The structure function is related to the total DIS cross section for the process \(\gamma^{*}+p\to X\) by \[\sigma_{\gamma^{*}+p\to X}(W^{2})=\frac{4\pi^{2}\alpha_{em}}{Q^{2}}F_{2} \left(x,Q^{2}\right). \tag{12}\] If one applies Regge theory to the elastic forward scattering amplitude \(\gamma^{*}p\to\gamma^{*}p\) and relates it by the optical theorem to the DIS cross section \(\gamma^{*}+p\to X\), one expects a dependence \(\sigma_{\gamma^{*}+p\to X}\sim(W^{2})^{\lambda}\), with \(\lambda=(\alpha_{P}(0)-1)\), whereas the observed value of \(\lambda\) varies between \(\lambda=0.16\pm 0.015\) at \(Q^{2}=2\) GeV\({}^{2}\) and \(\lambda=0.36\pm 0.11\) at \(Q^{2}=150\) GeV\({}^{2}\)[29; 30; 31]. This breakdown of Regge theory in electromagnetic processes led not to a crisis of this theory, since Regge theory was created for purely hadronic processes. But on the other hand, in quantum field theory a photon couples inevitably to intermediate hadronic matter and the lifetime of these intermediate states is large as compared to the time scale of a hard scattering process [32]; In fact, in the case of vector meson production the lifetime of the hadronic state is even infinitely long. Therefore DIS scattering of (virtual) photons \begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline Reaction & \(\delta\) & \(\alpha_{P}-1\) & \(\chi^{2}\) \\ \hline \(\gamma\,p\to\rho\,p\) & 0.190 & 0.090 & 0.64 \\ \(\gamma^{*}\,p\to\rho\,p\,(Q^{2}=6\,\mbox{GeV}^{2})\) & 0.486 & 0.135 & 0.66 \\ \(\gamma\,p\to J/\psi\,p\) & 0.675 & 0.173 & 0.81 \\ \(\gamma\,p\to\Upsilon\,p\) & 0.8626 & 0.260 & 0.46 \\ \hline \hline \end{tabular} \end{table} Table 1: Pomeron intercept from integrated cross sections for photo and electroproduction of vector mesons. The cross sections were fitted by the curve \(\sigma=(W/W_{0})^{\delta}\). 
The intercept \(\alpha_{P}\) was obtained by taking into account the shrinkage corrections from the Regge trajectory slope, see [10]. and diffractive electroproduction should be treated like hadron-hadron scattering and Regge theory should be applicable. It was indeed shown by Donnachie and Landshoff that Regge theory can be applied to these processes by introducing two Pomerons [33]. One Pomeron was the usual "soft" Pomeron, well established in hadron scattering; the other one was tentatively related to the so-called BFKL [34; 35] "hard" Pomeron [32]. This emerges in perturbative QCD from the exchange of gluon ladders and leads to the effective power \(\lambda_{h}\) \[\lambda_{h}\equiv\alpha_{h}-1=\frac{4N_{c}\alpha_{s}}{\pi}\ln 2, \tag{13}\] where \(\alpha_{h}\) is the hard Pomeron intercept and \(\alpha_{s}\) is the QCD coupling constant. This value is rather unstable against higher order contributions, but with resummation corrections an intercept between 1.3 and 1.5 is plausible. With the conventional soft Pomeron and a BFKL Pomeron with an intercept \(\alpha_{h}\approx 1.42\), all electro- and photoproduction data in the available energy range could be fitted with reasonable accuracy [36]. It turned out that the soft Pomeron couples dominantly to extended objects and the hard one to smaller ones [37]. In DIS production the effective size of the intermediate hadronic system coupled to the virtual photon decreases with increasing virtuality. If one identifies the power \(\lambda(Q^{2})\) obtained from DIS experiments, Eq. (11), with an effective scale-dependent Pomeron intercept \[\lambda(Q^{2})=\alpha_{P}(0,Q^{2})-1, \tag{14}\] this effective intercept indeed increases by a factor of 2 from \(Q^{2}=2\) GeV\({}^{2}\) to \(Q^{2}=150\) GeV\({}^{2}\). For the photoproduction of heavy vector mesons the transverse size of the vector meson is relevant; it is roughly proportional to the inverse mass and, for the \(J/\Psi\) and \(\Upsilon\) mesons, smaller than a normal hadronic size. As mentioned above, this effective intercept could, for all energies available before the LHC, be explained by the two Pomeron model [33; 36; 38]. The two Pomeron picture, however, seems to be ruled out by the appearance of data for quasi-elastic photoproduction of \(J/\Psi\) [39; 40] and \(\Upsilon\) [41] vector mesons at LHC energies. As can be seen from Fig. 2, the power law for \(J/\psi\) production holds over the range from 20 GeV to above 1 TeV. This is impossible to achieve with two Pomerons, since at higher energies the hard one would dominate and the curve should show clear convexity. This led to the introduction of a single, but scale dependent, Pomeron [10] on a purely phenomenological basis. We thus write the \(Q\)-dependent effective power of the proton structure function \(F_{2}(x,Q^{2})\propto x^{-\lambda(Q^{2})}\) with the scale dependent Pomeron intercept \(\alpha_{P}(0,Q^{2})\) given by (14), and the experimentally determined function \(\lambda(Q^{2})\) of Eq. (11) from [29], as shown in Figure 3. In Ref. [42] we have examined the possibility that Eq. (6) is not only valid at the hadronic scale but at all scales \[x\,g(x,\mu)=\sum_{\tau}\frac{1}{N_{\tau}(\mu)}c_{\tau}(\mu)w^{\prime}(x,\mu)[1-w(x,\mu)]^{\tau(\mu)-2}\,w(x,\mu)^{1-\alpha_{P}(0,\mu)}, \tag{15}\] thus leading to a single, scale dependent Pomeron.
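Reading the HERA parametrization (11) as a scale-dependent intercept through Eq. (14) is easy to make quantitative. The short sketch below evaluates it at the two \(Q^{2}\) values quoted above; the ranges in parentheses are the measured values cited in the text.

```python
import numpy as np

Lambda = 0.29                     # GeV, from Eq. (11)

def lam(Q2):
    """Effective power lambda(Q^2) = alpha_P(0, Q^2) - 1, Eqs. (11) and (14)."""
    return 0.048 * np.log(Q2 / Lambda**2)

for Q2, quoted in ((2.0, "0.16 +/- 0.015"), (150.0, "0.36 +/- 0.11")):
    print(f"Q^2 = {Q2:6.1f} GeV^2:  lambda = {lam(Q2):.3f},"
          f"  alpha_P(0,Q^2) = {1 + lam(Q2):.3f}   (measured: {quoted})")
```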
It is a challenging problem, however, to unravel the scale dependencies arising from different sources in (15), but since the small-\(x\) behavior is determined uniquely by the the scale-dependent intercept it can, in principle, be extracted unambiguously. For very small values of \(x\) the structure function \(F_{2}(x,Q^{2})\) is strongly dominated by Figure 3: The effective power \(\lambda(\mu)=\alpha_{P}(0,\mu)-1\) extracted from the gluon distribution (15) for the pion (empty red circles) using the procedure in [42], is compared with the values of \(\lambda(Q^{2})\) obtained from the measured proton structure function \(F_{2}(x,Q^{2})\) under the assumption \(\mu^{2}=Q^{2}\). Experimental results from HERA [29] (filled black circles). Figure 2: In photoproduction of hadrons the scale is set by the virtuality of the mass of the produced \(J/\psi\) meson, \(Q^{2}\simeq M_{J/\Psi}^{2}\). The production cross section for \(J/\psi\) mesons can be described by a single Pomeron exchange with intercept \(\alpha_{P}(0)\simeq 1.15\) (solid line), distinctly higher than the intercept of conventional hadrons. The figure is adapted from [10], LHC data from [39; 40]. the gluon distribution and therefore the \(x\) dependence obtained in the HERA analysis [29], and parametrized in (11), yields a model independent determination of the Pomeron intercept if one accepts the relation between DIS-entropy, gluon density and Pomeron intercept expressed in (23). This allows the following comparison: The gluon distribution function obtained in [3] can be evolved using the DGLAP equations and the scale dependence of the intercept can be obtained as described above. This is illustrated in Fig. 3, where we compare the values for our evolved results for the effective power \(\lambda(\mu)\), extracted from the expansion of (15) in the range \(0.0001\leq x\leq 0.00016\) following the procedure described in [42], with the power \(\lambda(Q^{2})\) of the proton structure function \(F_{2}\) from Ref [29]. We show only the evolution results for the pion, since the proton results present numerical instabilities at lower evolution scales, currently under investigation. It should be noted, however, that this is rather a consistency check than an independent determination, since one has to relate the virtuality scale \(\mu\) in the gluon distribution function \(g(x,\mu)\) to the photon virtuality \(Q\) in the DIS process [42; 29] and the measured structure function is an important input for the evolution equations determining the \(\mu\) dependence of the gluon distribution \(g(x,\mu)\). ### Hadron entropy and multiplicity It was suggested in Ref. [1] a relation between the hadron multiplicity in the final state of the collision and the multiplicity of partons \[S_{partons}=\ln\left(xg(x,Q^{2})\right)\equiv S_{hadrons}. \tag{16}\] Such conjectural relation would be an indication of the absence of a significant increase of entropy in the hadronization process, as measured in the hadron multiplicity distributions [1; 2; 43]. In the present approach, this connection leads to the unexpected relevance of hadron multiplicity to the scale dependence of the Pomeron intercept, and fixes an upper bound to the growth's rate of hadron multiplicity at higher energies. The theoretical analysis of hadron multiplicity distributions \(P_{n}=\sigma_{n}/\sigma_{in}\), the ratio of the cross section to produce \(n\) hadrons in a high-energy collision to the inelastic cross section, is a rather complex undertaking [44; 45]. 
Most of the hadrons in the collision process are produced at relatively low transverse momentum and one has to recur to nonperturbative models or Monte Carlo event generator models of particle production. A simple model which incorporates some of the main features of the particle multiplicity distributions is the parton cascade model of Levin and Lublinski [46] for \(P_{n}\): \[\frac{dP_{n}(Y)}{dY}=-\lambda_{h}nP_{n}(Y)+(n-1)\lambda_{h}P_{n-1}(Y), \tag{17}\] with the solution \[P_{n}(Y)=e^{-\lambda_{h}Y}\left(1-e^{-\lambda_{h}Y}\right)^{n-1}, \tag{18}\] where \(Y=\ln(1/x)\) is the rapidity and \(\lambda_{h}\) is the hard Pomeron intercept (13). The mean multiplicity which follows from (17) and (18) is \[\langle n\rangle=\sum_{n}nP_{n}(x)=e^{\lambda_{h}Y}=\left(\frac{1}{x}\right)^{ \lambda_{h}}, \tag{19}\] which reproduces the power increase of the cross section with energy [1; 46], \(e^{\lambda_{h}Y}=\left(\frac{1}{x}\right)^{\lambda_{h}1}\) In the present approach the gluon distribution at small \(x\) is given by Eq. (7) and thus the multiplicity at the observational scale \(\mu\) follows from Eq (16) as discussed in [1] \[xg(x,\mu)\sim\langle n\rangle=\sum_{n}nP_{n}=\left(\frac{1}{x}\right)^{ \alpha_{P}(0,\mu)-1}, \tag{20}\] but with the Pomeron intercept \(\lambda(\mu)=\alpha_{P}(0,\mu)-1\) instead of \(\lambda_{h}\). At small values of \(x\) we have [48; 49; 50]\(x=\frac{t_{0}}{s}\), where we take the virtuality scale \(\mu\) as the average transverse momentum \(t_{0}\) of the produced hadrons or a particle mass in the final state of the collision [48]. Eq. (20) suggests that \[\langle n\rangle\sim\left(\frac{s}{t_{0}}\right)^{\alpha_{P}(0,t_{0})\ -\ 1}, \tag{21}\] where \(t_{0}\) is independent of the collision energy \(s\), but may differ for different datasets [48]. We notice that the theoretical predictions for the evolution of the effective power \(\lambda(\mu)\), shown in Fig. 3, suggest that this quantity saturates at higher scales, setting an asymptotic upper bound for the rate of change of the hadron multiplicity (21) for \(\lambda\simeq 0.4-0.5\), higher than the value \(\lambda=1/3\) assumed in [1] from two-dimensional conformal field theory. Figure 4: Left figure: CMS data with \(p_{T}>100\) MeV and pseudorapidity \(|\eta|<0.5\) from [51]. Right figure: ALICE data with \(p_{T}>150\) MeV and \(|\eta|<0.8\) from [52]. The grey lines in the figure correspond to Eq. (21) with \(\sqrt{t_{0}}\simeq 1.3-1.4\) GeV. In a recent analysis of LHC data [48] it was found that the average multiplicity is well described by a power law, but with different powers for each dataset. As an illustration, we show in Fig. 4 the CMS results from Ref. [51] which correspond to the Set I examined in Ref. [48] for \(p_{T}>100\) MeV and pseudorapidity \(|\eta|<0.5\). The particles from this data set correspond to gluon production. Set II from ALICE and ATLAS with larger rapidities \(|\eta|<2.4\) also includes the current fragmentation region, and Set III from ATLAS and CMS, also with large rapidities, contains also hadrons produced in the perturbative domain. We thus examine here only the results for Set I from Ref. [48] which abides to the mechanisms and approximations discussed in this article. We also show in Fig. 4 more recent results from ALICE [52] with \(p_{T}>150\) MeV and \(|\eta|<0.8\) not included in [48]. Using the predictions from Eq. (21) (grey line in Fig. 4), and comparing with the proton evolved results shown in Fig. 
3, we obtain for the average transverse momentum \(\sqrt{t_{0}}\simeq 1.3-1.4\) GeV. Results for the charm structure function [53; 54]\(F^{c\bar{c}}(x,Q^{2})\) indicate that in the presence of heavy quarks, the effective power \(\lambda(Q^{2})\) for the \(x\) dependence (11) is larger than for the full structure function \(F_{2}(x,Q^{2})\) (see [37]). This leads to the suggestion from Eq. (21), that also the multiplicity of hidden charm objects increases faster than the multiplicity of hadrons with light quarks since, in this case, the virtuality scale is determined by the charm mass. ## III Summary and Conclusion The light-front holographic framework, with essential constraints imposed by the generalized Veneziano model [25; 26], successfully describes the gluon distribution \(g(x,\mu)\) in nucleons and mesons at the hadronic scale \(\mu=1\) GeV. An essential input is its relation of the small-\(x\) behavior of the gluon distribution to the Pomeron intercept \(\alpha_{P}(0)\) from high-energy scattering [3]. The gluon distribution is a scale dependent quantity and the intrinsic distribution evaluated at the hadronic scale can be extended to other scales using QCD evolution equations. In Ref. [42] we have ventured that also the Pomeron intercept follows this evolution and is therefore scale dependent. Such a scale dependent Pomeron was proposed on purely phenomenological reasons in Ref. [10], where it was shown that such a dependence is suggested by the results for the photoproduction of \(J/\psi\) mesons and not in contradiction with the principles of Regge theory. A scale dependent Pomeron trajectory is also a salient feature of the gauge-gravity correspondence [55]. At very small values of \(x\) the gluon dominance leads to universal behavior, well captured by the present description, which allows us to relate different experimental results in the high energy domain. The relation (8) makes it clear that two features of high energy scattering, the hyper-critical Pomeron intercept \(\alpha_{P}(0)>1\) and the, here proposed, single but scale dependent Pomeron are closely related to typical features of a non-Abelian theory like QCD. In fact, in analogy of the thermodynamical definition of classical entropy for a system in equilibrium \[dS=\frac{dQ_{therm}}{T}, \tag{22}\] the relation (8) can be written in the differential form \[dS_{DIS}=d\ln\left(x\,g(x,\mu)\right))\sim 2\,\left(\alpha_{P}(0)-1\right)\frac{ dW}{W}. \tag{23}\] It shows that the hypercritical intercept \(\alpha_{P}(0)\) is related to the increase of the DIS-entropy and the gluon density \(x\,g(x,\mu)\) with increasing energy \(W\), corresponding to decreasing \(x\). The other salient feature discussed here, the scale dependence of the intercept, is a consequence of the scale dependence of the gluon distribution, which in turn is the consequence of the renormalization group of an UV stable renormalizable theory like QCD. It is therefore quite natural that in a nonperturbative holographic theory corresponding to a non-Abelian gauge theory [6] these features emerge in a transparent way. If the quantum entropy \(S_{DIS}\) of partons inside a hadron equals the classical hadron entropy \(S_{hadron}\) of the final state, as proposed in Ref. [1], then the final state entropy is also scale dependent and, with it, also the hadron multiplicity which depends on the average momentum of the hadrons in the final state; here the transverse momentum could set the scale as discussed in Sec. II.4. 
It describes the rate of change of the multiplicity in terms of the scale-dependent Pomeron, and sets an upper bound to its increase from the saturation of the effective power \(\lambda(\mu)\) at very large virtualities predicted from the QCD evolution of the gluon distributions in holographic QCD. We conclude this letter with some speculative remarks on a possible analogy between the black-hole (or Bekenstein-Hawking) entropy\({}^{2}\) and \(S_{DIS}\), the quantum entropy from deep inelastic scattering. Footnote 2: For a short non-technical review see Jacob D. Bekenstein, Bekenstein-Hawking entropy, Scholarpedia, 3(10):7375 (2008). The existence of black-hole entropy is demanded by the laws of thermodynamics [56], but the observation of microstates inside the black hole seems to be prevented by the horizon (no hair theorems). There is, however, a way out: Using the temperature of the black hole, which can be determined from the Hawking radiation [57] emitted from it, the Bekenstein-Hawking entropy can be related to the surface area \(A\) of the black hole \[S_{BH}=\frac{A}{4L_{P}^{2}}, \tag{24}\] where \(L_{P}\) is the Planck length \(L_{P}=\sqrt{G_{N}\hbar/c^{3}}\). On the other hand, the DIS entropy \(S_{DIS}\) is generated by the separation of the hadron state into parton states by the deep inelastic process. It is, however, not a directly observable quantity, since the partons cannot escape the hadron due to confinement. This also has the consequence that the partons at a fixed value of the Bjorken variable \(x\) are not directly countable, but their number depends on the renormalization scale \(\mu^{2}\). In our approach to the gluon density [3] the parton number at small \(x\) is given by the Pomeron intercept, see (7). At first sight this seems like a contradiction, since the gluon intercept, at least for energies in and below the TeV region (_i.e.,_ the LHC range and below), can be observed experimentally, _e.g.,_ from forward scattering experiments. The solution to this problem consists in the introduction of a scale dependence of the Pomeron intercept, \(\alpha_{P}(t=0,\mu^{2})\), as discussed extensively in Sec. II.3 and proposed earlier on phenomenological grounds in Refs. [7; 10]. In both cases, \(S_{BH}\) and \(S_{DIS}\), the determination of the entropy is based on theoretical arguments. A direct counting of the microstates is forbidden: by the event horizon in black holes and by confinement in DIS. Additional theoretical arguments, however, allow a relation to observable quantities, by Hawking radiation and high energy scattering. This may give further support to the idea of a possible relation between confinement and black holes as may be realized in a holographic context [58]. ###### Acknowledgements. We thank Tianbo Liu for providing us with the DGLAP results for the light-front holographic model shown in Fig. 3. We also thank Ulli Marquard for drawing our attention to the possible parallel with black-hole entropy. SJB is supported in part by the Department of Energy Contract No. DE-AC02-76SF00515.
2304.13568
Toxic comments reduce the activity of volunteer editors on Wikipedia
Wikipedia is one of the most successful collaborative projects in history. It is the largest encyclopedia ever created, with millions of users worldwide relying on it as the first source of information as well as for fact-checking and in-depth research. As Wikipedia relies solely on the efforts of its volunteer-editors, its success might be particularly affected by toxic speech. In this paper, we analyze all 57 million comments made on user talk pages of 8.5 million editors across the six most active language editions of Wikipedia to study the potential impact of toxicity on editors' behaviour. We find that toxic comments consistently reduce the activity of editors, leading to an estimated loss of 0.5-2 active days per user in the short term. This amounts to multiple human-years of lost productivity when considering the number of active contributors to Wikipedia. The effects of toxic comments are even greater in the long term, as they significantly increase the risk of editors leaving the project altogether. Using an agent-based model, we demonstrate that toxicity attacks on Wikipedia have the potential to impede the progress of the entire project. Our results underscore the importance of mitigating toxic speech on collaborative platforms such as Wikipedia to ensure their continued success.
Ivan Smirnov, Camelia Oprea, Markus Strohmaier
2023-04-26T13:59:25Z
http://arxiv.org/abs/2304.13568v1
# Toxic comments reduce the activity of volunteer editors on Wikipedia ###### Abstract Wikipedia is one of the most successful collaborative projects in history. It is the largest encyclopedia ever created, with millions of users worldwide relying on it as the first source of information as well as for fact-checking and in-depth research. As Wikipedia relies solely on the efforts of its volunteer-editors, its success might be particularly affected by toxic speech. In this paper, we analyze all 57 million comments made on user talk pages of 8.5 million editors across the six most active language editions of Wikipedia to study the potential impact of toxicity on editors' behaviour. We find that toxic comments consistently reduce the activity of editors, leading to an estimated loss of 0.5-2 active days per user in the short term. This amounts to multiple human-years of lost productivity when considering the number of active contributors to Wikipedia. The effects of toxic comments are even greater in the long term, as they significantly increase the risk of editors leaving the project altogether. Using an agent-based model, we demonstrate that toxicity attacks on Wikipedia have the potential to impede the progress of the entire project. Our results underscore the importance of mitigating toxic speech on collaborative platforms such as Wikipedia to ensure their continued success. ## Introduction Wikipedia is arguably one of the most successful collaborative projects in history. It has become the largest and most-read reference work ever created, and it is currently the fifth most popular website on the Internet1. Millions of users worldwide rely on Wikipedia as their first source of information when encountering a new topic, for fact-checking and in-depth research _Singer et al._ (_2017_). Even if caution might be required when consulting less actively maintained pages _Bruckman_ (_2022_), numerous studies have shown that Wikipedia is a reliable source of information in areas ranging from political science _Brown_ (_2011_) to pharmacology _Clauson et al._ (_2008_) and its accuracy is comparable to traditional encyclopedias _Giles_ (_2005_) and textbooks _Kraenbring et al._ (_2014_). Footnote 1: [https://www.semrush.com/website/top/](https://www.semrush.com/website/top/) [[Assessed on 23.02.2023]] One of the most remarkable aspects of Wikipedia's success is that its content is exclusively created and curated by volunteer-editors, known as Wikipedians. The English edition alone has 129,698 active editors2. However, this volunteer-driven model also makes Wikipedia susceptible to the inherent challenges associated with maintaining such a large online community _Kraut and Resnick_ (_2012_)_, _Keegan and Fiesler_ (_2017_)_. For example, it has been previously observed that Wikipedia is not free of conflict, particularly in the form of so-called edit wars _Yasseri et al._ (_2012_), which impose significant costs on the project _Kittur et al._ (_2007_) and could negatively affect the quality of Wikipedia articles _Arazy et al._ (_2011_). Footnote 2: [https://en.wikipedia.org/wiki/List_of_Wikipedia](https://en.wikipedia.org/wiki/List_of_Wikipedia) [[Assessed on 23.02.2023]] In this paper, we focus on the impact of toxic comments directed towards editors on their ac
2306.05573
Microscopic intervention yields abrupt transition in interdependent magnetic networks
The study of interdependent networks has recently experienced a boost with the development of experimentally testable materials that physically realize their critical behaviors, calling for systematic studies that go beyond the percolation paradigm. Here we study the critical phase transition of interdependent spatial magnetic networks model where dependency couplings between networks are realized by a thermal interaction having a tunable spatial range. We show how the critical phenomena and the phase diagram of this realistic model are highly affected by the range of thermal dissipation and how the latter changes the transition from continuous to abrupt. Furthermore, we show that microscopic interventions of localized heating and localized magnetic field yield a macroscopic phase transition and novel phase diagrams. Our results provide novel and realistic insights about controlling the macroscopic phases of interdependent materials by means of localized microscopic interventions.
Bnaya Gross, Ivan Bonamassa, Shlomo Havlin
2023-06-08T22:09:26Z
http://arxiv.org/abs/2306.05573v1
# Microscopic intervention yields abrupt transition in interdependent magnetic networks ###### Abstract The study of interdependent networks has recently experienced a boost with the development of experimentally testable materials that physically realize their critical behaviors, calling for systematic studies that go beyond the percolation paradigm. Here we study the critical phase transition of interdependent spatial magnetic networks model where dependency couplings between networks are realized by a thermal interaction having a tunable spatial range. We show how the critical phenomena and the phase diagram of this realistic model are highly affected by the range of thermal dissipation and how the latter changes the transition from continuous to abrupt. Furthermore, we show that microscopic interventions of localized heating and localized magnetic field yields a macroscopic phase transition and novel phase diagrams. Our results provide novel and realistic insights about controlling the macroscopic phases of interdependent materials by means of localized microscopic interventions. In the last few decades, network theory has been proven useful in describing collective phenomena in various fields ranging from social [1], technology [2; 3], biology [4; 5], and medicine [6]. It gained a significant boost with the development of the interdependent networks paradigm [7; 8], showing how dependency interactions between macroscopic systems lead to the emergence of novel phenomena that do not occur in their isolated counterparts. The resilience of such networks is usually studied by percolation theory [9; 10] which has been extensively applied in the last decade for studying and understanding different structural and functional properties of isolated and interacting networks. [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Despite the extensive study of percolation of abstract interdependent networks, applying the interdependent paradigm in physical networks as _physical interdependent networks_ (PINs) is a challenge that has remained, so far, unexplored. Some steps forward happened recently with the theoretical study of interdependence as thermal couplings in random magnetic networks [22] followed by the first experimental realization of interdependent superconducting networks [23]. While most studies in interdependent networks focus on random dependency couplings, only a few of them have focused on the effects that a spatial dependency range has on the model's behaviors. This is, for example, the case of percolation in spatial interdependent networks, where a finite range of dependency links has been proven crucial for exploring the variety of critical processes occurring in these models [24; 25; 26; 12]. Thus, despite the study of interdependent _random_ (non-spatial) magnetic networks [22], the realistic case of spatial physical networks has never been explored. In this Letter, we study a realistic model of spatial interdependent magnetic system and analyze the macroscopic effects induced by localized interventions. We find the different effects that localized heating and a localized magnetic field have on the kinetics of nucleating droplets and characterize the phase diagram as a function of the spatial range of the intervention. Our results generalize localized attacks beyond percolation and provide useful insights for future experimental validation. 
_Model_- Let us consider a system composed by two \(2D\) magnetic network lattices of size \(N=L^{2}\) placed in a common heat bath of temperature \(T\). Each node can represent a ferromagnetic grain and it is endowed with an Ising spin \(\sigma=\pm 1\) so that the configuration of spins in network \(\mu\) at time \(t\) is \(\mathbf{\sigma}_{\mu}(t)=\{\sigma_{1}^{\mu}(t),\sigma_{2}^{\mu}(t),....,\sigma_{N }^{\mu}(t)\}\), see Fig. 1. When current is induced in each layer, the networks become thermally coupled [22] by a heat dissipation mechanism resulting from the change of local Figure 1: **Illustration of the model.** Two \(2D\) magnetic layers (lattices) of size \(N=L^{2}\) are interdependent on each other via thermal couplings. The magnetic state of each network \(k\) is described by their spins configuration \(\mathbf{\sigma}_{\mu}=\{\sigma_{1}^{\mu},\sigma_{2}^{\mu},....,\sigma_{N}^{\mu}\}\) where each node of the network is an Ising spin pointing up (blue arrow) or down (red arrow) having \(\sigma_{i}^{\mu}=+1,-1\) respectively. Interdependence is realized via thermal coupling where each node in network \(\mu\) affects all nodes up to a distance \(r\) in the other network \(\mu^{\prime}\) and vice versa. resistance due to electron scattering, similarly to magnetoresistors [28; 29]. When spins are locally aligned (ordered), electrons experience weak scattering and the local resistance is low having weak dissipation. On the other hand, when spins are not locally aligned (disordered), strong scattering is expected with high resistance and strong dissipation [30]. Since local spins alignment is correlated with resistance which implies heat dissipation, the thermal coupling can be modeled as follows: locally ordered spins create weak thermal coupling while locally disordered spins create strong thermal coupling. We also assume that heat is dissipated up to a distance \(r\) (see Fig. 1), which can be controlled by the properties of the medium placed between the layers such as thermal conductivity and width. Thus, we assume here, that each node in one layer is thermally coupled with all the nodes in the other layer up to a distance \(r\) (Fig. 1). In this case, the dependency of node \(i\) in network \(\mu\) on its interdependent nodes in network \(\mu^{\prime}\) (and vice versa) is reflected by the relation \[\beta_{i}^{\mu}=\beta\Sigma_{i^{\prime}}^{\mu^{\prime}}(r)\quad, \tag{1}\] Here, \(\beta=1/T\) is the inverse temperature, \(T\), of the heat bath, \(\Sigma_{i}^{\mu}(r)=\frac{1}{|K_{i}(r)|}\sum_{j,|i-j|\leq r}A_{ij}^{\mu}\sigma _{j}^{\mu}\) is the average magnetization of nodes within a distance \(r\) from node \(i\) in network \(\mu\), \(A_{ij}^{\mu}\) is the adjacency matrix of network \(\mu\), and \(K_{i}(r)\) is the set of all nodes up to a distance \(r\) from node \(i\) and \(|K_{i}(r)|=\sum_{j,|i-j|\leq r}A_{ij}^{\mu}\sim\pi r^{2}\) is the number of nodes within a circle of radius \(r\). Thus, locally ordered spins (\(\Sigma_{i^{\prime}}^{\mu^{\prime}}(r)\simeq 1\)) result in \(\beta_{i}^{\mu}\simeq\beta\), i.e., the local temperature is weakly affected by the local ordering. However, as neighborhoods of spins get more and more disordered (i.e., \(\Sigma_{i^{\prime}}^{\mu^{\prime}}(r)\to 0\)), the local temperatures \(\beta_{i}^{\mu}\to 0\), inducing a strong overheating effect. We show below that the dependency interaction range, \(r\), plays a critical role and controls the propagation of thermal fluctuations. 
For short-range dependency, small Figure 2: **Interdependent magnetization phase transitions.** Magnetization \(M\) as a function of temperature \(T\) is shown for different values of the dependency interaction range \(r\). A smeared-out continuous transition is observed for short interaction range \(r<r_{c}\simeq 2\) and abrupt transition for \(r>r_{c}\). Inset: The critical temperatures, \(T_{c,<}(r)\), when increasing temperature (heating) from the ordered state (\(M\simeq 1\)) to the disordered state, and the critical transitions, \(T_{c,>}(r)\) when decreasing temperature (cooling) from the disordered state (\(M\simeq 0\)) to the ordered state, are the same in the continuous regime \(r<r_{c}\) but are different for \(r>r_{c}\) showing hysteresis. Here \(J=1\), \(L=200\) and performing \(10^{4}\) MCSs. Figure 3: **Critical dynamics at \(T_{c,<}\).****(a)** For low values of \(r\) above \(r_{c}\), nucleation transition is observed with a parabolic shape decrease of the magnetization associated with **(b)** a circular area of disordered spins that spontaneously appears at \(T_{c,<}\) and increases in time due to the dependency heat interactions between the layers. **(c)** The radius of the circle \(R\) increases linearly with time. **(d)** For high values of \(r\), such as \(r=100\) shown here, a plateau is observed where the magnetization remains nearly constant for a long time. At the end of the plateau, the system converges to the disordered phase exponentially fast. **(e)** The number of flipped spins as a function of time, \(S_{t}\), is constant during the plateau showing **(f)** a critical branching factor, \(\eta_{t}=S_{t}/S_{t-1}\simeq 1\). The analogy to percolation of abstract interdependent networks can be seen in Berezin _et al_[25] and Zhou _et al_[27]. \(r\), the effect of fluctuations remains local and the ferromagnetic to paramagnetic phase transitions are continuous, while for long-range dependency, large \(r\), fluctuations have global effects, and abrupt transitions are observed. _Magnetic evolution.-_ The magnetic evolution of the system with time from a given initial conditions \(\mathbf{\sigma}_{\mu}(0)\) can be captured using _Glauber dynamics_[31]. While taking into account the thermal coupling, a spin \(\sigma_{i}^{\mu}\) is randomly chosen from one of the networks and will be flipped with the probability [22]: \[\omega_{i}^{\mu}(\sigma_{i}^{\mu})=\left(1+\exp\{2J\beta\sigma_{i}^{\mu} \Sigma_{i^{\prime}}^{\mu^{\prime}}(r)\sum_{j}A_{ij}^{\mu}\sigma_{j}^{\mu}\} \right)^{-1}. \tag{2}\] This procedure continues until the system reaches equilibrium. A single Monte Carlo step (MCS) is defined as \(N\) attempts to flip randomly chosen spins and the number of steps will be considered here as time \(t\) when a non-equilibrium measurement of the magnetization during a phase transition is being conducted. In such a way, the magnetization of network \(\mu\), \(M_{\mu}(t)=\frac{1}{N}\sum_{i}\sigma_{i}^{\mu}(t)\), is defined using the spins configuration \(\mathbf{\sigma}_{\mu}(t)\) after \(t\) Monte Carlo steps from the initial conditions \(\mathbf{\sigma}_{\mu}(0)\). _Magnetization phase transitions.-_ We measure the steady-state magnetization as a function of temperature for different dependency range \(r\) both for ordered initial conditions (\(\sigma_{i}=+1\) for all spins in both networks) and disordered initial conditions (\(\sigma_{i}=\pm 1\) randomly), see Fig. 2. 
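The measurement protocol just described is straightforward to prototype. The Python sketch below is only an illustration of the dynamics of Eqs. (1)-(2) on two small thermally coupled lattices; the lattice size, interaction range, temperatures and number of sweeps are illustrative choices and are not those used to produce Fig. 2.

```python
import numpy as np

rng = np.random.default_rng(1)

L, J, r = 32, 1.0, 3                 # lattice size, coupling, thermal-coupling range
sweeps = 40                          # Monte Carlo steps per temperature

# lattice offsets with |d| <= r, used for the local average in Eq. (1)
disk = [(dx, dy) for dx in range(-r, r + 1) for dy in range(-r, r + 1)
        if dx * dx + dy * dy <= r * r]

def sigma_other(spins, mu, x, y):
    """Average magnetization within distance r around (x, y) in the other layer."""
    other = spins[1 - mu]
    return np.mean([other[(x + dx) % L, (y + dy) % L] for dx, dy in disk])

def run(T):
    """Glauber dynamics of Eq. (2) starting from the ordered state; returns final M."""
    spins = np.ones((2, L, L), dtype=int)
    beta = 1.0 / T
    for _ in range(sweeps):
        for _ in range(2 * L * L):                     # one MCS ~ N flip attempts
            mu = rng.integers(2)
            x, y = rng.integers(L), rng.integers(L)
            s = spins[mu, x, y]
            nn = (spins[mu, (x + 1) % L, y] + spins[mu, (x - 1) % L, y]
                  + spins[mu, x, (y + 1) % L] + spins[mu, x, (y - 1) % L])
            arg = 2.0 * J * beta * sigma_other(spins, mu, x, y) * s * nn
            if rng.random() < 1.0 / (1.0 + np.exp(arg)):
                spins[mu, x, y] = -s                   # accept Glauber flip
    return spins.mean()

for T in (1.0, 2.0, 3.5):
    print(f"T = {T:.1f}:  M after {sweeps} MCS = {run(T):+.3f}")
```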
Interestingly, a critical dependency range \(r_{c}\simeq 2\) exists below which the transition is continuous similar to a single network [32] but \(T_{c,<}(r)\) increases with \(r\), while above \(r_{c}\) the transition becomes abrupt with hysteresis phenomena (see Fig. 2 and inset). This behavior becomes apparent when measuring the transitions critical points \(T_{c,<}(r)\) (heating) and \(T_{c,>}(r)\) (cooling) from the ordered phase (\(M\simeq 1\)) and disordered phase (\(M\simeq 0\)) respectively, see inset of Fig. 2. The continuous transition appears for \(r<r_{c}\) and in this case there is no hysteresis, i.e., \(T_{c,<}(r)=T_{c,>}(r)\). In contrast, an abrupt transition and hysteresis are observed for \(r>r_{c}\) where \(T_{c,<}(r)>T_{c,>}(r)\). While for \(r>r_{c}\) the transition from the ordered phase to the disordered phase is abrupt, nevertheless, the transition nature depends on \(r\). For low values of \(r\), but still \(r>r_{c}\), fluctuations occur locally and a spontaneous nucleation transitions are observed at \(T_{c,<}(r)\). In this case, Figure 4: **Localized heating.****(a)** The upper layer is heated locally within a microscopic radius \(r^{h}\), to temperature \(T_{S}+\Delta T_{L}\). **(b)** This heating creates a disordered droplet which starts to dissipate heat to the bottom layer. **(c)** The localized regime in the bottom layer is heated up and becomes disordered as well. **(d)** The disordered regime in the bottom layer starts to dissipate heat back to the top layer broadening the circle of disorder. **(e)** The disordered droplet in the top layer extends due to the dissipation from the bottom layer. **(f)** This nucleation process continues until the disordered droplet takes over the system. **Phase diagrams.** While for low values of \(r\), but above \(r_{c}\), a spontaneous nucleation transition is observed at \(T_{c,<}\), an _induced_ nucleation macroscopic phase transition is seen for \(T<T_{c,<}\) by an external microscopically localized heating. The phase diagrams of the critical heating radius \(r_{c}^{h}\) for different heating intensity \(\Delta T_{L}\) are shown. **(g)** For \(\Delta T_{L}=0\) there are no induced nucleation transitions and only two phases separated by \(T_{c,<}(r)\) appear, ordered phase (brown) and disordered phase (blue). **(h)-(j)** Once the system is locally heated, a _metastable regime_ appears where localized heating with a critical radius above \(r_{c}^{h}\) can induce a nucleation transition. The metastable regime expands with the localized heat intensity \(\Delta T_{L}\) towards lower temperatures, shrinking the ordered phase. Here \(J=1\) and \(L=100\). a disordered droplet is spontaneously created and radially spread in the system, see Fig. 3**(a)-(c)**. Measuring the non-equilibrium magnetization as a function of time during the transition shows a parabolic shape (Fig. 3**(a)**) as a result of the nucleation of the droplet mass is taking over (Fig. 3**(b)**). This behavior is a result of the linear increase of the droplet radius with time \(R\sim t\) (Fig. 3**(c)**) resulting in the scaling of the droplet mass \(M_{d}=\pi R^{2}\sim t^{2}\). In contrast to the nucleation transition for low \(r\) but above \(r_{c}\), in the case of \(r\gg r_{c}\) fluctuation has a global effect and the transition is of mixed-order [33; 34; 12; 24]. 
In this case, the non-equilibrium behavior of the magnetization at the critical point shows a _plateau_ similar to interdependent networks [27], where the magnetization fluctuates around a given value for a long time before converging into the disordered phase exponentially fast, as shown in Fig. 3**(d)**. The plateau is characterized by an almost constant number of successful flips at each time step \(S_{t}\), shown in Fig. 3**(e)**, and a critical branching factor \(\eta_{t}=S_{t}/S_{t-1}\simeq 1\), see Fig. 3**(f)**. _Global macroscopic transition due to microscopic intervention.-_ Spatial interdependent networks can experience an induced macroscopic phase transition as a result of local microscopic interventions such as localized attacks in percolation, which create a new metastable state [25; 26; 35]. Here we show the physical analog of a nucleation-induced transition from the ordered to the disordered phase in spatial interdependent magnetization networks both for microscopic localized heating and for a localized magnetic field. Localized interventions are assumed to change physical quantities such as temperature or magnetic field within a circle of a finite radius \(r^{h}\) in one layer of the system. In the case of localized heating, spins outside the circle experience the system temperature \(T_{S}\) while spins within the circle also experience the localized heating \(\Delta T_{L}\), with the total temperature \(T=T_{S}+\Delta T_{L}\). Thus, the temperature \(T_{i}\) felt by node \(i\) can be summarized as \[T_{i}=\begin{cases}T_{S},&\text{if }d_{i}>r^{h}\\ T_{S}+\Delta T_{L},&\text{if }d_{i}\leq r^{h}\end{cases} \tag{3}\] where \(d_{i}\) is the distance of node \(i\) from the center of the heating. Since \(T_{i}\) affects the flipping probability of spin \(i\) in Eq. (2), spins within the circle are likely to be more disordered, and if \(r^{h}\) is large enough, a disordered droplet that will spread in the system can be created and the system will experience an induced nucleation transition, as demonstrated in Figs. 4**(a)-(f)**. In fact, for a given \(\Delta T_{L}\), a _finite_ critical radius \(r^{h}_{c}\) exists in a metastable regime where, for any values of \(T_{S}\) and \(r\), for \(r^{h}<r^{h}_{c}\) the disorder will not spread and the system will remain in the ordered phase, while for \(r^{h}>r^{h}_{c}\) the system will experience a nucleation transition into the disordered state. Note that \(r^{h}_{c}\) does not depend on \(L\) and therefore it is regarded as a microscopic intervention (see SI for finite size analysis). Thus, a phase diagram of \(r^{h}_{c}(T_{S},r)\) can be analyzed for different heating intensities. The null case of no localized heating, i.e. \(\Delta T_{L}=0\), shown in Fig. 4**(g)**, displays two phases, the ordered phase for \(T_{S}<T_{c,<}(r)\) and the disordered phase for \(T_{S}>T_{c,<}(r)\), similar to the inset of Fig. 2. However, once the system is locally heated, i.e. \(\Delta T_{L}>0\), a _metastable_ regime appears where a localized microscopic intervention of radius larger than \(r^{h}_{c}\) will induce a macroscopic transition. Figs. 4**(h)-(j)** show how the metastable regime expands toward the ordered phase as the circular heating intensity increases. Similar to localized microscopic heating, applying a negative localized magnetic field \(h<0\) can induce a nucleation type of macroscopic transition from the ordered phase to the disordered phase.
In this case, the localized field, and the flipping probability should adjust in a similar way to Eqs.(2)-(3) which can be seen in the SI. The null case of no magnetic field i.e. \(h=0\) shown in Fig. 5**(a)** displays two phases similar to the null case of localized heating shown in Fig. 4**(a)**. The ordered phase is for Figure 5: **Localized magnetic field intervention.** Similar to localized heating, applying a localized negative magnetic field \(h<0\) can induce a nucleation transition. **(a)** For \(h=0\) there are no induced nucleation transitions and only two phases separated by \(T_{c,<}(r)\) appear, ordered phase (brown) and disordered phase (blue). **(b) - (d)** Once a localized magnetic field is applied, a _metastable regime_ appears where a circular critical radius \(r^{h}_{c}\) of localized field can induce a nucleation transition. The metastable regime expands with the field intensity \(|h|\) towards lower temperatures, shrinking finally the ordered phase which completely disappears for large field intensity \(|h|\gg 0\). Note the analogy between this PIN system studied here to percolation of abstract interdependent spatial networks [25]. \(T_{S}<T_{c,<}(r)\) and the disordered phase for \(T_{S}>T_{c,<}(r)\). However, as a localized magnetic field is applied, \(h<0\), a metastable regime appears where a finite critical microscopic radius \(r_{c}^{h}\) influenced by a localized field can induce nucleation transition. The metastable regime expands as \(|h|\) increases (Figs. 5**(b) - (d)**) until the ordered phases completely disappear for \(r>r_{c}\) even at zero temperature. Note that the metastable regime of the localized field expands differently compared to the localized heating in Fig. 4 probably as a result of the symmetry breaking of the field. _Discussions.-_ The extensive theoretical study of percolation of abstract interdependent networks in the last decade has led to the first-ever experiment of interdependent superconductors [23]. The ability to perform controlled experiments in PINs is a significant breakthrough in identifying and proving novel phase transitions. Furthermore, the study of PINs shows significantly richer phenomena compared to abstract percolation, especially in the realistic spatial case, and is expected to be a novel frontier of experimental and theoretical research of PINs. We hope our theoretical results of novel phase transitions in thermally coupled magnetic networks, and particularly the induced nucleation transition of microscopically localized interventions, will motivate experimentalists and theorists to test this theory and further study interdependent magnetic systems as well as other PINs.
2308.13865
Non-uniform convergence of solution for the Camassa-Holm equation in the zero-filter limit
In this short note, we prove that given initial data $u_0 \in H^s(\mathbb{R})$ with $s>\frac32$ and for some $T>0$, the solution of the Camassa-Holm equation does not converge uniformly with respect to the initial data in $L^\infty(0,T;H^s(\mathbb{R}))$ to the inviscid Burgers equation as the filter parameter $\alpha$ tends to zero. This is a supplement to our recent result on the zero-filter limit.
Jinlu Li, Yanghai Yu, Weipeng Zhu
2023-08-26T12:51:35Z
http://arxiv.org/abs/2308.13865v1
# Non-uniform convergence of solution for the Camassa-Holm equation in the zero-filter limit Jinlu Li\({}^{1}\), Yanghai Yu\({}^{2}\) and Weipeng Zhu\({}^{3}\) \({}^{1}\) School of Mathematics and Computer Sciences, Gannan Normal University, Ganzhou 341000, China \({}^{2}\) School of Mathematics and Statistics, Anhui Normal University, Wuhu 241002, China \({}^{3}\) School of Mathematics and Big Data, Foshan University, Foshan, Guangdong 528000, China E-mail: [email protected]; [email protected](Corresponding author); [email protected] **Abstract:** In this short note, we prove that given initial data \(u_{0}\in H^{s}(\mathbb{R})\) with \(s>\frac{3}{2}\) and for some \(T>0\), the solution of the Camassa-Holm equation does not converge uniformly with respect to the initial data in \(L^{\infty}(0,T;H^{s}(\mathbb{R}))\) to the inviscid Burgers equation as the filter parameter \(\alpha\) tends to zero. This is a supplement to our recent result on the zero-filter limit. **Keywords:** Camassa-Holm equation; Burgers equation; Non-uniform convergence; Zero-filter limit. **MSC (2010):** 35Q35. ## 1 Introduction In this paper, we continue to consider the zero-filter limit \(\alpha\to 0\) for the Camassa-Holm equation in Sobolev spaces \[\left\{\begin{aligned} &\partial_{t}m+2m\partial_{x}u+u\partial_{x}m=0,\quad(t,x)\in\mathbb{R}^{+}\times\mathbb{R},\\ & m=(1-\alpha^{2}\partial_{x}^{2})u,\\ & u(0,x)=u_{0}(x),\end{aligned}\right. \tag{1.1}\] where the constant \(\alpha>0\) is a filter parameter. When the filter parameter \(\alpha=0\), Eq.(1.1) becomes the Burgers equation \[\left\{\begin{aligned} & u_{t}+3u\partial_{x}u=0,\quad(t,x)\in \mathbb{R}^{+}\times\mathbb{R},\\ & u(0,x)=u_{0}(x).\end{aligned}\right. \tag{1.2}\] The Camassa-Holm equation was first proposed in the context of hereditary symmetries studied in [14] and was then derived explicitly as a water wave equation by Camassa-Holm [3]. Eq.(1.1) is completely integrable [3, 6] with a bi-Hamiltonian structure [5, 14] and infinitely many conservation laws [3, 14]. Also, it admits exact peaked soliton solutions (peakons) of the form \(u(x,t)=ce^{-|x-ct|}\) with \(c>0\), which are orbitally stable [12]. Another remarkable feature of the Camassa-Holm equation is the wave breaking phenomenon: the solution remains bounded while its slope becomes unbounded in finite time [4, 8, 9]. It is worth mentioning that the peakons present the characteristic shape of the travelling water waves of greatest height and largest amplitude, and arise as solutions to the free-boundary problem for incompressible Euler equations over a flat bed, see Refs. [7, 10, 11] for the details. We note that the pseudo-differential operator \((1-\alpha^{2}\partial_{x}^{2})^{-1}\) with the Fourier multiplier \((1+\alpha^{2}|\xi|^{2})^{-1}\) can be defined as follows \[\left(1-\alpha^{2}\partial_{x}^{2}\right)^{-1}f=g*f,\quad\forall\ f\in L^{2}( \mathbb{R}), \tag{1.3}\] where \(g(x):=\frac{1}{2\alpha}e^{-\frac{|x|}{\alpha}}\), \(x\in\mathbb{R}\), and \(*\) denotes convolution, so that \(u=g*m\). 
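As a quick numerical illustration of the identity (1.3) (it plays no role in the argument below), one can check on a periodic grid that applying the Fourier multiplier \((1+\alpha^{2}|\xi|^{2})^{-1}\) agrees with convolving against \(g(x)=\frac{1}{2\alpha}e^{-|x|/\alpha}\), up to discretization and periodization error. The grid sizes and the test profile are arbitrary choices.

```python
import numpy as np

# Apply (1 - a^2 d_xx)^{-1} via its Fourier multiplier and compare with g * m from (1.3).
N, Lx, a = 2048, 60.0, 0.3
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
dx = Lx / N
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

m = np.exp(-x**2) * np.cos(3.0 * x)                      # an arbitrary smooth, decaying profile
u_mult = np.real(np.fft.ifft(np.fft.fft(m) / (1.0 + a**2 * xi**2)))

g = np.exp(-np.abs(x) / a) / (2.0 * a)
u_conv = (np.convolve(m, g) * dx)[N // 2 : N // 2 + N]   # discrete version of u = g * m

print("max |difference|:", np.max(np.abs(u_mult - u_conv)))
```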
Using this identity and applying the pseudo-differential operator \(\left(1-\alpha^{2}\partial_{x}^{2}\right)^{-1}\) to Eq.(1.1), one can rewrite Eq.(1.1) as a quasi-linear nonlocal evolution equation of hyperbolic type, namely \[\begin{cases}u_{t}+3u\partial_{x}u=-\alpha^{2}\partial_{x}^{3}\left(1-\alpha^ {2}\partial_{x}^{2}\right)^{-1}u^{2}-\frac{\alpha^{2}}{2}\partial_{x}\left(1- \alpha^{2}\partial_{x}^{2}\right)^{-1}(\partial_{x}u)^{2},\\ u(0,x)=u_{0}(x).\end{cases} \tag{1.4}\] Formally, as \(\alpha\to 0\), the solution of the Camassa-Holm equation (1.4) converges to the solution of the following Burgers equation \[\begin{cases}u_{t}+3u\partial_{x}u=0,\qquad(t,x)\in\mathbb{R}^{+}\times \mathbb{R},\\ u(0,x)=u_{0}(x).\end{cases} \tag{1.5}\] The Burgers equation is perhaps the most basic example of a PDE evolution leading to shocks. Further background and motivation for the Burgers equation may be found in [1, 13, 19, 20, 21, 16] and references therein. Gui-Liu [15] proved that the solutions of the Camassa-Holm equation with additional dissipative term \(\nu\Lambda^{\gamma}u\) does converge, at least locally, to the one of the dissipative Burgers equation as the filter parameter \(\alpha\) tends to zero in the lower regularity Sobolev spaces. Recently, in [18], we considered the zero-filter limit for the Camassa-Holm equation without dissipative term in Sobolev spaces, and proved that the solution of (1.4) converges to the solution of the inviscid Burgers equation (1.5) in the topology of Sobolev spaces. Precisely speaking, **Theorem 1.1** ([18]).: _Let \(s>\frac{3}{2}\) and \(\alpha\in(0,1)\). Assume that the initial data \(u_{0}\in H^{s}(\mathbb{R})\). Let \(\mathbf{S}_{t}^{\alpha}(u_{0})\) and \(\mathbf{S}_{t}^{0}(u_{0})\) be the smooth solutions of (1.4) and (1.5) with the initial data \(u_{0}\) respectively. Then there exists a time \(T=T(\|u_{0}\|_{H^{s}})>0\) such that \(\mathbf{S}_{t}^{\alpha}(u_{0}),\mathbf{S}_{t}^{0}(u_{0})\in\mathcal{C}([0,T];H ^{s})\) and_ \[\lim_{\alpha\to 0}\left\|\mathbf{S}_{t}^{\alpha}(u_{0})-\mathbf{S}_{t}^{0}(u_{0} )\right\|_{L^{\infty}_{T}H^{s}}=0.\] An interesting problem appears: For any \(u_{0}\in U_{R}\), in the zero-filter limit \(\alpha\to 0\), whether or not the \(H^{s}\)-convergence \[\mathbf{S}_{t}^{\alpha}(u_{0})\to\mathbf{S}_{t}^{0}(u_{0})\quad\text{in}\quad L _{T}^{\infty}H^{s}\] can be established _uniformly with respect to the initial data \(u_{0}\)?_ For any \(R>0\), from now on, we denote any bounded subset \(U_{R}\subset H^{s}(\mathbb{R})\) by \[U_{R}:=\left\{\phi\in H^{s}(\mathbb{R}):\|\phi\|_{H^{s}(\mathbb{R})}\leq R \right\}.\] In this paper, we shall answer the above question and state our main result as follows. **Theorem 1.2**.: _Let \(\alpha\in(0,1)\) and \(s>\frac{3}{2}\). For any \(u_{0}\in U_{R}\), let \(\mathbf{S}_{t}^{\alpha}(u_{0})\) and \(\mathbf{S}_{t}^{0}(u_{0})\) be the solutions of (1.4) and (1.5) with the same initial data \(u_{0}\), respectively. Then a family of solutions \(\left\{\mathbf{S}_{t}^{\alpha}(u_{0})\right\}_{\alpha>0}\) to (1.4)_ \[\mathbf{S}_{t}^{\alpha}:\left\{\begin{aligned} & U_{R}\to\mathcal{C}([0,T];H^{s}),\\ & u_{0}\mapsto\mathbf{S}_{t}^{\alpha}(u_{0}),\end{aligned}\right.\] _do not converge strongly in a uniform way with respect to initial data to the solution \(\mathbf{S}_{t}^{0}(u_{0})\) of (1.5) in \(H^{s}\). 
More precisely, there exists a sequence initial data \(\{u_{0}^{n}\}_{n=1}^{\infty}\in U_{R}\) such that for a short time \(T_{0}\leq T\)_ \[\liminf_{\alpha_{n}\to 0}\left\|\mathbf{S}_{t}^{\alpha_{n}}(u_{0}^{n})- \mathbf{S}_{t}^{0}(u_{0}^{n})\right\|_{L_{T_{0}}^{\infty}H^{s}}\geq\eta_{0},\] _with some positive constant \(\eta_{0}\)._ **Notation** Throughout this paper, we will denote by \(C\) any positive constant independent of the parameter \(\alpha\), which may change from line to line. The symbol \(\mathrm{A}\lesssim(\gtrsim)\mathrm{B}\) means that there is a uniform positive "harmless" constant \(C\) independent of \(\mathrm{A}\) and \(\mathrm{B}\) such that \(\mathrm{A}\leq(\geq)\mathrm{CB}\), and we sometimes use the notation \(\mathrm{A}\approx\mathrm{B}\) means that \(\mathrm{A}\lesssim\mathrm{B}\) and \(\mathrm{B}\lesssim\mathrm{A}\). Given a Banach space \(X\), we denote its norm by \(\|\cdot\|_{X}\). For \(I\subset\mathbb{R}\), we denote by \(\mathcal{C}(I;X)\) the set of continuous functions on \(I\) with values in \(X\). Sometimes we will denote \(L^{p}(0,T;X)\) by \(L_{T}^{p}X\). For \(s\in\mathbb{R}\), the nonhomogeneous Sobolev space is defined by \(\|f\|_{H^{s}}^{2}=\int_{\mathbb{R}}(1+|\xi|^{2})^{s}|\widehat{f}(\xi)|^{2} \mathrm{d}\xi.\) We recall the classical result for later proof. **Lemma 1.1** ([2]).: _For \(s>0\), \(H^{s}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\) is an algebra. Moreover, we have for any \(u,v\in H^{s}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\)_ \[\|uv\|_{H^{s}(\mathbb{R})}\leq C(\|u\|_{H^{s}(\mathbb{R})}\|v\|_{L^{\infty}( \mathbb{R})}+\|v\|_{H^{s}(\mathbb{R})}\|u\|_{L^{\infty}(\mathbb{R})}).\] _In particular, for \(s>\frac{1}{2}\), due to the fact \(H^{s}(\mathbb{R})\hookrightarrow L^{\infty}(\mathbb{R})\), then we have_ \[\|uv\|_{H^{s}(\mathbb{R})}\leq C\|u\|_{H^{s}(\mathbb{R})}\|v\|_{H^{s}(\mathbb{ R})}.\] Proof of Theorem 1.2 For fixed \(\alpha>0\), by the classical local well-posedness result, we known that there exists a \(T_{\alpha}=T(\|u_{0}\|_{H^{s}},s,\alpha)>0\) such that the Camassa-Holm has a unique solution \(\mathbf{S}_{t}^{\alpha}(u_{0})\in\mathcal{C}([0,T_{\alpha}];H^{s})\). Furthermore, we can obtain that \(\exists\ T=T(\|u_{0}\|_{H^{s}},s)>0\) such that \(T\leq T_{\alpha}\) and there exists \(C_{1}>0\) independent of \(\alpha\) such that \[\|\mathbf{S}_{t}^{\alpha}(u_{0})\|_{L_{T}^{\infty}H^{s}}\leq C_{1}\,\|u_{0}\|_ {H^{s}}\,,\quad\forall\alpha\in[0,1). \tag{2.6}\] Moreover, if \(u_{0}\in H^{\gamma}\cap H^{s}\) for some \(\gamma\geq s-1\), then there exists \(C_{2}(\|u_{0}\|_{H^{s}})>0\) independent of \(\alpha\) such that \[\|\mathbf{S}_{t}^{\alpha}(u_{0})\|_{L_{T}^{\infty}H^{\gamma}}\leq C_{2}(\|u_{ 0}\|_{H^{s}})\,\|u_{0}\|_{H^{\gamma}}\,. \tag{2.7}\] For more details on the proof of (2.6) and (2.7), we can refer to see [18]. Next, we establish the following proposition will play a crucial role in the proof of Theorem 1.2. **Proposition 2.1**.: _Let \(\alpha\in[0,1)\). Assume that \(s>\frac{3}{2}\) and \(\|u_{0}\|_{H^{s}}\approx 1\). Let \(\mathbf{S}_{t}^{\alpha}(u_{0})\) and \(\mathbf{S}_{t}^{0}(u_{0})\) be the smooth solutions of (1.4) and (1.5) with the same initial data \(u_{0}\), respectively. 
Then we have_ \[\left\|\mathbf{S}_{t}^{\alpha}(u_{0})-u_{0}-t\mathbf{E}(\alpha,u_{0})\right\| _{H^{s}}\leq Ct^{2}\mathbf{F}(\alpha,u_{0}),\] _where we denote_ \[\mathbf{E}(\alpha,u_{0}) :=-3u_{0}\partial_{x}u_{0}-\alpha^{2}\partial_{x}^{3}\left(1- \alpha^{2}\partial_{x}^{2}\right)^{-1}u_{0}^{2}-\frac{\alpha^{2}}{2}\partial _{x}\left(1-\alpha^{2}\partial_{x}^{2}\right)^{-1}(\partial_{x}u_{0})^{2},\] \[\mathbf{F}(\alpha,u_{0}) :=\alpha\|u_{0}\|_{H^{s+1}}\left(\alpha\|u_{0}\|_{H^{s+1}}+\|u_{ 0}\|_{H^{s-1}}\|u_{0}\|_{H^{s+1}}\right)+(\alpha+\|u_{0}\|_{H^{s-1}})\left(\|u _{0}\|_{H^{s+1}}+\|u_{0}\|_{H^{s-1}}\|u_{0}\|_{H^{s+2}}\right).\] Proof.: For simplicity, we denote \(u(t)=\mathbf{S}_{t}^{\alpha}(u_{0})\). Firstly, we need to estimate the different Sobolev norms of the term \(u(t)-u_{0}\), which can be bounded by \(t\) multiplying the corresponding Besov norms of initial data \(u_{0}\). For \(t\in[0,T]\), by the fundamental theorem of calculus in the time variable and using the product estimates from Lemma 1.1, we obtain from (1.4) that \[\|u(t)-u_{0}\|_{H^{s}} \leq\int_{0}^{t}\|\partial_{x}u\|_{H^{s}}\mathrm{d}\tau\] \[\leq\int_{0}^{t}\left(3\|u\partial_{x}u\|_{H^{s}}+\alpha^{2}\left\| \partial_{x}^{3}\left(1-\alpha^{2}\partial_{x}^{2}\right)^{-1}u^{2}\right\|_{H ^{s}}\right)\mathrm{d}\tau\] \[\quad+\int_{0}^{t}\frac{\alpha^{2}}{2}\left\|\partial_{x}\left(1 -\alpha^{2}\partial_{x}^{2}\right)^{-1}(\partial_{x}u)^{2}\right\|_{H^{s}} \mathrm{d}\tau\] \[\lesssim t\left(\|u\partial_{x}u\|_{L_{T}^{\infty}H^{s}}+\alpha \left\|(\partial_{x}u)^{2}\right\|_{L_{T}^{\infty}H^{s}}\right)\] \[\lesssim t\left(\|u\|_{L_{T}^{\infty}H^{s-1}}\|u\|_{L_{T}^{ \infty}H^{s+1}}+\alpha\left\|\partial_{x}u\right\|_{L_{T}^{\infty}H^{s-1}} \left\|\partial_{x}u\right\|_{L_{T}^{\infty}H^{s}}\right)\] \[\lesssim t\left(\|u_{0}\|_{H^{s-1}}\|u_{0}\|_{H^{s+1}}+\alpha\|u_{ 0}\|_{H^{s+1}}\right), \tag{2.8}\] where we have used that \(H^{s-1}(\mathbb{R})\hookrightarrow L^{\infty}(\mathbb{R})\) with \(s>\frac{3}{2}\). 
Following the same procedure of estimates as above, we have \[\|u(t)-u_{0}\|_{H^{s-1}} \leq\int_{0}^{t}\|\partial_{\tau}u\|_{H^{s-1}}\mathrm{d}\tau\] \[\leq\int_{0}^{t}\left(3\|u\partial_{x}u\|_{H^{s-1}}+\alpha^{2} \left\|\partial_{x}^{3}\left(1-\alpha^{2}\partial_{x}^{2}\right)^{-1}u^{2} \right\|_{H^{s-1}}\right)\mathrm{d}\tau\] \[\quad+\int_{0}^{t}\frac{\alpha^{2}}{2}\left\|\partial_{x}\left(1- \alpha^{2}\partial_{x}^{2}\right)^{-1}\left(\partial_{x}u\right)^{2}\right\|_ {H^{s-1}}\mathrm{d}\tau\] \[\lesssim t\left(\|u\partial_{x}u\|_{L^{\infty}_{t}H^{s-1}}+\alpha \left\|(\partial_{x}u)^{2}\right\|_{L^{\infty}_{t}H^{s-1}}\right)\] \[\lesssim t\left(\|u\|_{L^{\infty}_{t}H^{s-1}}\|u\|_{L^{\infty}_{t} H^{s}}+\alpha\|u\|_{L^{\infty}_{t}H^{s}}^{2}\right)\] \[\lesssim t\left(\|u_{0}\|_{H^{s-1}}+\alpha\right), \tag{2.9}\] \[\|u(t)-u_{0}\|_{H^{s+1}} \leq\int_{0}^{t}\left\|\partial_{\tau}u\right\|_{H^{s+1}}\mathrm{ d}\tau\] \[\leq\int_{0}^{t}\left(3\|u\partial_{x}u\|_{H^{s+1}}+\alpha^{2} \left\|\partial_{x}^{3}\left(1-\alpha^{2}\partial_{x}^{2}\right)^{-1}u^{2} \right\|_{H^{s+1}}\right)\mathrm{d}\tau\] \[\quad+\int_{0}^{t}\frac{\alpha^{2}}{2}\left\|\partial_{x}\left(1- \alpha^{2}\partial_{x}^{2}\right)^{-1}\left(\partial_{x}u\right)^{2}\right\|_ {H^{s+1}}\mathrm{d}\tau\] \[\lesssim t\left(\|u\partial_{x}u\|_{L^{\infty}_{t}H^{s+1}}+\left\| \left(\partial_{x}u\right)^{2}\right\|_{L^{\infty}_{t}H^{s}}\right)\] \[\lesssim t\left(\|u\|_{L^{\infty}_{t}H^{s-1}}\|u\|_{L^{\infty}_{t} H^{s+2}}+\|u\|_{L^{\infty}_{t}H^{s}}\|u\|_{L^{\infty}_{t}H^{s+1}}\right)\] \[\lesssim t\left(\|u_{0}\|_{H^{s-1}}\|u_{0}\|_{H^{s+2}}+\|u_{0}\|_{H^{s +1}}\right). \tag{2.10}\] Next, we estimate the \(H^{s}\)-norm for the term \(u(t)-u_{0}-t\mathbf{E}(\alpha,u_{0})\) which can be bounded by \(t^{2}\) multiplying the Sobolev norms of initial data \(u_{0}\). For \(t\in[0,T]\), by the fundamental theorem of calculus in the time variable and using the product estimates from Lemma 1.1 again, we obtain from (1.4) that \[\|u(t)-u_{0}-t\mathbf{E}(\alpha,u_{0})\|_{H^{s}}\leq\int_{0}^{t} \|\partial_{\tau}u-\mathbf{E}(\alpha,u_{0})\|_{H^{s}}\mathrm{d}\tau\] \[\leq\int_{0}^{t}\left(3\left\|u\partial_{x}u-u_{0}\partial_{x}u_ {0}\right\|_{H^{s}}+\alpha^{2}\left\|\partial_{x}^{3}\left(1-\alpha^{2} \partial_{x}^{2}\right)^{-1}\left(u^{2}-u_{0}^{2}\right)\right\|_{H^{s}} \right)\mathrm{d}\tau\] \[\quad+\int_{0}^{t}\frac{\alpha^{2}}{2}\left\|\partial_{x}\left(1- \alpha^{2}\partial_{x}^{2}\right)^{-1}\left((\partial_{x}u)^{2}-(\partial_{x }u_{0})^{2}\right)\right\|_{H^{s}}\mathrm{d}\tau\] \[\lesssim\int_{0}^{t}\left(\left\|u^{2}-u_{0}^{2}\right\|_{H^{s+1 }}+\alpha\left\|(\partial_{x}u)^{2}-(\partial_{x}u_{0})^{2}\right\|_{H^{s}} \right)\mathrm{d}\tau\] \[\lesssim\int_{0}^{t}\left(\|u(\tau)-u_{0}\|_{H^{s-1}}\|u_{0}\|_{H^ {s+1}}+\|u_{0}\|_{H^{s-1}}\|u(\tau)-u_{0}\|_{H^{s+1}}\right)\mathrm{d}\tau\] \[\quad+\int_{0}^{t}\alpha\left(\|u_{0}\|_{H^{s+1}}\|u(\tau)-u_{0} \|_{H^{s}}+\|u(\tau)-u_{0}\|_{H^{s+1}}\right)\mathrm{d}\tau\] \[\lesssim t^{2}\mathbf{F}(\alpha,u_{0}),\] where we have used (2.8)-(2.10) in the last step. Thus, we complete the proof of Proposition 2.1. Before proving Theorem 1.2, we need to construct the initial data \(u_{0}\). Firstly, we introduce smooth, radial cut-off functions to localize the frequency region. 
Precisely, let \(\widehat{\phi}\in C_{0}^{\infty}(\mathbb{R})\) be an even, real-valued and non-negative function on \(\mathbb{R}\) which satisfies \[\widehat{\phi}(\xi)=\left\{\begin{aligned} & 1,&\text{if}\ |\xi|\leq\frac{1}{4},\\ & 0,&\text{if}\ |\xi|\geq\frac{1}{2}.\end{aligned}\right.\] Motivated by [17], we establish the following crucial lemmas which will be used later on. **Lemma 2.1**.: _Let \(s\in\mathbb{R}\). Define the high frequency function \(f_{n}\) and the low frequency function \(g_{n}\) by_ \[f_{n} =2^{-ns}\phi(x)\sin\left(\frac{17}{12}2^{n}x\right),\quad n\gg 1,\] \[g_{n} =2^{-n}\phi(x).\] _Then for any \(\sigma\in\mathbb{R}\), there exists a positive constant \(C=C(\phi)\) such that_ \[\|f_{n}\|_{L^{\infty}}\leq C2^{-ns},\quad\|\partial_{x}f_{n}\|_{L ^{\infty}}\leq C2^{-n(s-1)},\] \[\|f_{n}\|_{H^{\sigma}}\approx 2^{n(\sigma-s)},\quad\|g_{n}\|_{H^{ \sigma}}\approx 2^{-n},\] \[\liminf_{n\to\infty}\|g_{n}\partial_{x}f_{n}\|_{H^{s}}\geq C.\] **Lemma 2.2**.: _Let \(f_{n}\) and \(g_{n}\) be defined as in Lemma 2.1. Assume that \(s\in\mathbb{R}\) and \(\alpha_{n}=2^{-n}\). Then there exists a positive constant \(C=C(\phi)\) such that_ \[\liminf_{n\to\infty}\left\|\alpha_{n}^{2}\partial_{x}^{2}\left(1-\alpha_{n}^{ 2}\partial_{x}^{2}\right)^{-1}\left(g_{n}\partial_{x}f_{n}\right)\right\|_{H^ {s}}\geq C.\] Proof.: By the construction of \(f_{n}\) and \(g_{n}\), one has (for more details see [17]) \[\text{supp}\ \widehat{g_{n}\partial_{x}f_{n}}\subset\left\{\xi\in\mathbb{R}: \ \frac{17}{12}2^{n}-1\leq|\xi|\leq\frac{17}{12}2^{n}+1\right\}.\] By Plancherel's identity, we deduce that \[\left\|\alpha_{n}^{2}\partial_{x}^{2}\left(1-\alpha_{n}^{2}\partial_{x}^{2} \right)^{-1}\left(g_{n}\partial_{x}f_{n}\right)\right\|_{H^{s}}\approx\|g_{n} \partial_{x}f_{n}\|_{H^{s}}.\] Using Lemma 2.1, we complete the proof of Lemma 2.2. Now we begin to prove our main Theorem 1.2. Let \(\alpha_{n}=2^{-n}\) and set the initial data \(u_{0}^{n}=f_{n}+g_{n}\). 
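As a rough numerical illustration of the scalings in Lemma 2.1 (it plays no role in the proof), the Sobolev norm can be evaluated through its Fourier-side definition recalled in the Notation. A Gaussian is used below as a stand-in for \(\phi\), so the frequency localization is only approximate and the output matches the lemma only qualitatively; the grid parameters are arbitrary.

```python
import numpy as np

N, Lx, s = 2**15, 40.0, 2.0
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
dx, dxi = Lx / N, 2.0 * np.pi / Lx
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

def h_norm(f, sigma):
    """Approximate ||f||_{H^sigma} via the Fourier definition of the norm."""
    f_hat = dx * np.abs(np.fft.fft(f))            # |Fourier transform|; the phase is irrelevant
    return np.sqrt(np.sum((1.0 + xi**2) ** sigma * f_hat**2) * dxi)

phi = np.exp(-x**2)                               # stand-in for the cut-off function phi
for n in (6, 8, 10):
    f_n = 2.0 ** (-n * s) * phi * np.sin(17.0 / 12.0 * 2**n * x)
    g_n = 2.0 ** (-n) * phi
    # expected: first column roughly constant in n, second roughly constant, third halves per step
    print(n, h_norm(f_n, s), h_norm(f_n, s + 1) / 2**n, h_norm(g_n, s))
```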
It is easy to show that \[\|f_{n}\|_{H^{s+k}}\lesssim 2^{kn}\quad\text{and}\quad\|g_{n}\|_{H^{s+k}} \lesssim 2^{-n}\quad\text{for}\quad k\in\{-1,0,1,2\},\] which gives directly that \[\|u_{0}^{n}\|_{H^{s+k}}\lesssim 2^{kn}.\] Thus \[\mathbf{F}(0,u_{0}^{n})\lesssim 1\quad\text{and}\quad\mathbf{F}( \alpha_{n},u_{0}^{n})\lesssim 1.\] We decompose the solution \(\mathbf{S}_{t}^{\alpha}(u_{0})\) to (1.4) and the solution \(\mathbf{S}_{t}^{0}(u_{0})\) to (1.5) into three parts, respectively \[\mathbf{S}_{t}^{\alpha_{n}}(u_{0}^{n})=u_{0}^{n}+\underbrace{ \mathbf{S}_{t}^{\alpha_{n}}(u_{0}^{n})-u_{0}^{n}-t\mathbf{E}(\alpha_{n},u_{0}^{n}) }_{=:\,\mathbf{I}_{1}}+t\mathbf{E}(\alpha_{n},u_{0}^{n}),\] \[\mathbf{S}_{t}^{0}(u_{0}^{n})=u_{0}^{n}+\underbrace{\mathbf{S}_{t }^{0}(u_{0}^{n})-u_{0}^{n}-t\mathbf{E}(0,u_{0}^{n})}_{=:\,\mathbf{I}_{2}}+t \mathbf{E}(0,u_{0}^{n})\quad\text{and}\] \[\mathbf{E}(\alpha_{n},u_{0}^{n})-\mathbf{E}(0,u_{0}^{n})=-2 \alpha_{n}^{2}\partial_{x}^{2}\left(1-\alpha_{n}^{2}\partial_{x}^{2}\right)^ {-1}(u_{0}^{n}\partial_{x}u_{0}^{n})-\frac{\alpha_{n}^{2}}{2}\partial_{x}\left( 1-\alpha_{n}^{2}\partial_{x}^{2}\right)^{-1}(\partial_{x}u_{0}^{n})^{2}.\] Furthermore, noticing that \(u_{0}^{n}\partial_{x}u_{0}^{n}=g_{n}\partial_{x}f_{n}+f_{n}\partial_{x}f_{n}+ u_{0}^{n}\partial_{x}g_{n}\), by Proposition 2.1 we deduce that \[\left\|\mathbf{S}_{t}^{\alpha_{n}}(u_{0}^{n})-\mathbf{S}_{t}^{0}( u_{0}^{n})\right\|_{H^{s}}\geq\ t\left\|\mathbf{E}(\alpha_{n},u_{0}^{n})-\mathbf{E}(0,u_{0}^{ n})\right\|_{H^{s}}-\left\|\mathbf{I}_{1}\right\|_{H^{s}}-\left\|\mathbf{I}_{2} \right\|_{H^{s}}\] \[\geq t\left(2\left\|\alpha_{n}^{2}\partial_{x}^{2}\left(1-\alpha_{n}^ {2}\partial_{x}^{2}\right)^{-1}(u_{0}^{n}\partial_{x}u_{0}^{n})\right\|_{H^{s} }-\frac{\alpha_{n}^{2}}{2}\left\|\partial_{x}\left(1-\alpha_{n}^{2}\partial_{ x}^{2}\right)^{-1}(\partial_{x}u_{0}^{n})^{2}\right\|_{H^{s}}\right)-Ct^{2}\] \[\gtrsim t\left(\left\|\alpha_{n}^{2}\partial_{x}^{2}\left(1-\alpha_{n }^{2}\partial_{x}^{2}\right)^{-1}(g_{n}\partial_{x}f_{n})\right\|_{H^{s}}- \left\|f_{n}\partial_{x}f_{n}\right\|_{H^{s}}-\left\|u_{0}^{n}\partial_{x}g_{ n}\right\|_{H^{s}}-2^{-n}\left\|(\partial_{x}u_{0}^{n})^{2}\right\|_{H^{s}} \right)-t^{2}. \tag{2.11}\] Using Lemma 1.1 and Lemma 2.1, after a simple calculation, we obtain \[\left\|f_{n}\partial_{x}f_{n}\right\|_{H^{s}}\lesssim\left\|f_{n }\right\|_{L^{\infty}}\left\|f_{n}\right\|_{H^{s+1}}\lesssim 2^{-n(s-1)},\] \[\left\|u_{0}^{n}\partial_{x}g_{n}\right\|_{H^{s}}\lesssim\left\|u_ {0}^{n}\right\|_{H^{s}}\|g_{n}\|_{H^{s+1}}\lesssim 2^{-n},\] \[\left\|(\partial_{x}u_{0}^{n})^{2}\right\|_{H^{s}}\lesssim\| \partial_{x}u_{0}^{n}\|_{L^{\infty}}\|u_{0}^{n}\|_{H^{s+1}}\lesssim 2^{n}(2^{-n}+2^{-n(s-1)}) \lesssim 1+2^{-n(s-2)}.\] Plugging the above estimates into (2.11) and combining with Lemma 2.2 yields that \[\liminf_{n\to\infty}\left\|\mathbf{S}_{t}^{\alpha_{n}}(u_{0}^{n})- \mathbf{S}_{t}^{0}(u_{0}^{n})\right\|_{H^{s}}\gtrsim t\quad\text{for $t$ small enough}.\] This completes the proof of Theorem 1.2. ## Acknowledgements J. Li is supported by the National Natural Science Foundation of China (12161004), the Training Program for Academic and Technical Leaders of Major Disciplines in Jiangxi Province (20232BCJ23009) and the Jiangxi Provincial Natural Science Foundation (20224BAB201008). Y. Yu is supported by the National Natural Science Foundation of China (12101011). W. 
Zhu is supported by the National Natural Science Foundation of China (12201118) and Guangdong Basic and Applied Basic Research Foundation (2021A1515111018). ## Declarations **Data Availability** No data was used for the research described in the article. **Conflict of interest** The authors declare that they have no conflict of interest.
2307.12326
Scale jump-aware pose graph relaxation for monocular SLAM with re-initializations
Pose graph relaxation has become an indispensable addition to SLAM enabling efficient global registration of sensor reference frames under the objective of satisfying pair-wise relative transformation constraints. The latter may be given by incremental motion estimation or global place recognition. While the latter case enables loop closures and drift compensation, care has to be taken in the monocular case in which local estimates of structure and displacements can differ from reality not just in terms of noise, but also in terms of a scale factor. Owing to the accumulation of scale propagation errors, this scale factor is drifting over time, hence scale-drift aware pose graph relaxation has been introduced. We extend this idea to cases in which the relative scale between subsequent sensor frames is unknown, a situation that can easily occur if monocular SLAM enters re-initialization and no reliable overlap between successive local maps can be identified. The approach is realized by a hybrid pose graph formulation that combines the regular similarity consistency terms with novel, scale-blind constraints. We apply the technique to the practically relevant case of small indoor service robots capable of effectuating purely rotational displacements, a condition that can easily cause tracking failures. We demonstrate that globally consistent trajectories can be recovered even if multiple re-initializations occur along the loop, and present an in-depth study of success and failure cases.
Runze Yuan, Ran Cheng, Lige Liu, Tao Sun, Laurent Kneip
2023-07-23T13:48:50Z
http://arxiv.org/abs/2307.12326v1
# Scale jump-aware pose graph relaxation for monocular SLAM with re-initializations

###### Abstract Pose graph relaxation has become an indispensable addition to SLAM enabling efficient global registration of sensor reference frames under the objective of satisfying pair-wise relative transformation constraints. The latter may be given by incremental motion estimation or global place recognition. While the latter case enables loop closures and drift compensation, care has to be taken in the monocular case in which local estimates of structure and displacements can differ from reality not just in terms of noise, but also in terms of a scale factor. Owing to the accumulation of scale propagation errors, this scale factor is drifting over time, hence scale-drift aware pose graph relaxation has been introduced. We extend this idea to cases in which the relative scale between subsequent sensor frames is unknown, a situation that can easily occur if monocular SLAM enters re-initialization and no reliable overlap between successive local maps can be identified. The approach is realized by a hybrid pose graph formulation that combines the regular similarity consistency terms with novel, scale-blind constraints. We apply the technique to the practically relevant case of small indoor service robots capable of effectuating purely rotational displacements, a condition that can easily cause tracking failures. We demonstrate that globally consistent trajectories can be recovered even if multiple re-initializations occur along the loop, and present an in-depth study of success and failure cases. ## I Introduction Exteroceptive sensor-based Simultaneous Localization And Mapping (SLAM) is a fundamental solution required in many robotics and intelligence augmentation scenarios. In comparison to its alternatives, the basic case of monocular visual SLAM is often considered a highly attractive solution owing to low cost, low weight, low energy demands, and low extrinsic calibration or sensor fusion requirements. However, using only a single camera often also poses an intricate challenge, which is the scale invariance of its geometry. The present paper focuses on the important SLAM back-end optimization problem of pose graph relaxation with a particular view on its ability to handle the scale invariance occurring in the monocular setup. Pose graph relaxation is a fundamental sub-problem of SLAM that aims at estimating a set of (often sequentially captured) sensor poses relative to a common, global coordinate system. The problem can be represented by a graph where the nodes correspond to the optimized sensor poses, and the edges represent measured relative positions and orientations between pairs of sensor frames. The goal of pose graph optimization is to find the optimal set of sensor poses that will minimize the overall discrepancy with the measured relative poses. Pose graph relaxation is often referred to as pose averaging as the measured relative poses may be inaccurate and lead to cycle inconsistencies. Hence, there typically are no absolute sensor poses that will simultaneously satisfy all relative pose constraints, and errors will have to be distributed over loops. Applying pose graph optimization in the back-end of monocular visual SLAM requires special attention to the problem of scale invariance. 
The issue of a global scale invariance is easily addressed by simply estimating the euclidean sensor poses in the scaled domain, and--if the scale factor eventually becomes known--simply rescale the estimated sensor positions at the end of the optimization. However, the problem is that the scale factor is not globally consistent. Rather, it is a local variable and--owing to small error accumulations in each relative sensor pose estimation--subject to slow variation over time, a problem commonly referred to as _scale drift_. Although the small scale change between successive frames is the result of estimation errors and thus cannot be determined (i.e. the relative scale can only be assumed to be one), scale change can be determined if a known place is revisited, an event known as a _loop closure_. As introduced by Strasdat et al. [1], expressing both absolute and relative sensor transformations as similarities therefore leads to scale drift-aware pose graph optimization. Fig. 1: Illustration of the scenario addressed in this work. An agent equipped with a single monocular camera (e.g. a robotic vacuum cleaner) is traversing a room. Owing to the regular execution of purely rotational displacements, the monocular SLAM algorithm frequently loses tracking and needs to re-initialize. The result is a piece-wise scale-consistent trajectory. Our contribution is a hybrid pose graph optimization frame-work with the ability to reconcile a globally consistent trajectory. While certainly an advancement, pose graph relaxation using similarity transformations still suffers from an important drawback, which is that the relative scale change between neighboring sensor frames either needs to be small to the point of being ignorable or otherwise measurable. However, in monocular SLAM there is an important situation in which neither of the two is true. If a tracking failure occurs, the local map may easily leave the field of view of the camera before a successful re-initialization occurred. A possible lack of overlap between the current and the previous local map then makes it impossible to reconcile the in principle arbitrary relative scale between the corresponding sensor frames. Rather than just being an occasional error, the discussed problem occurs systematically on dual-drive platforms that are equipped with a single perspective camera and occasionally exhibit purely rotational displacements (e.g. a robotic vacuum cleaner [cf. Figure 1]). We make the following contributions: * We demonstrate that the simple solution of setting the relative scale to one and applying scale drift-aware relaxation can lead to severe distortions in the case of scale jumps caused by re-initializations. * We introduce a novel hybrid pose graph optimization framework that admits scale-ignorant factors along re-initialization edges, and demonstrate how this seemingly simple change leads to scale reconciliation across loopy graphs and distortion-free estimates even if facing multiple tracking failures along a loop. * We present a complete discussion on the number and spatial configuration of failure edges that can be tolerated, and reveal critical configurations in which a globally consistent estimate (up to a single global scale factor) can no longer be obtained. Our paper is organized as follows. Section II reviews the related literature on pose graph optimization. Section III reviews both traditional loop closure as well as our modified, Hybrid Pose Graph Optimization (HPGO). 
Section IV finally presents the experimental validation of our claims using both simulated and real datasets. ## II Related Work Pose graph optimization is a fundamental SLAM alternative to incremental filtering [2] and batch optimization over both poses and landmarks [3]. Also known as graph SLAM, it limits the estimation to the sensor frame poses, thereby achieving efficient large-scale registration capabilities. Furthermore, pose graph optimization is more easily amenable to scenarios in which local constraints do not adhere to simple noise models (e.g. Gaussian distributions) and motion models may not be given. The first approaches presented in the literature are restricted to the planar case and perform least-squares optimization using LU-decomposition [4]. Gutmann and Konolidge [5] later on apply the technique as a back-end module to reconcile front-end estimations coming from both incremental registration and loop closure detections. Later contributions mainly focus on the efficiency of the pose graph optimization by employing relaxations [6, 7, 8] and local updating [6, 7, 8, 9]. The seminal contribution of Olson et al. [9] leads to a significant improvement of optimization robustness and efficiency using tree-based updates [10] and stochastic gradient descent. Initial extensions to full 6 DoF pose optimization focus on handling the non-commutative nature of Euler angle parametrizations. Howard et al. [11] bypass the problem by assuming that roll and pitch are accurately measured, while Nuchter et al. [12] assume that these angles are affected by only small errors. Triebel et al. [13] in turn parametrize the environment as an ensemble of multiple horizontal surfaces located at different heights. The most notable seminal contributions in 6 DoF pose graph optimization have been made by Grisetti et al. [14, 15, 16], who extend the efficient tree-based stochastic optimization method of Olson et al. [9] and further push computational efficiency. Grisetti et al. [17] later on present yet another seminal extension of the technique, which enables hierarchical pose graph optimization. Around the same time, Koichi et al. [18] present the addition of unary factors to include GPS constraints into the optimization. Sunderhauf and Protzel [19] and Wang and Olson [20] contribute towards the robustness of pose graph optimization. A comprehensive tutorial on pose graph optimization is presented by Grisetti et al. [17], and a popular software framework containing many of the advancements listed here is given by _g2o_[21]. It is worth noting that both batch least-squares optimization methods [3] as well tree-based estimation techniques [22, 23] have also been introduced for joint optimization over poses and structure. Furthermore, the topics of graph-based optimization have also been investigated in the computer vision community (e.g. robust rotation averaging [24, 25, 26], robust translation averaging [27], and pose synchronization [28, 29]). Our work is a direct extension of the work by Strasdat et al. [1], who explicitly consider the scale invariant nature of single camera geometry. However, in their work they assume that the relative scale between connected nodes in the graph is always a measured, Gaussian-distributed variable. The scale invariant nature is also addressed by Pinies et al. [30], who even discuss the partitioning of the graph into several conditionally independent sub-graphs. 
However, rather than explicitly handling the unknown scale of each sub-graph, they assume that the latter are approximately scaled via the addition of absolute velocity readings during the initialization phase of monocular SLAM. To the best of our knowledge, our work is the first to directly reconcile sub-graphs of arbitrary relative scale through a joint, hybrid, scale jump-aware graph optimization framework. ## III Theory We will start by reviewing the traditional formulation of pose graph optimization, its solution, as well its scale-drift-aware adaptation using similarity transformations. Next, we will present our modified hybrid pose graph formulation which explicitly takes into account the existence of unknown scale jumps. To conclude, we will present a discussion on degenerate cases and a method to identify whether or not a single, globally consistent scale factor can be reconciled. ### _Brief review of scale drift-aware pose graph optimization_ The problem of pose graph optimization can be abstracted as follows. Let \(\tilde{\mathbf{T}}_{ij}\) be the measured relative transformation between two nearby sensor frames \(i\) and \(j\). It is a Euclidean transformation and can be used to linearly map points in homogeneous representation from sensor frame \(j\) to \(i\), hence \[\begin{bmatrix}^{i}\mathbf{p}^{T}&1\end{bmatrix}^{T} = \tilde{\mathbf{T}}_{ij}\begin{bmatrix}^{j}\mathbf{p}^{T}&1\end{bmatrix} ^{T}\] \[= \begin{bmatrix}\tilde{\mathbf{R}}_{ij}&\tilde{\mathbf{t}}_{ij}\\ \mathbf{0}&1\end{bmatrix}^{j}\mathbf{p}^{T}\quad 1\end{bmatrix}^{T}.\] The estimated variables in pose graph optimization are given by the absolute sensor frame poses \(\mathbf{T}_{i}\) which are defined such that they linearly map points from a sensor frame to a global reference frame, that is \[\begin{bmatrix}^{\mathcal{W}}\mathbf{p}^{T}&1\end{bmatrix}^{T} = \mathbf{T}_{i}\begin{bmatrix}^{i}\mathbf{p}^{T}&1\end{bmatrix}^{T}\] \[= \begin{bmatrix}\mathbf{R}_{i}&\mathbf{t}_{i}\\ \mathbf{0}&1\end{bmatrix}^{i}\begin{bmatrix}\mathbf{p}^{T}&1\end{bmatrix}^{T}.\] The goal of pose graph optimization consists of estimating absolute sensor frame poses \(\mathbf{T}_{i}\) such that their total discrepancy with respect to each measured relative pose is minimized. Formally, the objective is given by \[\operatorname*{argmin}_{\boldsymbol{\theta}_{1},\ldots,\boldsymbol{\theta}_{ N}}\sum_{k=1}^{M}\Big{\|}\mathrm{t2v}\left(\left(\mathbf{T}\left(\boldsymbol{ \theta}_{jk}\right)\right)^{-1}\mathbf{T}\left(\boldsymbol{\theta}_{ik}\right) \tilde{\mathbf{T}}_{ikj_{k}}\right)\Big{\|}^{2}\,, \tag{3}\] which minimizes the sum of squared deviations from identity transformation of the concatenations of each measured relative pose \(\tilde{\mathbf{T}}_{ikj_{k}},k=1,\ldots,M\) and the two corresponding, optimized absolute poses. The latter are represented minimally by the 6-vectors \(\boldsymbol{\theta}=\begin{bmatrix}\mathbf{t}^{T}&\boldsymbol{\phi}^{T}\end{bmatrix} ^{T}\) containing the translation \(\mathbf{t}\) and the Rodriguez vector \(\boldsymbol{\phi}\). \(\mathbf{T}(\boldsymbol{\theta})\) uses the Riemannian exponential map to obtain the corresponding rotation matrix, and \(\mathrm{t2v}(\mathbf{T})\) makes use of the Riemannian logarithmic map to again go back to minimal representation. It is intuitively clear that when the absolute poses are consistent with the relative pose measurements they will form transformation cycles and the corresponding minimal vector will be zero. 
Any inconsistencies will result in a non-zero vector, and thus contribute to the overall energy. In practice, we will often minimize the robust, covariance-reweighted alternative \[\operatorname*{argmin}_{\boldsymbol{\theta}_{1},\ldots,\boldsymbol{\theta}_ {N}}\sum_{k=1}^{M}\rho\left(\mathbf{r}_{1}(\boldsymbol{\theta}_{i_{k}}, \boldsymbol{\theta}_{j_{k}})^{T}\tilde{\boldsymbol{\Omega}}_{i_{k},j_{k}} \mathbf{r}_{1}(\boldsymbol{\theta}_{i_{k}},\boldsymbol{\theta}_{j_{k}})\right), \tag{4}\] where \(\mathbf{r}_{1}(\boldsymbol{\theta}_{i_{k}},\boldsymbol{\theta}_{j_{k}})= \mathrm{t2v}\left(\left(\mathbf{T}\left(\boldsymbol{\theta}_{i_{k}}\right) \right)^{-1}\mathbf{T}\left(\boldsymbol{\theta}_{j_{k}}\right)\right)- \tilde{\boldsymbol{\theta}}_{ikj_{k}}\) is the relative pose consistency term, \(\tilde{\boldsymbol{\theta}}_{ikj_{k}}\) is the minimal representation of the measured relative pose, \(\tilde{\boldsymbol{\Omega}}_{ik,j_{k}}\) the corresponding information matrix, and \(\rho(\cdot)\) a robust cost function (e.g. Huber-loss) that will down-weight the influence of outliers. In case of global estimation, the iterative updates are finally calculated using sparse Cholesky decomposition. In the monocular case, the initial relative pose estimate is given from direct frame-to-frame relative pose estimation, which is scale invariant. In sub-sequent frames, the algorithm then performs local map tracking, thus aiming at scale propagation. In an ideal scenario, the estimated relative poses therefore all differ from reality by 1) the presence of noise, and 2) a globally-consistent scale factor. We could simply apply the above pose-graph optimization formulation, which would find absolute sensor frame poses that would all be affected by the same global scale parameter. However, just like other parts of the algorithm, the scale propagation mechanism is affected by measurement uncertainties, thus causing the scale to drift over time and indeed turn into a local variable. For this reason, a naive application of the above pose graph optimization objective will lead to sub-optimal results. In order to take into account the local nature of the scale factor, Strasdat et al. [1] have proposed scale-aware pose-graph optimization based on similarity transformations. The absolute pose is thereby represented as \[\mathbf{T}_{i}=\begin{bmatrix}e^{s_{i}}\mathbf{R}_{i}&\mathbf{t}_{i}\\ \mathbf{0}&1\end{bmatrix}, \tag{5}\] and the minimal vectors used inside the pose graph optimizer are now given by the 7-vectors \(\boldsymbol{\theta}=\begin{bmatrix}\mathbf{t}^{T}&\boldsymbol{\phi}^{T}&s \end{bmatrix}^{T}\). The estimated relative transformations become \[\left(\mathbf{T}\left(\boldsymbol{\theta}_{ik}\right)\right)^{-1 }\mathbf{T}\left(\boldsymbol{\theta}_{jk}\right)= \tag{6}\] \[\begin{bmatrix}e^{s_{j}-s_{i}}\mathbf{R}(\boldsymbol{\phi}_{i_{k}} )^{T}\mathbf{R}(\boldsymbol{\phi}_{j_{k}})&e^{-s_{i}}\mathbf{R}(\boldsymbol{ \phi}_{i_{k}})^{T}(\mathbf{t}_{j}-\mathbf{t}_{i})\\ \boldsymbol{0}&1\end{bmatrix},\] and a few interesting facts can be noted: * The hypothesized relative poses are similarity transformations. * The scale of neighboring sensor frames is often similar. As required, the scale of the hypothesized relative pose becomes 1 in that situation (\(s_{i}\approx s_{j}\Rightarrow e^{s_{j}-s_{i}}\approx 1\)). * In a loop closure situation, the scale difference could potentially be very large. Again, this is properly reflected (\(s_{i}\neq s_{j}\Rightarrow e^{s_{j}-s_{i}}\neq 1\)). 
* The optimized relative translation is properly scaled in order to enable comparison against the possibly scaled relative translation measurements. ### _Scale-jump aware hybrid pose graph optimization_ While adding the scale as an optimization variable helps to account for scale variations, its potential is nonetheless limited to situations in which the measured relative scale is known up to noise. During regular tracking, the relative scale is simply assumed to be 1, and hence the formulation only supports the estimation of relatively slow scale changes over time (i.e. drift). During loop closure, the relative scale is calculated by, for example, generalized Procrustes alignment. In both cases, the relative scale is represented as a Gaussian variable. In practice, we may easily face situations in which the relative scale factor can no longer be reflected by a Gaussian estimate with mean and standard deviation. A simple example is given when the algorithm loses the local map and needs to reinitialize. The scale after initialization is in principle arbitrary, and hence the relative scale becomes a uniformly distributed variable. In order to take into account the possible existence of edges with unknown relative scale, we propose an alternative Hybrid Pose Graph Optimization (HPGO) that makes use of the modified residual term \[\mathbf{r}_{2}(\boldsymbol{\theta}_{i_{k}},\boldsymbol{\theta}_{j_{k}})=\left[\mathrm{t2v}\left(\left(\mathbf{T}\left(\boldsymbol{\theta}_{i_{k}}\right)\right)^{-1}\mathbf{T}\left(\boldsymbol{\theta}_{j_{k}}\right)\right)-\tilde{\boldsymbol{\theta}}_{i_{k}j_{k}}\right]\begin{bmatrix}\mathbf{I}_{6\times 6}&\mathbf{0}\\ \mathbf{0}^{T}&0\end{bmatrix} \tag{7}\] along edges where the relative scale factor is unknown. Note that, rather than introducing an alternative noise model, the term simply ignores errors in the relative scale, thus permitting arbitrary scale changes to occur. The complete objective of HPGO is finally given by \[\operatorname*{argmin}_{\boldsymbol{\theta}_{1},\ldots,\boldsymbol{\theta}_{N }}\sum_{k=1}^{M}\rho\left(\mathbf{r}_{d_{k}}(\boldsymbol{\theta}_{i_{k}}, \boldsymbol{\theta}_{j_{k}})^{T}\mathbf{\tilde{\Omega}}_{i_{k},j_{k}}\mathbf{ r}_{d_{k}}(\boldsymbol{\theta}_{i_{k}},\boldsymbol{\theta}_{j_{k}})\right), \tag{8}\] where * \(d_{k}=1\) if the \(k\)-th edge is a regular edge along which a Gaussian estimate for the relative scale exists, and * \(d_{k}=2\) otherwise. Note that there may well be situations in which the relative scale is unknown while other parameters are still given. Let us consider the example of a platform that is able to execute purely rotational displacements (e.g. a robotic vacuum cleaner). It is easily possible that the local map is pushed out of the current field of view. While the relative transformation (i.e. rotation) remains measurable, the eventual re-initialization will possibly lead to a jump in the local scale factor. Interestingly, if a loop closure occurs, it is still possible to reconcile a globally consistent graph for which the individual segments after each algorithm re-initialization remain consistently scaled. In the following, we will analyze the situations in which a globally consistent trajectory can still be recovered. ### _Feasibility study_ In order to explain why and when a globally consistent scale can be reconciled, let us consider a geometric graph abstraction. 
We identify the nodes which are located at the end of edges for which the relative scale is unknown, and denote them _critical nodes_. We record the true location of all critical nodes in a Euclidean space, and create pairwise connections between nodes if they are directly connected along the actual trajectory (i.e. without passing through other critical nodes). The obtained geometric graph abstraction is a construct of bars each one representing one part of the entire trajectory that--owing to known relative scale measurements--is consistently scaled after optimization. The above question may now be addressed as follows: _Besides a trivial globally consistent scaling of the entire bar construct, are there any subsets of bars that can be scaled individually without requiring a change of the relative angles between the bars?_ The requirement that the relative orientation of the bars needs to remain unchanged stems from the fact that indeed the orientation of the platform at the end of a trajectory segment must remain consistent with the orientation at the beginning of the next segment. Figure 2 illustrates a few basic cases that will serve to study the feasibility of global scale reconciliation. First, it is intuitively clear that a globally consistent scale can only be obtained if the graph contains at least one loop. For critical nodes that are not part of a loop, some trajectory segments will retain undetermined scale. Loopy cases are as follows: * Figure 2 (a): If there is only one critical node in the loop, the bar construct will degenerate. Traveling along the loop will always return to the same critical node, hence the latter is connected to itself by a bar with zero length. As a result, there is only one scale factor, and global consistency is always achievable. Fig. 2: Visualization of possible loopy graphs with tracking failures (i.e. critical nodes). (a), (b) and (c) show loops along which only one, two, or three tracking failures occur. (d) illustrates the special case in which the Euclidean position of the three critical nodes are aligned. (e) shows a planar scenario with 4 critical nodes. (f) illustrates how additional bars can reinstall global scale consistency. (g) indicates a 3D arrangement of 4 non-planar critical nodes. All configurations except (d) and (e) lead to global scale consistency. For (d) and (e), an-isotropic scaling is possible. See text for details. * Figure 2 (b): If there are two nodes in the graph abstraction, we would have two possible trajectory segments connecting the two nodes, and the respective bars would simply be the same. As they would always have equal scale, a loop with two tracking failures would always lead to reconcilable global consistency. * Figure 2 (c): For three critical nodes, we generally obtain a triangular construct of bars. It is clear that the requirement of unchanged angles means that we would require similar triangles, and all bars would again have to be isotropically scaled. A degenerate case is given if the triangle collapses and all bars become parallel (cf. Figure 2 (d)). In the latter case, it would become possible to apply different scales to the three segments without affecting the angular consistency requirement. * Figure 2 (e): For a single loop with four critical nodes, a single scale can no longer be reconciled in the planar case. A simple example is given by a rectangular arrangement of the bars, which would enable independent scaling of parallel bars. 
A similar situation occurs however for an arbitrary quadrilateral arrangement. * Figure 2 (f): Things change as soon as more connections are added to the graph. Compared to the previous case, if a diametrical connection is added, an-isotropic scaling would again violate the angle preservation requirement. A general rule can be stated as follows: If every critical node is part of at least one non-degenerate triangle, global scale consistency can be reconciled. * Figure 2 (g): It is important to note that all previous scenarios only consider the planar case. In the 3D case, for a non-planar arrangement of 4 nodes, global scale consistency can always be achieved. However, if there is at least one loop with more than four critical nodes, globally consistent scale can no longer be reconciled (irrespectively of the dimensionality of the problem). From a mathematical perspective, the possibility of a global scale reconciliation may be analyzed as follows. Let there be \(B\) bars in the graph, and let \(\mathbf{p}_{1},\ldots,\mathbf{p}_{N}\) be the absolute positions of the critical nodes. Let the starting and ending nodes of the \(k\)-th bar be given by \(\{\mathbf{p}_{s_{k}},\mathbf{p}_{e_{k}}\}\). If no relative angles between bars are allowed to change, it indeed means that the absolute orientation of each bar needs to remain unchanged. Let \(\mathbf{v}_{1},\ldots,\mathbf{v}_{B}\) be vectors representing the original globally registered bars. In order for the angular constraint to be satisfied, we need the end points of each bar to remain a scaled versions of the original bar vector, i.e. \[(\mathbf{p}_{e_{k}}-\mathbf{p}_{s_{k}})-\lambda_{k}\mathbf{v}_{k}=\mathbf{0}. \tag{9}\] Stacking all \(B\) constraints in a large matrix, we obtain \[\begin{bmatrix}\mathcal{I}_{3\times 3N}(s_{1},e_{1})&\mathbf{v}_{1}&&\\ \cdot&&\cdot&&\\ \cdot&&&&\cdot&\\ \cdot&&&&\cdot&\\ \mathcal{I}_{3\times 3N}(s_{B},e_{B})\\ \left[\mathbf{I}_{3\times 3}\ \mathbf{0}_{3\times 3(N-1)}\right]&\mathbf{0}& \cdot&\cdot&\cdot&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{p}_{1}\\ \cdot\\ \cdot\\ \mathbf{p}_{N}\\ \lambda_{1}\\ \cdot\\ \cdot\\ \lambda_{B}\end{bmatrix}=\mathbf{A}\mathbf{x}=\mathbf{0}, \tag{10}\] where \(\mathcal{I}_{3\times 3N}(s_{1},e_{1})\) is a horizontal concatenation of \(N\) matrices that are all \(3\times 3\) zero matrices except for the \(e_{k}\)-th matrix which is \(\mathbf{I}\) and the \(s_{k}\)-th matrix which is \(-\mathbf{I}\). Note that the last row is added in order to remove positional gauge freedom. The requirement of at most a single, global scale consistency can now easily be analysed by checking the rank of \(\mathbf{A}\). If \(\mathbf{A}\) has a single null-space vector, a globally consistent scale can be reconciled. However, if \(\mathbf{A}\) has a rank deficiency larger than 1, a global scale may no longer be reconciled. Note that in practice, rotational drift will make it impossible to figure out globally oriented bars before the actual pose graph optimization has finished, hence the overall graph consistency can only be analyzed retrospectively. ## IV Experiments We now proceed to the experimental evaluation of our novel hybrid pose graph optimization and demonstrate the validity of our findings on both simulated and real data. In both cases, the un-optimized input to the optimizer is given in the form of concatenated trajectory segments that exhibit scale jumps between successive segments. 
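To make the retrospective consistency check described above concrete, the following minimal sketch (an illustration only, not the authors' implementation) assembles the constraint matrix of (9)-(10) for a given set of critical nodes and bars and counts its null-space dimension; a dimension of one corresponds to a single reconcilable global scale, while larger values indicate that an-isotropic scaling remains possible.

```python
import numpy as np

def build_A(bars, vecs, n_nodes):
    """Constraint matrix of (10): rows enforce p_e - p_s - lambda_k v_k = 0, plus a gauge p_1 = 0."""
    B = len(bars)
    A = np.zeros((3 * B + 3, 3 * n_nodes + B))
    for k, ((s, e), v) in enumerate(zip(bars, vecs)):
        A[3 * k:3 * k + 3, 3 * e:3 * e + 3] = np.eye(3)        # + p_e
        A[3 * k:3 * k + 3, 3 * s:3 * s + 3] = -np.eye(3)       # - p_s
        A[3 * k:3 * k + 3, 3 * n_nodes + k] = -np.asarray(v)   # - lambda_k v_k
    A[3 * B:3 * B + 3, 0:3] = np.eye(3)                        # remove positional gauge freedom
    return A

def nullspace_dim(A, tol=1e-9):
    sv = np.linalg.svd(A, compute_uv=False)
    return A.shape[1] - int(np.sum(sv > tol * sv[0]))

# square loop with four critical nodes (cf. Fig. 2 (e))
pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
bars = [(0, 1), (1, 2), (2, 3), (3, 0)]
vecs = [pts[e] - pts[s] for s, e in bars]
print(nullspace_dim(build_A(bars, vecs, 4)))   # > 1: parallel bars can be scaled independently

# adding a diametrical bar (cf. Fig. 2 (f)) reinstalls global scale consistency
bars.append((0, 2))
vecs.append(pts[2] - pts[0])
print(nullspace_dim(build_A(bars, vecs, 4)))   # 1: only the single global scale remains free
```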
Note that in the case of real world data, we actually adopt an existing state-of-the-art open-source SLAM framework [31]. However, the standard framework does not concatenate successive trajectory segments but--upon tracking and re-localization failure--always initializes new maps from the same global reference frame (often set to identity rotation and zero translation). Hence, it would fail to properly initialize our hybrid pose graph optimizer. Throughout our experiments, we compare the following results: * NO-PGO: Simple concatenation of successive trajectory segments, but no loop closure and pose graph optimization. * SPGO: Scale-drift aware pose graph optimization. * HPGO: Scale-drift and scale-jump aware pose graph optimization. * GT: Ground truth obtained from external motion tracking system or wheel odometry (for qualitative comparison only). Given that in the monocular case an overall scale ambiguity will always remain, the final trajectories are aligned by SIM3 alignment using Umeyama's method [32] applied to the first 20 keyframes of each trajectory. We use the _evo_ odometry evaluation tool [33] to perform the alignment and compare the results obtained by the different methods. ### _Analysis on simulated data_ In order to validate the cases in which global scale consistency can be re-installed, we start by examining simulated planar trajectories during which the camera alternates between pure forward motion along the principal axis and pure rotations about the vertical \(y\)-axis, the idea being that the robot loses track of the local map when it rotates and the scale jumps after re-initialization. In the first experiment, the camera exhibits four equally long segments and rotates 120 degrees between successive segments. Each scale-consistent trajectory segment is composed of 20 key frames, and the assigned scales to each segment are 1, 0.5, 2, and 0.3. As a result of the rotations, the real trajectory admits a triangular shape and the final segment overlaps with the first one, thus simulating a scenario in which a loop closure occurs. The assumption therefore is that the relative scale between segments one and two, two and three, and three and four are unknown, while the relative scale between the last and the first segment is measured. As indicated in Figure 3 (a), the simple concatenation of the scaled segments leads to an obviously wrong result. SPGO on the other hand only models potential drift in the scale variable, but not discontinuous behavior. SPGO thus attempts to distribute the scale error uniformly over the whole graph, which is why the optimization result still presents severe distortions. As expected, our proposed HPGO framework recovers the exact ground truth trajectory. In order to verify the importance of the number of critical nodes along the loop, in a next experiment we augment the number of segments to 5 and reduce the inter-segment rotation angles to 90 degrees. The real trajectory therefore becomes a square, and it is now the fifth segment that overlaps with the first one. The assigned scales are 1, 0.5, 2, 0.2, and 0.8. As can be observed in Figure 3 (b), the simple concatenation of the scaled estimates again leads to severe drift, and SPGO is again unable to recover a correct, distortion-free result. Though better, our proposed HPGO this time also fails to recover a distortion-free result, and returns a rectangular trajectory. 
To confirm the importance of the spatial distribution of critical nodes rather than the exact shape of the trajectory, a third experiment was conducted. The camera moved along a flat circular path, with the principal axis tangential to the trajectory. In a first try we let the camera move along four thirds of a circle, again assuming critical nodes between segments one and two, two and three, and three and four. The fourth segment is again overlapping with the first, thus simulating the loop closure. As indicated in Figure 3 (c), the result is again similar to the case of the triangular trajectory, and HPGO is the only method able to recover an undistorted, globally consistent trajectory. On the other hand, as depicted in Figure 3 (d), if the camera exhibits five quarters of a circle, and the number of critical nodes again augments to four, the final result remains distorted. Note that numerical results for all simulations are indicated in Table I.

Fig. 3: Simulation results obtained for differently shaped planar trajectories. From left to right, each triplet indicates the original un-optimized trajectory, the result obtained by scale-drift aware pose graph optimization (SPGO), and the result obtained by our hybrid optimization framework (HPGO). The color reflects the time of each frame (light colors first, dark colors last). Critical nodes are indicated in pink.

### _Validation on real data_ In our next experiment, we apply our proposed method to an existing monocular SLAM system: ORB-SLAM3 [31]. The latter is a complete, modern system capable of handling tracking failures and multiple maps using Atlas [34]. To make our method work, we must adapt the bootstrapping and loop closure modules. The new bootstrapping module is designed for small dual-drive platforms capable of pure rotational displacements. First, we disable relocalization attempts after a tracking failure, and immediately attempt re-initialization. We also detect pure rotations, and only bootstrap the estimation if sufficient displacement occurs. We furthermore use 1-point Ransac [35] to robustly initialize the algorithm even if only small displacements occur. After re-initialization, the edge connecting the last frame of the previous segment and the first frame of the new segment has pure rotation and unknown scale. Finally, the loop closure module is modified to use HPGO, and fragmented maps are merged into a complete and scale-consistent map after optimization. We apply our HPGO-based ORB-SLAM3 system to real-world data captured by the Kobuki robot. The robot is capable of executing forward motion and pure rotations. Images are captured at 30 FPS using an onboard RGB camera with a resolution of \(640\times 480\) pix, a horizontal FoV of 57\({}^{\circ}\), and a vertical FoV of 43\({}^{\circ}\). We use the OptiTrack motion capture system to capture the ground truth trajectory of the robot. As shown in Fig. 4 (a), our HPGO-based system can reconcile a globally consistent result in a situation with three re-initializations, but fails to reconcile scales in a scenario with 4 tracking failures (cf. Fig. 4 (b)). The bottom row additionally illustrates the singular values of matrix \(\mathbf{A}\) introduced in (10). As can be observed, the matrix has additional rank deficiencies in the latter under-constrained case, which confirms its potential use in analysing the quality of the global scale reconciliation at the end of the optimization. Full ATE results are again listed in Table II. 
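For reference, the trajectory comparisons above rely on an initial SIM3 alignment of estimated and reference keyframe positions using Umeyama's closed-form method [32] (as provided, e.g., by the evo tool). The following is a minimal, generic sketch of that alignment step under the standard least-squares formulation; it is not the evaluation code used in the paper.

```python
import numpy as np

def umeyama_sim3(src, dst):
    """Closed-form similarity alignment dst ~ s * R @ src + t (Umeyama).
    src, dst: arrays of shape (3, N) with corresponding points, e.g. keyframe positions."""
    n = src.shape[1]
    mu_s, mu_d = src.mean(axis=1, keepdims=True), dst.mean(axis=1, keepdims=True)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd @ xs.T / n
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                          # avoid a reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / ((xs**2).sum() / n)
    t = mu_d - s * R @ mu_s
    return s, R, t

# toy usage: recover a known similarity from noisy correspondences
rng = np.random.default_rng(1)
P = rng.normal(size=(3, 20))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = 0.5 * R_true @ P + np.array([[1.0], [2.0], [0.0]]) + 0.01 * rng.normal(size=(3, 20))
print(umeyama_sim3(P, Q)[0])                    # close to the true scale 0.5
```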
### _Tests on real-world, room-size scenarios_ For the final experiment, in order to evaluate the practicality of our HPGO-based ORB-SLAM3 system, the robot is assigned an exploratory task in a regular indoor environment (e.g. an office room). We use the onboard wheel odometry signals to obtain a metrically scaled, globally consistent version of the trajectory. It may not be as accurate as mocap ground truth, but still serves well towards a qualitative analysis of the algorithm's scale reconciliation ability. As shown in Figure 5 (a), in a 2-Bar constrained scenario, our HPGO-based system reconciles a globally consistent result while SPGO leads to severe distortions. Figure 5 (b) illustrates the case of executing HPGO multiple times and incrementally constructing a scale-consistent trajectory despite 6 tracking failures and re-initializations occurring Fig. 4: Trajectory and rank evaluation of real-world cases with three or four critical nodes. Top row: bird-eye view onto the estimated trajectories including a comparison against groundtruth (obtained by mocap system). Blue points denote critical nodes, while directional vectors indicate the connections between critical nodes. Bottom row: ordered singular values of matrix \(\mathbf{A}\) (from largest to smallest). The red line marks the break-down point where the singular values become very small. Fig. 5: Trajectory and rank evaluation of room-size real-world cases. Top row: bird-eye view onto the estimated trajectories with comparison against odometry and SPGO. Blue points denote critical nodes, red points indicate the initial position, and arrows show movement direction. Bottom row: ordered singular values of matrix \(\mathbf{A}\) (from largest to smallest) with break-down mark for each loop closure. For (b), the top and bottom figures correspond to the situation after the first and second loop closure, respectively. along the loop. A video demonstrating this result in real-time is indicated in the supplementary material. ## V Conclusion Our proposed hybrid pose graph optimization method represents a transparent way to reconcile multiple individually scaled trajectory segments captured by a monocular camera through a single optimization framework. The method is able to reconstruct accurate global trajectory results despite multiple re-initializations occurring along the loop. Our contribution furthermore comprises a theoretical analysis of global scale consistency purely based on the number, connectivity, and spatial arrangement of the re-initialization nodes. Although most results are obtained for a planar ground vehicle platform, it is worth noting that the optimization in all experiments was indeed carried out over SIM3. The complete theory presented in the paper remains valid for the general case. We believe that our contribution is of interest to the community, and plan to release a patch for ORB-SLAM3 to enable re-usability. ## VI Acknowledgement The research presented in this paper has been supported by projects 22dz1201900 and 22ZR1441300 funded by the Shanghai Science Foundation as well as the project 62250610225 by the National Science Foundation of China. The authors would furthermore like to acknowledge the support provided by Midea Robozone.
2301.10867
Resolution of cosmological singularity in Hořava-Lifshitz cosmology
The standard $\Lambda$CDM model, despite its agreement with observational data, still leaves some issues unaddressed, like the problem of the initial singularity. Solving that problem usually requires modifications of general relativity. However, the Ho\v{r}ava-Lifshitz (HL) theory of gravity has appeared, in which the equations governing cosmological evolution include a new term scaling similarly to the dark radiation term in the Friedmann equations, enabling a bounce of the universe instead of an initial singularity. This review describes past works on the stability of such a bounce in different formulations of HL theory: the initial detailed balance scenario and further projectable versions containing higher-than-quadratic terms in the original action.
Ewa Czuchry
2023-01-25T23:29:22Z
http://arxiv.org/abs/2301.10867v2
# Resolution of cosmological singularity in Horava-Lifshitz cosmology ###### Abstract The standard \(\Lambda\)CDM model despite its agreement with observational data still has some issues unaddressed, lie the problem of initial singularity. Solving that problem usually requires modifications of general relativity. However, there appeared the Horava-Lifshitz (HL) theory of gravity, in which equations governing cosmological evolution include a new term scaling similarly as dark radiation term in the Friedmann equations, enabling a bounce of the universe instead of initial singularity. This review describes past works on a stability of such a bounce in different formulations of HL theory, initial detailed balance scenario and further projectable versions containing higher than quadratic term to the original action. Introduction Classical General Relativity (GR) apart its simple beauty and symmetry is also strongly confirmed in several experimental tests. However, it does not explain many issues like dark matter, spacetime singularities including the initial one in cosmology, and the ones inside the black holes. In order to answer these issues there has been many attempts to modify GR both on the classical and quantum level. Specifically, the quantisation of GR cosmology was supposed to resolve the initial singularity problem. Attempts to quantize gravity could be divided into two categories. One way was to assume the classical theory of gravity and quantise it in various manners, with the first attempts performed via the the covariant quantum gravity. In that classical approach one repeats the method successful in quantising electrodynamics, namely considering the path integral of the Hilbert-Einstein action and then calculates the perturbation of the metric around a background one. The obtained equations unlike in electrodynamics are non-renormalizable in higher energies. The canonical quantum gravity considers ADM \((3+1)\)-decomposition of the spacetime and quantisation of the constraints obtained from Hamiltonian. Other attempts included sophisticated theories like string theory and loop quantum gravity. These theories manage to solve some problems (such as a cosmological singularity [1]) but there are difficult to be phenomenologically tested [2; 3]. There are also attempts for resolving an initial singularity problem by combination of canonical and coherent state quantisation like the one in our paper [4], however at this moment they are difficult to be validated by observations data. Although there is still no full theory of quantum gravity developed it is supposed to manifest beyond a characteristic energy scale for quantum gravity \(E_{Pl}=\sqrt{\hbar c^{5}/G}\) built in terms of the speed of light \(c\), the gravitational constant \(G\) and the Planck's constant \(\hbar\). Therefore, there is the second research direction which aims to construct a modified version of GR with an improved UV behavior. General relativity after many tests performed seems to be consistent with all current observations. This makes it a very good IR limit of potential quantum gravity model. There has been made some proposals for UV-completions of general relativity in the past [5; 6]. They have one thing in common, namely the existence of some cutoff energy scale beyond which quantum effects could be detected, specially for a cutoff energy in the range of TeV. The widely discussed recent proposal is Horava gravity, which is a proposal of a UV complete theory of gravity. 
It seems to be renormalizable at high energies, which makes it a candidate for a quantum gravity model [7; 8]. The action of this theory contains additional higher order spatial derivatives and therefore the theory loses the full diffeomorphism invariance, keeping the (1+3) foliation preserving diffeomorphism. Moreover, there is an UV fixed point in this gravity model where there is an anisotropic Lifshitz scaling between time and space. Therefore, the resulting theory is called Horava-Lifshitz (HL) gravity. Significant work has been done on this theory where different aspects and properties were examined [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Many studies were devoted to cosmological solutions [12; 19; 23], braneworlds and dark radiation [12; 21]. Horava-Lifshitz cosmology obtained a novel feature enabling the existence of bounce instead of initial singularity predicted by classical GR. There has been also other research focused on finding specific solutions, including black holes, and their properties, and many works devoted to phenomenological aspects both astrophysical and concerning dark matter. Derivation of Horava-Lifshitz cosmology [12; 19; 23] made via varying action written Friedmann-Robertson-Walker space-time metrics resulted in equations analogous to the standard Friedman ones. These equations contain a new term which scales similarly as dark radiation [12; 19; 21], i.e. \(\sim 1/a^{4}\) (where \(a\) is a scale factor), and provides a negative contribution to the energy density. This feature enables obtaining non-singular cosmological evolution, resolving the initial singularity problem [14; 21; 24]. Such a possibility not only results in avoiding the initial singularity but may have other consequences for potential histories of the Universe like scenario of contraction from the infinite size connected by a bounce to the expansion to infinite size again, or eternal cycles of the similar scenario. Despite many promises made by this modified theory of gravity it seems that it contains instabilities and pathologies in different formulations (see e.g. [22; 25; 26; 27]). The original Horava formulation suffers from among many problems: the existence of ghost instabilities and strong coupling at IR [10; 28], the appearance of a term that violates parity [22], very large value and negative sign of cosmological constant [29; 30], issues with power counting renormalisation of the propagation of the scalar mode [13; 31]. Some of those problems might be solved by performing an analytic continuation of the parameters of the theory [20]. In the original Horava formulation it is assumed via the so called detailed balance condition that a potential part of the action is derived from the so-called superpotential, which limits the big number of its terms and corresponding independent couplings. Another imposed condition is the demand of projectability, used in a standard cosmology. It requires that lapse function \(N\) depends only on time \(N=N(t)\). It might seem that this condition is too strict but on the other hand it seems that non-projectable version of Horava gravity results in serious strong coupling problem ([28]) and does not possess a valid GR limit at IR. However, some authors [29; 32] claim the opposite, proposing adding additional terms to the superpotential (not to the action thus still keeping detailed balance or eventually softly breaking it) and relaxing projectability. 
Nonetheless, subsequent works demonstrated that it caused problems with the scalar mode power-counting renormalizability. One of the simplest models with the detailed balance condition relaxed is the Sotiriou-Visser-Weinfurtner (SVW) generalisation [22]. This version of HL gravity assumes a gravitational action containing terms not only quadratic in curvature, but also cubic ones, what was suggested already in [12; 19]. This model still maintains the projectability condition. Generalised Friedmann equations obtained from varying such an action contain not only a dark radiation term \(\sim 1/a^{4}\) but also terms scaling as \(\sim 1/a^{6}\) term. These new terms, although negligible at large values of \(a\), become dominating at small ones and might modify or cancel bounce solutions. Specifically, as it has the opposite sign than the \(1/a^{4}\) term, it may compensate the dark radiation term at small scales and result in singular solutions. Similar scenario arrives in the HL gravity with the softly broken detailed balance condition and negative spacial curvature [33]. Nonetheless, the issue of the initial singularity still remains one of the key questions of early Universe cosmology. The possibility that it might be avoided in a modified gravity and replaced by a bounce is a very promising feature. In this review we are going to present result of the research [14; 15] performed via phase portrait techniques, on occurrence and stability of the bounce in two simplest formulation of HL cosmology: original one with imposed detailed balance condition and SVW formulation relaxing this condition. As additional terms in analogs of Friedman equations are proportional the curvature parameter \(K=\{-1,0,1\}\) only non-flat cosmologies with \(K=\pm 1\) allow the existence of a bounce and existence of non-singular solutions. In [14] matter sector was described in terms of a scalar field with a potential given by a quadratic power of that field. More general approach and easier for fitting with observational data is the hydrodynamical approach used in [15] where matter sector is described in terms of density \(\rho\) and pressure \(p\). In the latter work it was assumed that \(w\) providing the relation between density and pressure in the equation of state, is constant, which is at some level an idealisation and simplification. At the moment we do not have the history of the HL universe constructed in a similar way as in the standard \(\Lambda\)CDM model, where we have phases and epochs containing different matter or radiation sectors. Therefore, as we still have limited understanding on the physical aspects of the theory and its parameters, current research rather describes different analytical possibilities, not some exact physical solutions. This paper is organised as follows: We first give a brief overview of HL cosmology in both scenarios under consideration in Section II. In Section III the possibility of bounce in both formulations is discussed. Section IV contains derivation and description of the phase portraits of the HL cosmology with imposed condition of detailed balance, while in the section V this condition is released. Section VI contains summary of results on possibility of a bounce in HL cosmology. In Section VII we discuss limitation of the underlying theory. ## II Horava-Lifshitz Cosmology The main obstacle in quantising gravity is that general relativity in its classical formulation is non-renormalisable. 
This might be visualised by expanding some quantity \(\mathcal{F}\) with respect to the gravitational constant [27] as follows: \[\mathcal{F}=\sum_{n=0}^{\infty}a_{n}\left(G_{N}E^{2}\right)^{n}. \tag{1}\] Here \(E\) is the energy of the system, \(a_{n}\) denotes a numerical coefficient and \(G_{N}\) is the gravitational coupling constant. Therefore, \(E^{2}\geq G^{-1}\) and the expansion above diverges. Consequently, as demonstrated, general relativity is not perturbatively renormalisable in the high energy regimes. There has been many researches pointing out that the ultraviolet behaviour of general relativity might be improved by including higher-order derivatives in the standard gravitational metric. The latter is the Einstein-Hilbert action: \[S=\int d^{4}x\sqrt{g}R, \tag{2}\] where \(\mathrm{d}^{4}x\) denotes volume element of the space-time, \(g\) is its metric matrix' determinant and \(R\) is a scalar curvature. Including higher order terms of the derivatives of the metric provides a following action: \[S=\int d^{4}x\sqrt{g}(R+f(R_{\mu\nu}R^{\mu\nu})). \tag{3}\] The additional terms, containing different derivatives of \(R\), \(R_{\mu\nu}\) etc. change the graviton propagator from \(1/k^{2}\) into \(1/(k^{2}-G_{N}k^{4})\)[7; 8]. The propagator part proportional to \(k^{-4}\) cancels the ultraviolet divergence. However, the resulting theory has time derivatives of \(\mathcal{O}>2\) and therefore non-unitary. Moreover, it possesses a spin-2 ghost with a non-zero mass [27] and derived form that action field equations are of the fourth order. The novel idea of Horava [7] was to construct a higher-order theory of gravity breaking the Lorentz invariance in the ultraviolet. In his theory only the _spatial_ derivatives are of \(\mathcal{O}>2\) which evaded the ghost. However, it is demanded that any theory of gravity theory should be consistent with all current experiments which have not detected any significant violation of Lorentz invariance. Thus it is necessary to restore the Lorentz invariance in the infrared limit. In order to overcome this problem Horava proposed an anisotropic scaling of space and time in high UV energies, which is known as Lifshitz scaling. In a 4-dimensional spacetime this scaling takes the form: \[t\to b^{-z}t,\,x^{i}\to b^{-1}x^{i}, \tag{4}\] here \(\,i=1,2,3\) and \(z\) is a critical exponent. Lorentz invariance is restored when \(z=1\), but the power-counting renormalisability demands \(z\geq 3\)[27], usually \(z=3\) is assumed. Therefore, the resulting theory is called Horava-Lifshitz (HL) gravity. Lorentz symmetry is here broken down to transformations \(t\rightarrow\xi_{0}(t)\), \(x^{i}\rightarrow\xi^{i}(t,x^{k})\), preserving the spatial diffeomorphisms unlike full space time diffeomorphisms invariance of GR. Thus such a theory acquires a symmetry preserving a space-time foliation [7; 27], where on each constant time hypersurface there are allowed arbitrary changes of the spatial coordinates. Preservation of a space-time foliation and anisotropic scaling between time and space and time introduces the ADM (1+3)decomposition of the spacetime. The standard ADM metrics in a preferred foliation and with \((-+++)\) signature is following: \[\mathrm{d}s^{2}=-N^{2}\mathrm{d}t^{2}+g_{ij}(\mathrm{d}x^{i}+N^{i}\mathrm{d}t) (\mathrm{d}x^{j}+N^{j}\mathrm{d}t). \tag{5}\] The dynamics is now described in terms of the lapse function \(N\), the shift vector \(N^{i}\), and the spatial metric \(g_{ij}\) (\(i\), \(j=1,2,3\)). 
The most general action for such theory can be written as: \[S=\int\mathrm{d}^{3}x\mathrm{d}t\,\,N\sqrt{g}\left[K^{ij}K_{ij}-\lambda K^{2}- \mathcal{V}(g_{ij})\right]. \tag{6}\] Here as usually \(g\) denotes the determinant of the spatial metric \(g_{ij}\), \(\lambda\) is a dimensionless running coupling constant, \(\mathcal{V}\) is a potential term and \(K\) is a trace of the extrinsic curvature of the spatial 3-dimensional hypersurface \(K_{ij}\): \[K_{ij}=\frac{1}{2N}\left(\dot{g}_{ij}-\nabla_{i}N_{j}-\nabla_{j}N_{i}\right). \tag{7}\] An overdot denotes a derivative with respect to the time coordinate \(t\). The trace of \(K_{ij}\) is \(K\). The potential \(\mathcal{V}\) is invariant only under three-dimensional diffeomorphisms [25] and depends only on the spatial metric and its (spatial) derivatives. Thus it contains only operators constructed from the spatial metric \(g_{ij}\) and of dimension 4 and 6. ### Detailed Balance As the action (6) is very complicated Horava [7; 26; 29] proposed to impose additional condition, the so-called detailed balance. It assumes that the \(\mathcal{V}\) could be derived from a superpotential \(W\)[7; 26; 29]: \[\mathcal{V}=E^{ij}\mathcal{G}_{ijkl}E^{kl},\quad E^{ij}=\frac{1}{\sqrt{g}} \frac{\delta W}{\delta g_{ij}}, \tag{8}\] and \[\mathcal{G}^{ijkl}=\frac{1}{2}\left(g^{ik}g^{jl}+g^{il}g^{jk}\right)-\lambda g ^{ij}g^{kl}. \tag{9}\] By carrying out an analytic continuation (e.g. [20]) of two constant parameters \(\omega\) and \(\mu\) we obtain he action for Horava-Lifshitz gravity in the detailed balance condition [26] and reads as \[\begin{split} S_{db}=\int\mathrm{d}t\,\mathrm{d}^{3}x\sqrt{g}N &\Bigg{[}\frac{2}{\kappa^{2}}\left(K_{ij}K^{ij}-\lambda K^{2} \right)+\frac{\kappa^{2}}{2\omega^{4}}C_{ij}C^{ij}-\frac{\kappa^{2}\mu}{2 \omega^{2}}\frac{\epsilon^{ijk}}{\sqrt{g}}R_{il}\nabla_{j}R_{k}^{l}\\ &+\frac{\kappa^{2}\mu^{2}}{8}R_{ij}R^{ij}+\frac{\kappa^{2}\mu^{2 }}{8(1-3\lambda)}\left(\frac{1-4\lambda}{4}R^{2}+\Lambda R-3\Lambda^{2}\right) \Bigg{]},\end{split} \tag{10}\] where \(C^{ij}\) is the Cotton tensor: \[C^{ij}=\epsilon^{ikl}\nabla_{k}\left(R^{j}_{\,l}-\frac{1}{4}R\delta^{j}_{l}\right), \tag{11}\] \(\epsilon^{ikl}\) denotes the totally antisymmetric tensor. The parameters \(\kappa,\omega,\) and \(\mu\) arriving in the theory have mass dimension respectively \(-1,\)\(0,\) and \(1\). The analytic continuation mentioned above reads as \(\mu\mapsto i\mu\) and \(\omega^{2}\mapsto-i\omega^{2}\) and it enables obtaining the positive values of the cosmological constant \(\Lambda\) as predicted by current observational results in the low energy regime. It is expected that action (10) reduces to the Einstein-Hilbert one in the IR limit of the theory. This is possible if the speed of light \(c\) and gravitational constant \(G\) correspond to HL parameters as follows: \[G=\frac{\kappa^{2}}{32\pi c},\quad c=\frac{\kappa^{4}\mu^{2}\Lambda}{8(3 \lambda-1)^{2}}. \tag{12}\] The coupling constant \(\lambda\) present in the action (10) is dimensionless. It runs with energy and flows to the three infrared (IR) fixed points ([7]): \(\lambda=1/3\), \(\lambda=1\) or \(\lambda=\infty\). However, some of those values seem unphysical, in the region \(1>\lambda>1/3\) there appear ghost instabilities in the IR limit of the theory [34]. The attempt to solve this problem [20] resulted in instabilities re-emerging at the other energy region, in UV. 
Thus the most physically interesting case is the regime \(\lambda\geq 1\) that allows for a possible flow towards GR, where \(\lambda=1\). Region \(\lambda\leq 1/3\) on the other hand is disconnected from \(\lambda=1\), therefore cannot be included in realistic physical considerations. In order to obtain a cosmological model it is necessary to populate the universe with matter (and radiation). The simplest method would be to model the matter sector by assuming it is described by a scalar field \(\varphi\) with a quadratic potential \(V(\varphi)=\frac{1}{2}m^{2}\varphi^{2}\). However, a more realistic approach is to apply a hydrodynamic approximation where matter is described by two quantities \(p\) and \(\rho\), which are respectively pressure and energy density and fullil the continuity equation \(\dot{\rho}+3H(\rho+p)=0\). To derive equations of HL cosmology one uses the projectability condition \(N=N(t)\)[7] and the spatial part of the metrics being the standard FLRW line element: \(g_{ij}=a^{2}(t)\gamma_{ij},\,N_{i}=0\), where \(\gamma_{ij}\) denotes a maximally symmetric metric with constant curvature: \[\gamma_{ij}\mathrm{d}x^{i}\mathrm{d}x^{j}=\frac{\mathrm{d}r^{2}}{1-Kr^{2}}+r^ {2}(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\varphi^{2}), \tag{13}\] values \(K=\{-1,0,1\}\) correspond respectively to closed, flat, and open Universe. This background metric implies that \[C_{ij}=0\,,\qquad R_{ij}=\frac{2K}{a^{2}}g_{ij}\,,\qquad K_{ij}=\frac{H}{N}g_{ ij}\,, \tag{14}\] where \(H\equiv\dot{a}/a\) denotes the Hubble parameter. On this background the gravitational action (10) take the following form : \[S_{\mathrm{FRW}}=\int dt\,d^{3}x\,Na^{3}\,\left\{\frac{3(1-3\lambda)}{2\kappa^ {2}}\frac{H^{2}}{N^{2}}+\frac{3\kappa^{2}\mu^{2}\Lambda}{4(1-3\lambda)}\left( \frac{K}{a^{2}}-\frac{\Lambda}{3}\right)-\frac{\kappa^{2}\mu^{2}}{8(1-3 \lambda)}\frac{K^{2}}{a^{4}}\right\}. \tag{15}\] In order to obtain equations of motion on a cosmological background one needs to vary the action (15) with respect to \(N\) and \(a\). Only after that and the lapse can be set to one: \(N=1\) and terms with density \(\rho\) and pressure \(p\) are added. This procedure provides the analogs to the Friedmann equations for the projectable Horava-Lifshitz cosmology with imposed the detailed-balance condition: \[H^{2} = \frac{\kappa^{2}\rho}{6(3\lambda-1)}\pm\frac{\kappa^{4}\mu^{2}}{8 (3\lambda-1)^{2}}\left(\frac{K\Lambda}{a^{2}}-\frac{\Lambda^{2}}{2}-\frac{K^{ 2}}{2a^{4}}\right), \tag{16}\] \[\dot{H} = -\frac{\kappa^{2}(\rho+p)}{4(3\lambda-1)}\mp\frac{\kappa^{4}\mu^ {2}}{8(3\lambda-1)^{2}}\left(\frac{K\Lambda}{a^{2}}+\frac{K^{2}}{4a^{4}} \right), \tag{17}\] together with the continuity equation: \[\dot{\rho}+3H(\rho+p)=0. \tag{18}\] In the equations above there are two signs before the terms with \(\Lambda\), namely the upper one corresponds the \(\Lambda<0\) case, the lower one describes the analytic continuation \(\mu\mapsto i\mu\) providing a positive \(\Lambda\). Some terms in the above equations which scale as \(a^{-4}\) are similar to the dark energy expressions therefore parameters: energy density \(\rho_{de}\) and pressure density \(p_{de}\) are interpreted as dark energy parameters: \[\rho_{de}|_{db} :=\frac{3\kappa^{2}\mu^{2}K^{2}}{8(3\lambda-1)a^{4}}+\frac{3\kappa^ {2}\mu^{2}\Lambda^{2}}{8(3\lambda-1)}, \tag{19}\] \[p_{de}|_{db} :=\frac{\kappa^{2}\mu^{2}K^{2}}{8(3\lambda-1)a^{4}}-\frac{3\kappa^ {2}\mu^{2}\Lambda^{2}}{8(3\lambda-1)}. 
\tag{20}\] We require that eqs (16) and (17) coincide with the standard Friedmann equations. Thus, we can identify the following: \[c=\frac{\kappa^{2}\mu}{4}\sqrt{\frac{\Lambda}{1-3\lambda}},\ \ G=\frac{\kappa^{2}c}{32\pi},\ \ \Lambda_{E}=-\frac{3\kappa^{4}\mu^{2}}{3\lambda-1}\frac{\Lambda^{2}}{32}=\frac{3 c^{2}}{2}\Lambda, \tag{21}\] respectively, as well as \(\mu^{2}\Lambda=1/32\pi^{2}G^{2}\) and \(\lambda=1\) (which is an IR fixed point). We demand a real value of the speed of light \(c\), therefore the cosmological constant \(\Lambda\) has to be negative for \(\lambda>1/3\) and positive for \(\lambda<1/3\). In order to obtain a positive cosmological constant \(\Lambda\), as suggested by observations, it is necessary to perform in (10) an analytic complex continuation of constant parameters \(\mu\) and \(\omega\) as follows \(\mu\mapsto i\mu\) and \(\omega^{2}\mapsto-i\omega^{2}\). On the level of equations for Horava-Lifshitz cosmology varying \(\lambda\)-parameter in the range \([1,\infty)\) results in the running of the speed of light, but does not change the structure of the equations (16) and (17). When we substitute the equation of state \(p=w\rho\) and the above expressions linking physical constants and HL parameters to (16) and (17) we obtain the following equations: \[H^{2} =\frac{2}{3\lambda-1}\left[\frac{\rho}{3}\pm\left(\frac{\Lambda_{ E}}{3}-\frac{K}{a^{2}}+\frac{3}{4\Lambda_{E}}\frac{K^{2}}{a^{4}}\right)\right] \tag{22}\] \[\dot{H} =\frac{2}{3\lambda-1}\left[-\frac{(1+w)}{2}\rho\pm\left(\frac{K}{ a^{2}}-\frac{3}{2\Lambda_{E}}\frac{K^{2}}{a^{4}}\right)\right]. \tag{23}\] ### Beyond detailed balance The gravitational action (10) contains terms up to quadratic in the curvature. However, a more general renormalizable theory could also contain cubic terms and there is not _a priori_ reason to keep only quadratic terms ([12; 19; 27]). Thus Sotiriou, Visser and Weinfurtner ([26]) built a projectable theory as the original Horava theory, but without imposing the detailed balance condition in the action. This formulation led to Friedmann equations with an additional term \(\sim 1/a^{6}\), moreover with additional and uncoupled coefficients: \[H^{2} =\frac{2}{(3\lambda-1)}\left(\frac{\rho}{3}+\sigma_{1}+\sigma_{2} \frac{K}{a^{2}}+\sigma_{3}\frac{K^{2}}{a^{4}}+\sigma_{4}\frac{K}{a^{6}}\right), \tag{24}\] \[\dot{H} =\frac{2}{(3\lambda-1)}\left(-\frac{p}{2}-\frac{\rho}{2}-\sigma_{2 }\frac{K}{a^{2}}-2\sigma_{3}\frac{K^{2}}{a^{4}}-3\sigma_{4}\frac{K}{a^{6}} \right). \tag{25}\] In order to coincide with the Friedmann equations in the IR limit \(\lambda=1\) and for large \(a\), when terms proportional to \(1/a^{4}\) and to \(1/a^{6}\) become negligibly small, one has to set \(\sigma_{1}=\Lambda_{E}/3\) and \(\sigma_{2}=-1\). However, values of constants \(\sigma_{3}\), \(\sigma_{4}\) are at this stage arbitrary. This way we obtain the following equations: \[H^{2} =\frac{2}{(3\lambda-1)}\left(\frac{\rho}{3}+\frac{\Lambda_{E}}{3} -\frac{K}{a^{2}}+\sigma_{3}\frac{K^{2}}{a^{4}}+\sigma_{4}\frac{K}{a^{6}} \right), \tag{26}\] \[\dot{H} =\frac{2}{(3\lambda-1)}\left(-\frac{\rho(1+w)}{2}+\frac{K}{a^{2}} -2\sigma_{3}\frac{K^{2}}{a^{4}}-3\sigma_{4}\frac{K}{a^{6}}\right), \tag{27}\] We can observe new terms in the above analogs of Friedmann equations, proportional to \(1/a^{6}\). They mimic stiff matter, such that \(\rho=p\) (\(w=1\)) which scales similarly \(\rho_{\rm stiff}\sim 1/a^{6}\)). 
These terms are negligibly small at large scales, but may play a significant role at small values of a scale parameter, thus changing the dynamics of the Universe around initial singularity or a bounce. ## III Existence of bounce Horava-Lifshitz cosmological equations contain additional terms proportional \(a^{-4}\) (DB) and to \(a^{-6}\) (BDB) that introduce the possibility of a cosmological bounce, namely a scenario in which contraction of the universe stops and reverse to expansion (or in the opposite direction). In a DB scenario, from the form of eq. (16) it follows that it is possible that \(H=0\). When this condition is fulfilled at some monet of time the realisation of the bounce is possible (but not necessary, for that we also need \(\dot{H}\neq 0\). In the case \(\lambda=1\)[12], the bounce may happen in non-empty Universe equipped with matter, at the critical time \(t_{*}\), \(a=a_{*}\), when the critical energy density reaches the following value: \[\rho=\rho_{*}=\frac{12}{\kappa^{2}}\left(\frac{K}{a_{*}^{2}}+\frac{\Lambda_{E }}{3}+\frac{\kappa^{4}\mu^{2}}{64}\frac{K^{2}}{a_{*}^{4}}\right), \tag{28}\] This value is determined by the values of couplings \(\kappa\) and \(\mu\). Additionally, a continuity equation implies that at the bounce \(\dot{H}>0\). Therefore, when the condition \(H=0\) is fulfilled we also have the sufficient condition for existence of a bounce \(\dot{H}\neq 0\). As \(\dot{H}>0\) is is only possible a transition from a contracting to an expanding phase, but not the reverse. Moreover, there is another condition for a realisation bounce [24] that requires that \((\frac{\rho}{12}-p)>0\) and the energy density of regular matter scales less fast than dark matter terms. Near the bounce so for small \(a\) the dominating terms in the Friedmann equations (22) and (23) are the terms scaling as \(a^{-4}\), while others terms become insignificant. Particularly, \(H^{2}\), \(\dot{H}\) and \(\rho\) scale as \(a^{-3(1+w)}\), where \(w\) is a constant parameter in the equation of state \(p=w\rho\). Subsequently, if \(w>-\frac{1}{3}\) the density term dominates over the curvature term \(\sim 1/a^{2}\). In the BDB scenario bounce might happed at the critical density: \[\rho_{*}=-\Lambda_{E}+3\frac{K}{a_{*}^{2}}-3\frac{\sigma_{3}K^{2}}{a_{*}^{4}} -\frac{3\sigma_{4}K}{a_{*}^{6}}. \tag{29}\] For flat universe and positive cosmological constant bounce is not positive as resulting critical density becomes negative. ## IV Bouncing stability in the detailed balance formulation We are mainly interested in the possibility of appearing of a bounce which could be given by dynamics of variables \(a\) and \(H\). From eq. (22) we might determine \(\rho\) and then insert its formula into (23). This way we obtain two systems, one containing the formula fo density and its derivative via the continuity equation (22), but still dependent on \(a\) and \(H\). The second system is independent and consist of two equations describing the evolution of \(a\) and \(H\). Specifically, eq. (22) provided a following expression for \(\rho\): \[\rho=\frac{3(3\lambda-1)}{2}H^{2}\mp\left(\Lambda_{E}-3\frac{K}{a^{2}}+\frac{ 9}{4\Lambda_{E}}\frac{K^{2}}{a^{4}}\right). \tag{30}\] This expression substituted in (23) results in \[\dot{H}=\frac{\pm 1}{3\lambda-1}\left[\left(1+w\right)\Lambda_{E}-\left(3w+1 \right)\frac{K}{a^{2}}+\frac{3\left(3w-1\right)}{4\Lambda_{E}}\frac{K^{2}}{a^ {4}}\right]-\frac{3}{2}\left(1+w\right)H^{2}. 
\tag{31}\] Adding the the definition of the Hubble parameter: \[\dot{a}=aH, \tag{32}\] we have a two dimensional dynamical system. The set of equations (31-32) is difficult to solve analytically. However, we are interested non in detailed solutions but in the qualitative analysis. In purpose of that we use the method of the phase portraits, where we search for critical points and analyse their character. These points are locations where the derivatives of all the dynamic variables, In our case when the r.h.s. of (31-32), vanish. What we obtain are the only points where phase trajectories could start, end or intersect. Moreover, they can also appear in infinity. In this case a suitable coordinate transformation, the so called Poincare projection, is used that projects the complete phase space onto a compact region. The nature of these points, both finite and infinite, is given by the properties of the Jacobian matrix of the linearized equations at those points. All that information provides a qualitative analysis of the dynamical system. The method of finding critical points consists of setting all right-hand-sides of dynamical equations to zero, thus finding points where derivative of dynamical variables vanish. In case of two equations (31)-(32) the corresponding solutions are two following \(P_{1}\) and \(P_{2}\) in phase-space \((a,H)\): \[P_{1}:a^{2} = \frac{3K}{2\Lambda_{E}},\;H=0, \tag{33}\] \[P_{2}:a^{2} = \frac{(3w-1)K}{(1+w)2\Lambda_{E}},\;H=0. \tag{34}\] These two points exist when the values of \(a\) obtained via square root of the expression on the right hand side of the above equations are real and nonnegative. Thus the point \(P_{1}\) exists if \(K/\Lambda_{E}>0\), if we assume a positive cosmological constant therefore only for \(K>0\). Point \(P_{2}\) exists when \(w>1/3\) and \(K/\Lambda_{E}>0\) or \(w<1/3\) and \(K/\Lambda_{E}<0\). Thus we have two critical points existing if the parameter of state \(w>1/3\). Moreover, they are both finite, unless \(w=-1\), when \(P_{2}\) blows to infinity. As mentioned above, due to \(\hat{H}>0\) both points represent a bouncing solution. In order to complete the analysis, the stability properties of the critical points needed. They are determined by the eigenvalues of the Jacobian \(A\) of the system (31)-(32). Eigenvalues of \(A\) with non-zero real parts different from zero point to hyperbolic points. They include sources (unstable) with positive real parts, saddle for real parts of opposite sign and sinks (stable) corresponding to negative real parts. Critical points at which all the eigenvalues have real parts different from zero are called hyperbolic. Among them one can distinguish sources (unstable) with positive real parts, saddles with real parts of different sign and sinks (stable) for negative real parts. If at least one eigenvalue has a real part equal to zero it is then called a non-hyperbolic critical point. For such points it is not possible to obtain conclusive information about the stability from the Jacobian matrix and other tools like e.g. numerical simulation [35] should be then used. In the case of (31)-(32) the eigenvalues of \(A\) at \(P_{1}\) are both imaginary and it is a center for all the values of the parameters. The character of \(P_{2}\) is more complicated and depend on the values of \(\Lambda_{E}\), \(K\) and \(w\). Thus \(P_{2}\) is a center when \(K/\Lambda_{E}<0\) and \(-1\leq w<1/3\), with a special subcase for \(w=-1\) that being so it becomes a linear center, so a center with only one eigenvector. 
Otherwise it becomes a saddle, thus without a bouncing possibility. To have a full picture of the dynamics of the Universe also the information about critical points that occurring at infinity is necessary. For this purpose the so called Poincare projection [36; 37] is used. It projects the whole infinite phase space \((a,H)\) onto a compact region. Specifically, we introduce the new coordinates \((\tilde{a},\tilde{H})\) which written in polar coordinates \(r,\phi\): \(\tilde{a}=r\cos\phi\) and \(\tilde{H}=r\sin\phi\). Moreover: \[a = \frac{r}{1-r}\cos\phi, \tag{35}\] \[H = \frac{r}{1-r}\sin\phi, \tag{36}\] It is also necessary to rescale the time parameter \(t\), which take infinite values, introducing the new time parameter \(T\) in a similar way, i.e. \(dT=dt/(1-r)\). In such coordinates the phase space is now compactified to a sphere of radius one and its interior. Here infinity corresponds to \(r=1\). We have to keep in mind that a scale factor \(a\) may take only nonnegative values, so actually a semi-sphere. This procedure provides the dynamical equations in terms of \(r\), \(\phi\) and their derivatives with respect to new time \(T\). At the surface of the sphere, so limit \(r=1\) there are 3 solutions \(P_{3}=(1,0)\), \(P_{4}=(1,\pi/2)\), \(P_{6}=(1,-\pi/2)\) written in polar coordinates \((r,\phi)\). These critical points are now hyperbolic unless \(w=-1\), resulting in \(P_{4}\) and \(P_{6}\) being respectively a repelling and an attracting node. For \(w=-1\) points \(P_{4}\) and \(P_{6}\) are non-hyperbolic and numerical simulations provide that they are saddles and also ends of a separatrice. The numerical phase portraits are presented at fig. 1, which contains the deformed phase space, scaled to fit on the compactified sphere. We observe that bounce scenarios are only possible when one of the critical points \(P_{1}\) and \(P_{2}\) exist and is a center. Then we have closed orbits around them and the Universe might go through eternal cycles of expansion and collapse, connected by a bounce of a finite size, expansion etc. However, the point \(P_{1}\) describes less physical bouncing solution with the density \(\rho=0\). More interesting case is when \(P_{2}\) is a center, as a density \(\rho\) is non-zero at that point. The special case of \(w=-1\) provides a third bounce scenario is around the linear center \(P_{2}\) located now at \(\infty\). In this case the universe begins in a static infinite state as \(H=0\)\(a=\infty\), then contracts to a finite size and rebounces to a static infinite universe. ## V Bounce stability in the beyond detailed balance formulation In the Sotiriou, Visser and Weinfurtner formulation the generalised Friedmann equations (26)-(27) contain additional terms \(\sim 1/a^{6}\) and uncoupled coefficients. Solving eq. (26) for \(\rho\) provides: \[\rho=3\frac{(3\lambda-1)}{2}H^{2}-\Lambda_{E}-3\frac{K}{a^{2}}-3\frac{\sigma_{ 3}K^{2}}{a^{4}}-\frac{3\sigma_{4}K}{a^{6}}. \tag{37}\] Substituting this expression on \(\rho\) into ((27)) and using the equation of state \(p=w\rho\) results in \[\dot{H}=\frac{2}{3\lambda-1}\left(\frac{\Lambda_{E}(1+w)}{2}-\frac{K(1+3w)}{2a ^{2}}+\frac{\sigma_{3}(-1+3w)K^{2}}{2a^{4}}+\frac{3\sigma_{4}(1+w)K}{2a^{6}} \right)-\frac{3(1+w)}{2}H^{2}. \tag{38}\] As in the DB case, supplementing the above equation with the definition of the Hubble parameter provides the two dimensional dynamical system. 
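To make the phase-portrait analysis concrete, the planar system can also be explored numerically. The following minimal sketch integrates \(\dot{a}=aH\) together with eq. (38); the parameter values are purely illustrative and are not fitted to any data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only (lambda, w, K, Lambda_E, sigma_3, sigma_4).
lam, w, K, Lambda_E, s3, s4 = 1.0, 0.0, 1.0, 1.0, 0.1, 0.1

def rhs(t, y):
    """Right-hand side of the planar system: da/dt = a*H, dH/dt = eq. (38)."""
    a, H = y
    Hdot = (2.0 / (3.0 * lam - 1.0)) * (
        0.5 * Lambda_E * (1.0 + w)
        - 0.5 * K * (1.0 + 3.0 * w) / a**2
        + 0.5 * s3 * (3.0 * w - 1.0) * K**2 / a**4
        + 1.5 * s4 * (1.0 + w) * K / a**6
    ) - 1.5 * (1.0 + w) * H**2
    return [a * H, Hdot]

# One trajectory in the (a, H) plane; a closed orbit around a centre
# corresponds to a non-singular, cyclically bouncing universe.
sol = solve_ivp(rhs, (0.0, 50.0), [1.5, 0.0], max_step=0.01)
a_traj, H_traj = sol.y
```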
Again we search for critical points where \(\dot{a}\) and \(\dot{H}\) these points fulfil \(H=0\) and obtain the following condition: \[\Lambda_{E}(1+w)a^{6}-K(1+3w)a^{4}+\sigma_{3}(-1+3w)K^{2}a^{2}+3\sigma_{4}(-1+ w)K=0 \tag{39}\] It is a bicubic equation, which in general possesses quite complicated solutions but might be simplified in two special cases. Namely when \(w=-1\) describing the equation of state of the cosmological constant, and in case of radiation described by \(w=1/3\). Besides these two cases critical points of the system (32) and (38) have following coordinates \((a_{x},0)\). Here \(a_{x}^{2}\) is a root of the cubic equation: \[\Lambda_{E}(1+w)x^{3}-K(1+3w)x^{2}+\sigma_{3}(-1+3w)K^{2}x+3\sigma_{4}(-1+w)K=0. \tag{40}\] Such an equation might have zero, one, two or three real solutions depending on the sign of its discriminant. Moreover, if they exist they are either always stable or always unstable depending on the sign of \(K/(3\lambda-1)\). Their character depends on the values of \(a_{x}\), \(\Lambda_{E}\), \(\sigma_{3}\) and \(\sigma_{4}\). The most significant feature of oscillating (and bouncing) solutions in the SVW formulation is the existence of two centres, with a saddle between them (three finite critical points) for some values of parameters. In a more realistic situation, that includes dynamical change of state parameter, it would be possible to go from one oscillating bouncing solution to another. In order to study stability properties of infinite critical points one again has to perform the Poincare transformation. It leads to the similar results as in detailed balance scenario. Points at infinity are transformed to the sphere \(r=1\). Two points at \(\phi=\pi/2\) and at \(-\pi/2\) are respectively repelling and attracting node, respectively. The point at \(\phi=0\) is non-hyperbolic. Figure 2. shows the example of the phase space of system with three finite critical points. Here points \(S_{1}\) and \(S_{3}\) are centres and a point \(S_{2}\) is a saddle. Figure 1: Projected phase space of HL universe [15] in DB condition. On the left a case with \(K\Lambda_{E}>0\) and \(w>1/3\), on the right \(K/\Lambda_{E}<0\) and \(-1<w<1/3\). ## VI Discussion This paper reviews the research performed the cosmological bounce in different formulations of projectable versions of Horava-Lifshitz gravity, with and without detailed balance condition. The analogs of the Friedmann equations in both this models contain a term scaling as \(1/a^{4}\) and similar to dark radiation. That additional term enables that fhe the Hubble parameter might be \(H=0\) at some moment of time. This is a necessary condition for the realisation of the bounce while an additional condition \(\dot{H}\neq 0\) makes it sufficient. In the Sotiriou, Visser and Weinfurtner model there is an additional term \(1/a^{6}\) in the analogs of Friedmann equations. This term is of arbitrary sign, so it can enhance the possibility of a bounce or cancel it. The biggest difference between the detailed balance theory and its breaking arrives for the small values of a scale parameter \(a\) as the SVW gravity term \(1/a^{6}\) plays role only for the small values of \(a\) and becomes insignificant for the bigger ones. This difference is visible in phase portraits of both theories and number of potential bouncing solutions. In the original Horava formulation there exists one bouncing solutions for all values of parameters but it corresponds to density \(\rho=0\). 
For non zero \(\rho=0\) there might be a bouncing solution if \(K/\Lambda_{E}<0\) and \(-1\leq w<1/3\), for other values of parameters a bounce is not possible. The SVW HL cosmology is a bit more complicated as there are additional terms in the analogs of Friedmann equations. There exist bouncing solutions for some values of parameters of the theory, however a range of parameters that lead only to singular solutions is wider than in the detailed balance scenario. One very interesting special case includes two centres, with a saddle between them (corresponding to three finite critical points). If one takes into account dynamical change of state parameter, which is much more realistic scenario, it might be possible to go from one oscillating solution to another bouncing solution. The problem is that the existence of such solutions depends on the values of coupling constants \(\sigma_{3}\) and \(\sigma_{4}\) and their physical interpretation still remains an open question. Moreover, in both these formulations, bouncing non-singular solutions exist only in case of a non-flat universe \(K\neq 0\). Otherwise the bouncing solutions become singular. ## VII Conclusions The obtained cosmological results presented here are promising and suggest there is a possibility to replace the initial cosmological singularity of GR by finite bouncing solutions. However, one must also consider that there are many problems and contradicting statements in the different formulations and extensions of HL-type theories. Aside from that aspect there are also observational bounds on the existence of the Horava-Lifshitz gravity and the values of its constants and parameters. At present, HL-type theories, including the original one and its extensions, are not yet ruled out by observational Figure 2: Projected phase space of the HL universe in beyond detailed balance formulation with 3 critical points existing[15]. data. However, there now are tight bounds on some parameters of the theory [38] from the binary neutron star merger GW170817 [39]. Therefore, it is possible that further observational data might either rule out some specific scenarios or the whole model. It is also possible that some agreement with observations could provide a better justification for additional theoretical research as it is still hoped that HL gravity could offer a promising cosmological scenario without initial singularity and solve some shortcomings of classical GR, like non-renormalisability and thus problems with quantisation. There are several observational bounds on different regions of the Horava-Lifshitz framework, _e.g._ using data from binary pulsars [40; 41], using general cosmological data [16] and also bounds in the context of dark energy [42]. In the context of dark matter and dark energy there are also bounds on generally Lorentz violation [9; 43]. There is also quite recent research performed in the effective field theory formalism [18] of the extension HL gravity [25]. However, this analysis is reduced to a flat background spacetime, which limits the overall number of parameters. In our papers [44; 45] we have placed new bounds on parameters of Horava-Lifshitz cosmology, in its projectable version with and without imposing detailed balance condition. We found very interesting results on spatial curvature. 
Namely, the original HL model is well fitted with a positive, non-zero spatial curvature at more than \(3\sigma\), whereas when we relaxed the detailed balance condition we again obtained a positive, non-zero spatial curvature, at the \(1\sigma\) level. As this analysis also included BAOs, further investigation of the curvature parameter is needed, which could eventually exclude some of the HL models. In any case, those results seem fascinating in view of future observations and also demonstrate why an analysis restricted to zero spatial curvature is limited. Moreover, non-singular bouncing solutions in the HL universe appear only for non-zero spatial curvature, so these two topics are related. We have to take into account that most of the obtained bounds on the parameters of HL cosmology are similar to those in the \(\Lambda\)CDM model, except for the non-zero curvature parameter. Of course, the \(\Lambda\)CDM model still has fewer parameters and from this point of view should be preferred; it also fits the data well. However, one has to bear in mind the theoretical aspects of Horava gravity which make it a good candidate for an ultraviolet complete theory of gravity. There are also several implications, like the possible resolution of the initial cosmological singularity, so there are still many reasons to keep investigating this model and its extensions.
2310.17691
Pair-density-wave and $s\pm \mathrm{i}d$ superconductivity in a strongly coupled, lightly doped Kondo insulator
We investigate the large Kondo coupling limit of the Kondo-Heisenberg model on one- and two-dimensional lattices. Focusing on the possible superconducting states when slightly doping the Kondo insulator state, we identify different pairing modes to be most stable in different parameter regimes. Possibilities include uniform $s$-wave, pair-density-wave with momentum $\pi$ (in both one and two dimensions) and uniform $s\pm \mathrm{i}d_{x^2-y^2}$-wave (in two dimensions). We attribute these exotic pairing states to the presence of various pair-hopping terms with a ``wrong'' sign in the effective model, a mechanism that is likely universal for inducing pairing states with spatially modulated pair wavefunctions.
Fangze Liu, Zhaoyu Han
2023-10-26T18:00:00Z
http://arxiv.org/abs/2310.17691v2
# Pair-density-wave and \(s\pm\mathrm{i}d\) superconductivity ###### Abstract We investigate the large Kondo coupling limit of the Kondo-Heisenberg model on one- and two-dimensional lattices. Focusing on the possible superconducting states when slightly doping the Kondo insulator state, we identify different pairing modes to be most stable in different parameter regimes. Possibilities include uniform \(s\)-wave, pair-density-wave with momentum \(\pi\) (in both one and two dimensions) and uniform \(s\pm\mathrm{i}d_{x^{2}-y^{2}}\)-wave (in two dimensions). We attribute these exotic pairing states to the presence of various pair-hopping terms with a "wrong" sign in the effective model, a mechanism that is likely universal for inducing pairing states with spatially modulated pair wavefunctions. The Kondo-Heisenberg model is a paradigmatic model of strongly correlated electronic systems, attracting interest as a model of various materials [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11] and as a prototypical model for various exotic physical phenomena including quantum criticality [12; 13; 14; 15; 16], fractionalization [17; 18], odd frequency pairing [19; 20] and pair density wave [20; 21; 22; 23; 24]. Although it has received considerable theoretical investigations, the majority of research efforts have adopted bosonization techniques in one dimension [2; 3; 21; 25; 26; 27] or large-\(N\) techniques [28; 29; 30] in two or higher dimensions [1; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44], and, with a few exceptions [2; 3; 45; 46; 47; 48], focused on relatively weak coupling regimes (i.e. the Kondo coupling strength is not strong compared to the bandwidth of the itinerant electrons). In pursuit of an in-depth understanding of this important model, this work focuses on the large Kondo coupling regime, where the (antiferromagnetic) Kondo coupling \(J_{\mathrm{K}}\) is much greater than the electron hopping amplitude \(|t|\) and the local moment (antiferromagnetic) Heisenberg coupling \(J_{\mathrm{H}}\). In this limit, a mapping to the infinite \(U\) Hubbard model [45] and a strong coupling expansion can be justified, based on which we explicitly derive the low-energy effective model. This model is similar to the \(t-J\) model derived from the strong coupling limit of the Hubbard model but features various additional pair-hopping terms. Based on this effective model, we investigate the superconducting (SC) phase diagram at a filling fraction slightly away from one electron per unit cell and at a low temperature, by means of numerical mean-field (MF) theory. For each set of parameters (\(t/J_{\mathrm{K}}\) and \(J_{\mathrm{H}}/J_{\mathrm{K}}\)), we solve the pairing wavefunction self-consistently and compute the free energy at every Cooper pair momentum \(\mathbf{q}\), allowing us to determine the optimal pairing momentum, and, if the pairing is at certain high-symmetry momentum, the pairing symmetry. The phase diagrams are summarized for the 2D and 1D cases in Figs. 1& 2, respectively. Remarkably, in the 2D scenario, a \(s\pm\mathrm{i}d_{x^{2}-y^{2}}\) pairing state featuring broken time-reversal symmetry is found to be stable within a regime of the phase diagram. More interestingly, within a similar parameter regime for both cases, a pair-density-wave (PDW) [49] with momentum \(\pi\) is found to be the most stable state, in possible agreement with the 1D result obtained by a bosonization method [21] and a numerical density-matrix-renormalization-group study [22]. 
We explain the observed exotic pairing phases from a strong-pairing perspective based on the observation that the leading pair-hopping terms in the effective model have a "wrong" sign, which is likely a general mechanism for such exotic pairing momenta and/or symmetries (see, e.g. Refs. [50; 51], for similar examples). **Model and Method.** In this paper, we study the Kondo-Heisenberg model: \[\hat{H}= -t\sum_{\langle ij\rangle,\sigma}(\hat{c}^{\dagger}_{i\sigma}\hat {c}_{j\sigma}+\mathrm{h.c.})+J_{\mathrm{K}}\sum_{i}\hat{\mathbf{s}}_{i}\cdot\hat{ \mathbf{S}}_{i}\] \[+J_{\mathrm{H}}\sum_{\langle ij\rangle}\hat{\mathbf{S}}_{i}\cdot\hat{ \mathbf{S}}_{j} \tag{1}\] where \(\hat{c}_{i\sigma}\) annihilates a spin-\(\sigma\), itinerant electron on site-\(i\), and \(\hat{\mathbf{s}}_{i}\) and \(\hat{\mathbf{S}}_{i}\) respectively represent the electron spin and the local moment on site \(i\). While this model is definable on any lattice, for concreteness we will focus on the 1D chain and the 2D square lattice. Due to the bipartite nature of the lattices, there is a particle-hole symmetry generated by \(\hat{c}_{i\sigma}\rightarrow(-1)^{i}\hat{c}^{\dagger}_{i\sigma}\), allowing us to concentrate on the hole-doped side, where \(n\equiv\frac{1}{N}\sum_{i}(\hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma})<1\) (\(N\) is the system size). Furthermore, since the sign of \(t\) can be trivially altered by a gauge transformation \(\hat{c}_{i\sigma}\rightarrow(-1)^{i}\hat{c}_{i\sigma}\), we assume \(t>0\) without loss of generality. In this work, we consider the large \(J_{\mathrm{K}}\) limit by regarding \(t/J_{\mathrm{K}}\) and \(J_{\mathrm{H}}/J_{\mathrm{K}}\) as small parameters. To the zeroth order of the analysis and when electron filling \(n<1\), each site has three possible states: \((\mid\Uparrow\downarrow)-\mid\Downarrow\uparrow))/\sqrt{2}\) (Kondo singlet) or \(\mid\varnothing\updownarrow\)), where \(\updownarrow\) represents the spin of the electron whereas \(\updownarrow\) represents the local moment, and \(\varnothing\) indicates the absence of any itinerant electron. Different tensor-product combinations of these states form the low-energy Hilbert space, \(\mathcal{H}_{\mathrm{eff}}\), which is separated from all the other states by an energy gap \(\sim J_{\rm K}\). To describe the low-energy physics within \(\mathcal{H}_{\rm eff}\), we define a set of fermionic operators \(\hat{h}_{i\sigma}\) to effectively describe the holes doped into the system. The mapping between these hole operators and the operators in \(\mathcal{H}_{\rm eff}\) can be locally established as [45] \[\hat{h}_{i\uparrow}\leftrightarrow\frac{1}{\sqrt{2}}(|\Uparrow\downarrow\rangle- |\Downarrow\uparrow\rangle)\langle\varnothing\updownarrow|, \tag{2}\] thus \(\hat{h}_{i\sigma}\) annihilates a spin-\(\sigma\), charge-\(-1\) object relative to the 'vacuum' of \(\mathcal{H}_{\rm eff}\), which refers to the strong coupling Kondo insulator state with a Kondo singlet on every site. Note that, to faithfully map between in the physical Hilbert space \(\mathcal{H}_{\rm eff}\) and the Fock space of the hole operators, it needs to be further recognized that two holes cannot simultaneously occupy the same site. We then perform a perturbation expansion to derive a low-energy effective Hamiltonian. 
Leveraging the above equivalence mapping of Hilbert space, we express the result in terms of the hole operators, including all terms to the zeroth and the first order in powers of \(1/J_{\rm K}\): \[H_{\rm eff}= \hat{P}(\hat{H}_{t}+\hat{H}_{\tau}+\hat{H}_{V})\hat{P} \tag{3}\] \[\hat{H}_{t}= t_{1}\sum_{\langle ij\rangle\sigma}\left(\hat{h}_{i\sigma}^{ \dagger}\hat{h}_{j\sigma}+\text{h.c.}\right)\] \[+t_{2}\sum_{\langle ijk\rangle\sigma}\left(\hat{h}_{i\sigma}^{ \dagger}\hat{h}_{k\sigma}+\text{h.c.}\right)\] (4) \[\hat{H}_{V}= J_{\rm H}\sum_{\langle ij\rangle}\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S} }_{j}+V\sum_{\langle ij\rangle}\hat{n}_{i}^{h}\hat{n}_{j}^{h}\] \[-J^{\prime}\sum_{\langle ijk\rangle}\hat{\mathbf{S}}_{i}\cdot\hat{\bm {S}}_{k}(1-\hat{n}_{j}^{h})\] (5) \[\hat{H}_{\tau}= \sum_{\langle ijk\rangle}\left[t_{1}^{\prime}\sum_{\sigma}\hat{h} _{i\sigma}^{\dagger}\hat{n}_{k}^{h}\hat{h}_{j\sigma}+\tau_{1}\hat{\xi}_{ik}^{ \dagger}\hat{\xi}_{kj}+(i\leftrightarrow k)\right.\] \[\left.\qquad\qquad+t_{2}^{\prime}\sum_{\sigma}\hat{h}_{i\sigma}^ {\dagger}\hat{n}_{j}^{h}\hat{h}_{k\sigma}+\tau_{2}\hat{\xi}_{ij}^{\dagger} \hat{\xi}_{jk}+\text{h.c.}\right] \tag{6}\] where \(\langle ijk\rangle\) represents a triplet of sites in which site \(j\) is a nearest-neighbor of two distinct sites \(i\) and \(k\), \(\hat{n}_{i}^{h}\equiv\sum_{\sigma}\hat{h}_{i\sigma}^{\dagger}\hat{h}_{i\sigma}\) is the hole density on site-\(i\), and \(\hat{P}\) is a projector enforcing the Hilbert constraint, i.e. excluding the states with double occupation of holes on any site. The effective parameters are \(t_{1}=\frac{t}{2}-\frac{3tJ_{\rm H}}{4J_{\rm K}}\), \(t_{2}=\frac{t^{2}}{6J_{\rm K}}\), \(V=(\frac{5t^{2}}{6J_{\rm K}}+\frac{9J_{\rm H}^{2}}{32J_{\rm K}})\), \(J^{\prime}=\frac{J_{\rm H}^{2}}{2J_{\rm K}}\), \(t_{1}^{\prime}=-\frac{tJ_{\rm H}}{8J_{\rm K}}\), \(\tau_{1}=\frac{tJ_{\rm H}}{2J_{\rm K}}\), \(t_{2}^{\prime}=\frac{t^{2}}{12J_{\rm K}}\), and \(\tau_{2}=\frac{t^{2}}{2J_{\rm K}}\). For convenience, we have defined \(\hat{\xi}_{ij}\equiv(\hat{h}_{i\uparrow}\hat{h}_{j\downarrow}+\hat{h}_{j \uparrow}\hat{h}_{i\downarrow})/\sqrt{2}\), the singlet annihilation operator on sites \(i\) and \(j\). We note that a similar mapping and expansion have been done for the Kondo lattice model without Heisenberg coupling [2; 47; 3], and our results agree with the existing literature upon setting \(J_{\rm H}=0\). To mitigate the complexities of this Hamiltonian, we invoke an exact rewriting (for any \(i,j\)) \[\hat{P}\hat{h}_{i\sigma}^{\dagger}\hat{h}_{j\sigma}\hat{P}=(1-\hat{n}_{i \sigma}^{h})\hat{h}_{i\sigma}^{\dagger}\hat{h}_{j\sigma}(1-\hat{n}_{j\bar{ \sigma}}^{h}) \tag{7}\] to equivalently implement the projection. Then, we consider the dilute hole limit, i.e. \(n^{h}\equiv 1-n\ll 1\). In this limit, the expectation values of all pairs of fermion operators, i.e. \(\hat{h}^{\dagger}\hat{h}\), are bounded by \(n^{h}\). Therefore, we neglect the terms consisting of more than four fermion operators, as they are of order \(\mathcal{O}[(n^{h})^{3}]\) and thus insignificant relative to other terms. 
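For quick numerical reference, the effective couplings listed above can be tabulated directly from \((t,J_{\rm K},J_{\rm H})\). The sketch below merely transcribes the quoted expressions (the helper itself is illustrative and not part of any released code):

```python
def effective_couplings(t, J_K, J_H):
    """Effective parameters of H_eff, eqs. (3)-(6), to first order in 1/J_K."""
    return {
        "t1":   t / 2 - 3 * t * J_H / (4 * J_K),
        "t2":   t**2 / (6 * J_K),
        "V":    5 * t**2 / (6 * J_K) + 9 * J_H**2 / (32 * J_K),
        "J'":   J_H**2 / (2 * J_K),
        "t1'":  -t * J_H / (8 * J_K),
        "tau1": t * J_H / (2 * J_K),
        "t2'":  t**2 / (12 * J_K),
        "tau2": t**2 / (2 * J_K),
    }

# Example at the scale used below (J_K = 10): effective_couplings(1.3, 10.0, 4.5)
```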
After these manipulations of the Hamiltonian, writing in momentum space, we obtain a standard interacting Hamiltonian for fermions: \[H_{\rm eff}\approx \sum_{\mathbf{k}\sigma}\epsilon_{\mathbf{k}}\hat{h}_{\mathbf{k}\sigma}^{\dagger}\hat{h}_{\mathbf{k}\sigma}\] \[+\frac{1}{N}\sum_{\begin{subarray}{c}\mathbf{k},\mathbf{k}^{\prime};\\ \sigma,\sigma^{\prime};\mathbf{q}\end{subarray}}\Gamma_{\mathbf{k},\mathbf{k}^{\prime};\mathbf{q}}^{\sigma,\sigma^{\prime}}\,\hat{h}_{\frac{\mathbf{q}}{2}-\mathbf{k}\,\sigma}^{\dagger}\hat{h}_{\frac{\mathbf{q}}{2}+\mathbf{k}\,\sigma^{\prime}}^{\dagger}\hat{h}_{\frac{\mathbf{q}}{2}+\mathbf{k}^{\prime}\,\sigma^{\prime}}\hat{h}_{\frac{\mathbf{q}}{2}-\mathbf{k}^{\prime}\,\sigma} \tag{8}\] where the expressions of the bare dispersion \(\epsilon_{\mathbf{k}}\) and interacting vertex \(\Gamma_{\mathbf{k},\mathbf{k}^{\prime};\mathbf{q}}^{\sigma,\sigma^{\prime}}\) are explicitly given in Supplemental Materials (SM) [52]. Finally, we perform MF analysis for the possible SC states in the system. Due to the spin rotation symmetry, we can focus on the sector with \(\sigma^{\prime}=\bar{\sigma}\), since the other two triplet states are degenerate with the one with \(S^{z}=0\). Then for each possible Cooper pair momentum \(\mathbf{q}\), we adopt the Bogoliubov-de-Gennes MF ansatz (note that in the equation below, \(\mathbf{q}\) is no longer a dummy variable) \[H_{\rm MF}^{(\mathbf{q})}= \sum_{\mathbf{k}\sigma}\epsilon_{\mathbf{k}}\hat{h}_{\mathbf{k}\sigma}^{\dagger}\hat{h}_{\mathbf{k}\sigma}\] \[+\frac{2}{N}\sum_{\mathbf{k},\mathbf{k}^{\prime}}\Gamma_{\mathbf{k},\mathbf{k}^{\prime};\mathbf{q}}^{\downarrow,\uparrow}\left[\hat{h}_{\frac{\mathbf{q}}{2}-\mathbf{k}\downarrow}^{\dagger}\hat{h}_{\frac{\mathbf{q}}{2}+\mathbf{k}\uparrow}^{\dagger}\phi_{\mathbf{k}^{\prime}}^{(\mathbf{q})}+\text{h.c.}\right] \tag{9}\] where \[\phi_{\mathbf{k}^{\prime}}^{(\mathbf{q})}\equiv\langle\hat{h}_{\frac{\mathbf{q}}{2}+\mathbf{k}^{\prime}\uparrow}\hat{h}_{\frac{\mathbf{q}}{2}-\mathbf{k}^{\prime}\downarrow}\rangle_{\rm MF} \tag{10}\] gives the MF self-consistency equation that we will solve by numerical iteration. **Numerical results.** For each set of parameters \(\{t/J_{\rm K},J_{\rm H}/J_{\rm K},n^{h}\}\), we perform MF calculations for various \(\mathbf{q}\) values. For each specific \(\mathbf{q}\), we solve the MF equation Eq. 10 and obtain a solution with the lowest Helmholtz free energy, \(F_{\rm H}^{(\mathbf{q})}\). By comparing the free energies for different \(\mathbf{q}\) values, the optimal SC states can then be determined. The condensation energy can be further obtained by comparing the free energy \(F_{\rm H}^{(\mathbf{q})}\) with that of a system without SC order. For all the computations presented in this study, we set \(J_{\rm K}=10\) as the large energy scale and explore the system's behavior at \(T=1/20\), the lowest temperature at which we can attain well-converged solutions. For these calculations, we properly choose the chemical potential to ensure a hole density of \(n^{h}=1/8\). We systematically explore a range of relatively small values, \(J_{\rm H}\in[0,5]\) and \(t\in[0,3]\). The primary results are presented in the main text, while more detailed data can be accessed in the SM [52]. We first investigate the 2D square lattice, which is of most interest. The results of the condensation energy density are summarized in Fig. 1. In Fig.
1a&b, it is evident that over a broad region at large \(J_{\rm H}\), a PDW state with Cooper pair momentum \(\mathbf{q}=(\pi,0)\) (or \((0,\pi)\)) is energetically more favorable, and we have verified that there is no other competing \(\mathbf{q}\) within the entire Brillouin zone. To provide a better illustration, we select three representative sets of parameters and plot their condensation energy density as a function of \(\mathbf{q}\) in Fig. 1c. For \((t,J_{\rm H})=(1.3,4.5)\), it is clear that the energy minimum is located at \((\pi,0)\) (or \((0,\pi)\)). It should also be noted that the three curves have notable qualitative distinctions. Clearly, two of them (with \((t,J_{\rm H})=(1.3,4.5)\) and \((0.5,2.0)\)) have a small curvature around the minimum and a narrow bandwidth relative to the condensation energy, whereas the other point (with \((t,J_{\rm H})=(1.3,1.0)\)) has the opposite features. This observation suggests that the \(J_{\rm H}\gtrsim 2t\) region is in a strong pairing regime that can be more suitably described by a Bose-Einstein Condensation (BEC) of preformed pairs, whereas the \(J_{\rm H}\lesssim 2t\) region is a weak pairing region and can be effectively described by the Bardeen-Cooper-Schrieffer (BCS) theory. Figure 1: **Condensation energy in 2D square lattice.** (**a**) Condensation energy difference (per site) between \(\mathbf{q}=(\pi,0)\) and \(\mathbf{q}=(0,0)\), i.e. \((F_{\rm H}^{(\pi,0)}-F_{\rm H}^{(0,0)})/N\). The black dashed lines serve as visual guides to demarcate distinct phase regions. (**b**) Examples of condensation energies at several high-symmetry \(\mathbf{q}\)’s along the vertical dotted line cut in (a). (**c**) The full \(\mathbf{q}\) dependence for a few representative choices of parameters for different phases marked in (a). Simulations are done with a \(N=12\times 12\) lattice for (a) and (b), and a \(N=24\times 24\) lattice for (c). Figure 2: **Condensation energy in 1D lattice.** (**a**) Condensation energy difference (per site) between \(q=\pi\) and \(q=0\), i.e. \((F_{\rm H}^{(\pi)}-F_{\rm H}^{(0)})/N\). The black dashed lines serve as visual guides to demarcate distinct phase regions. (**b**) Examples of condensation energies at several high-symmetry \(\mathbf{q}\)’s along the vertical dotted line cut in (a). (**c**) The full \(\mathbf{q}\) dependence for a few representative choices of parameters for different phases marked in (a). Simulations are done with a \(N=128\) chain for (a) and (b), and a \(N=256\) chain for (c). To further investigate the pairing symmetries of the uniform pairing (\(\mathbf{q}=(0,0)\)) states occupying most parts of the phase diagram in Fig. 1a, we compute several order parameters defined as: \[O^{\ell}\equiv\frac{1}{N}\sum_{\mathbf{k}}f_{\mathbf{k}}^{\ell}\phi_{\mathbf{k}}^{(0,0)} \tag{11}\] where \(\ell=s,d_{xy},d_{x^{2}-y^{2}},(p_{x},p_{y})\) are the irreducible representations of the \(D_{4}\) group, and \(f^{\ell}\) are the corresponding form factors. For the pairing symmetries that are non-zero in our case, we take \(f_{\mathbf{k}}^{s}\equiv\cos k_{x}+\cos k_{y}\), and \(f_{\mathbf{k}}^{d_{x^{2}-y^{2}}}\equiv\cos k_{x}-\cos k_{y}\). To detect time-reversal symmetry breaking, we compute \[O_{\mathcal{T}}\equiv 1-\frac{|\sum_{\mathbf{k}}(\phi_{\mathbf{k}}^{(0,0)})^{2}|}{ \sum_{\mathbf{k}}|\phi_{\mathbf{k}}^{(0,0)}|^{2}} \tag{12}\] The amplitudes of these order parameters are plotted in Fig. 3. It is probably not surprising to see that the BCS uniform pairing state is a pure \(s\)-wave state. 
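For concreteness, the bookkeeping behind Eqs. 11 and 12 can be written in a few lines. The sketch below is our own illustration: the gap function it uses is a fabricated \(s+id_{x^{2}-y^{2}}\) test profile, not an actual mean-field solution, and it serves only to show how the order parameters and the time-reversal diagnostic are evaluated on a momentum grid.

```python
import numpy as np

# Sketch: evaluate the pairing order parameters of Eqs. 11 and 12 on an L x L
# momentum grid, given a uniform-q gap function phi_k.  The test gap below is
# a fabricated s + i*d_{x^2-y^2} profile used only to exercise the formulas.

L = 24
kx, ky = np.meshgrid(2 * np.pi * np.arange(L) / L,
                     2 * np.pi * np.arange(L) / L, indexing="ij")

f_s = np.cos(kx) + np.cos(ky)          # extended s-wave form factor
f_d = np.cos(kx) - np.cos(ky)          # d_{x^2-y^2} form factor

phi = 0.3 * f_s + 0.2j * f_d           # illustrative gap, not a MF solution

O_s = np.abs(np.sum(f_s * phi)) / L**2                       # |Eq. 11|, ell = s
O_d = np.abs(np.sum(f_d * phi)) / L**2                       # |Eq. 11|, ell = d_{x^2-y^2}
O_T = 1 - np.abs(np.sum(phi**2)) / np.sum(np.abs(phi)**2)    # Eq. 12

print(f"|O^s| = {O_s:.4f}, |O^d| = {O_d:.4f}, O_T = {O_T:.4f}")
```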
However, interestingly, we find the BEC uniform phase has coexisting \(s\) and \(d_{x^{2}-y^{2}}\) pairing components, and the time-reversal symmetry is also spontaneously broken, suggesting an exotic \(s\pm id_{x^{2}-y^{2}}\) pairing. This finding gains further support through the direct visualization of the pairing wavefunctions in real space in Fig. 4, where it can be directly seen that the relative phase between the pair fields on the nearest-neighbor bonds in the \(x\) and \(y\) directions is \(\pi/2\). It is also remarkable that in the dashed-out regime where the uniform pairing state gives way to the PDW state, the uniform pairing state itself crosses over from \(s+id_{x^{2}-y^{2}}\) to a \(d_{x^{2}-y^{2}}\)-wave state, and has competitive energy compared to the PDW state (Fig. 1c). Although mean-field theories are generically less reliable in 1D due to strong fluctuations, we nonetheless performed the same analysis for the 1D chain case, with the aim of facilitating comparison with existing results. The outcomes, as depicted in Fig. 2, closely resemble the findings in the 2D scenario; the differences compared to the 2D case are 1) the PDW state with \(\mathbf{q}=\pi\) is favorable in an even broader regime, and 2) the BEC uniform pairing state is no longer exotic. It is encouraging to note that a density-matrix-renormalization-group study has found PDW to be the ground state at \(J_{\mathrm{H}}/J_{\mathrm{K}}=1\), \(t/J_{\mathrm{K}}=1/2\) [22; 24], a point that is possibly connected to the PDW regime in Fig. 2a after extrapolating the phase boundary. Figure 3: The magnitude of pairing order parameters, as defined in Eqs. 11&12, for the uniform pairing states (\(\mathbf{q}=(0,0)\)) in 2D. Simulations are done on a \(12\times 12\) lattice. Note that as in Fig. 1a, distinct phase regions are delineated by black dashed lines; within the dashed-out region, the uniform pairing state is less favorable than the PDW state. **A possible mechanism of the exotic SC states.** As seen in Fig. 4, we find that the interesting PDW and \(s\pm id_{x^{2}-y^{2}}\) states have dominant pairing amplitude on the nearest-neighbor bonds. On the other hand, from Fig. 1c it can be seen that the energy gain associated with the pair formation \(\sim|F_{\mathrm{H}}^{(\mathbf{q})}/(n^{h}N)|\) (or the single-particle gap presented in SM [52]) is much higher than the phase stiffness \(\sim\nabla_{\mathbf{q}}^{2}F_{\mathrm{H}}^{(\mathbf{q})}/N\) at the optimal \(\mathbf{q}\), so it can be concluded that these pairs are pre-formed before phase coherence develops. This can be intuitively understood by the presence of a strong \(J_{\mathrm{H}}\) which stabilizes such a local singlet pairing at a relatively high energy scale. This observation motivates us to take a perspective starting from these preformed "bond dimers" by considering an effective dimer theory (subject to hard-core constraints that are relatively unimportant in the dilute limit due to the low collision probability): \[\hat{H}_{\mathrm{dimer}}=-\sum_{\langle ij\rangle,\langle mn\rangle}\left(\tau_{ij,mn}\hat{\xi}_{ij}^{\dagger}\hat{\xi}_{mn}+\mathrm{h.c.}\right), \tag{13}\] where \(\tau_{ij,mn}\) is the effective pair hopping amplitude between bond \(\langle ij\rangle\) and bond \(\langle mn\rangle\). From the form of the effective Hamiltonian in Eq. 3, the leading terms that can contribute to the boson hopping matrix are the \(t_{2}\) terms in Eq. 4 and \(t_{2}^{\prime}\), \(\tau_{2}\) terms in Eq. 6, which can move a
bond dimer to another bond with a shared site. The crucial thing that allows the exotic pairing states to arise in this system is that these dominant terms contribute _negatively_ to the hopping matrix \(\tau_{ij,mn}\), circumventing the Perron-Frobenius theorem, which applies when all matrix entries are non-negative and would otherwise enforce a uniform \(s\)-wave pairing state. Actually, these leading contributions yield an exactly flat boson band at low energy, which opens room for the higher-order perturbations in the boson hopping matrix to lift the degeneracy and lead to an exotic pairing momentum and/or symmetry. This picture for PDW based on the "wrong" signs of certain pair-hopping terms seems to be a general, strong-coupling mechanism applicable to other systems [50; 51]. Figure 4: The magnitude (represented by circle radius) and phase (indicated by color) of the normalized pair field as a function of \(\mathbf{r}\) (the relative coordinate between the two electrons in a Cooper pair), at several representative parameter points marked in Fig. 1a. The \(\phi_{\mathbf{r}=(0,0)}^{(\mathbf{q})}\) is manually set to zero in these plots to respect the Hilbert space constraint. The parameters for these representative points are as follows: (left) \(t=1.3\), \(J_{\mathrm{H}}=1\), \(\mathbf{q}=(0,0)\); (middle) \(t=0.5,J_{\mathrm{H}}=2\), \(\mathbf{q}=(0,0)\); (right) \(t=1.3,J_{\mathrm{H}}=4.5,\mathbf{q}=(\pi,0)\). Simulations are done on a \(24\times 24\) 2D square lattice. **Acknowledgement.** We thank Steven Kivelson and Srinivas Raghu for helpful discussions. This work is funded by the Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under contract DE-AC02-76SF00515 at Stanford.
2302.09866
Hydrodynamic limit of the Schelling model with spontaneous Glauber and Kawasaki dynamics
In the present article we consider the Schelling model, an agent-based model describing a segregation dynamics when we have a cohabitation of two social groups. As for several social models, the behavior of the Schelling model was analyzed along several directions, notably by exploiting theoretical physics tools and computer simulations. This approach led to the conjecture of a phase diagram in which the different social groups are either segregated into two large clusters or mixed. In this article, we describe and analyze a perturbation of the Schelling model as a particle system by adding a Glauber and a Kawasaki dynamics to the original Schelling dynamics. As far as the authors know, this is the first rigorous mathematical analysis of the perturbed Schelling model. We prove the existence of a hydrodynamic limit described by a reaction-diffusion equation with a discontinuous non-linear reaction term. The existence and uniqueness of the solution are non-trivial and the analysis of the limit PDE is interesting in its own right. Based on our results, we conjecture, as in other variations of this model, the existence of a phase diagram in which we have a mixed, a segregated and a metastable segregation phase. We also describe how this phase transition can be viewed as a transition between a relevant and an irrelevant disorder regime in the model.
Florent Barret, Niccolo Torri
2023-02-20T09:59:08Z
http://arxiv.org/abs/2302.09866v4
# Hydrodynamic limit of the Schelling model with spontaneous Glauber and Kawasaki dynamics ###### Abstract In the present article we consider the Schelling model, an agent-based model describing a segregation dynamics when we have a cohabitation of two social groups. As for several social models, the behavior of the Schelling model was analyzed along several directions, notably by exploiting theoretical physics tools and computer simulations. This approach led to the conjecture of a phase diagram in which the different social groups are either segregated into two large clusters or mixed. As far as the authors know, a rigorous mathematical analysis of some aspects of the model has been made by Holden and Sheffield in [12]. In this article, we describe and analyze a perturbation of the Schelling model as a particle system, by adding a Glauber and a Kawasaki dynamics to the original Schelling dynamics. We prove the existence of a hydrodynamic limit described by a reaction-diffusion equation with a discontinuous non-linear reaction term. The existence and uniqueness of the solution are non-trivial and the analysis of the limit PDE is interesting in its own right. Based on our results, we conjecture, as in other variations of this model, the existence of a phase diagram in which we have a mixed, a segregated and a metastable segregation phase. _2010 Mathematics Subject Classification_: 60K35, 82C22, 82D99. _Keywords_: Schelling model, particle systems, hydrodynamic limit, reaction-diffusion equation, Ising model. ## 1 Introduction Schelling's model of segregation was introduced by Thomas Schelling in 1971 [21, 22]. The original model is defined on a square grid of \(N^{2}\) sites (or, more generally, on a regular graph with \(N\) sites) where agents (individuals) belonging to two groups are placed. Each agent located at a given site of the grid compares its group with the group of its neighbors. More precisely, we fix a tolerance threshold \(T\in[0,1]\). We call \(r_{x}\) the fraction of neighbors belonging to the agent's group at site \(x\) and we say that the agent is satisfied if \(r_{x}\geq T\). If the agent is unsatisfied, then it _moves_ to a site that makes it satisfied. Several variations of this model exist, and these variations depend on several parameters [21]: 1. the neighborhood (its size, its geometry), 2. the initial distribution of the agents, 3. the choice of the satisfaction condition (e.g. the value of the tolerance parameter, or one could introduce a different tolerance for each group), 4. the local dynamics between agents (swapping between two unsatisfied agents, or between the exterior and the grid...) In the original model, some sites are assumed to be empty. Several variants of Schelling's model have been considered in the recent literature in order to study the behavior of the model when the fundamental parameters are modified. We refer to [18] for a complete overview of the subject. Among the different variations, let us mention that there can be more than two groups of agents [12], and/or that the Schelling dynamics can be perturbed: each site has a positive probability to switch regardless of its satisfaction. The main concern is the behavior of the model for large times: does the model reach a stationary state? A stationary distribution? If so, what are the features of this equilibrium?
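To fix ideas, the satisfaction rule described above can be implemented in a few lines. The following sketch is our own illustration (it assumes a fully occupied periodic grid and a Moore neighborhood, choices made here only for simplicity); it computes \(r_{x}\) and flags the unsatisfied agents of a random configuration.

```python
import numpy as np

# Sketch: the Schelling satisfaction rule on a periodic L x L grid fully
# occupied by two groups (0/1).  r[x] is the fraction of neighbors sharing the
# group of the agent at x; an agent is satisfied when r[x] >= T.  The Moore
# (8-site) neighborhood and the parameter values are illustrative only.

rng = np.random.default_rng(0)
L, T = 50, 0.5
eta = rng.integers(0, 2, size=(L, L))          # two groups, no empty sites

neighborhood = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]

same = np.zeros((L, L))
for di, dj in neighborhood:
    same += (np.roll(eta, shift=(di, dj), axis=(0, 1)) == eta)
r = same / len(neighborhood)

unsatisfied = r < T
print(f"fraction of unsatisfied agents: {unsatisfied.mean():.3f}")
```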
A common result of the considered variants is the existence of three stationary states separated by two critical thresholds \(T_{f}\) and \(T_{c}\) (a _frozen_ state \(T<T_{f}\), a _segregated_ state \(T_{f}<T<T_{c}\) and a _mixed_ state \(T>T_{c}\)) towards which the system evolves, suggesting a _universal_ behavior of the model. When the system is open (agents can change group) the behavior of the system is expected to be symmetric with respect to \(T=1/2\), see [19]. In the present paper, we approach the model from a physical point of view, by interpreting the agent dynamics as a system of interacting particles. This approach was adopted by the physics community to study this model, see for instance [3, 8, 11, 19]. In particular we consider the setting where the points of the grid (a discrete torus \(\mathbb{T}_{N}^{d}:=\left(\mathbb{Z}/N\mathbb{Z}\right)^{d}\), \(d\geq 1\)) are fully occupied and unsatisfied agents flip their state if it makes them satisfied (Glauber dynamics). The size of the neighborhood taken into account to compute the fraction \(r_{x}\) grows at most logarithmically with \(N\). Moreover, we introduce random perturbations, either by flipping the state of an agent at rate \(\beta\) (spontaneous Glauber dynamics) or by exchanging the positions of two agents at rate \(\alpha N^{2}\) (accelerated Kawasaki dynamics). To summarize, we assume the following features: 1. the neighborhood size is going to infinity with \(N^{d}\), the number of sites, 2. the initial distribution of the agents is fixed (deterministic) and converges as \(N\) goes to infinity, 3. we fix the tolerance parameter \(T\in[0,1]\), 4. we introduce two random perturbations of the Schelling mechanism: regardless of their satisfaction, a site can change type (spontaneous Glauber dynamics), and a site can swap type with a nearest-neighbor (spontaneous accelerated Kawasaki dynamics). Our main result (Theorem 3.1) is to prove, by rescaling space by \(\frac{1}{N}\), a hydrodynamic limit described by a reaction-diffusion equation, and to give a complete description of the limit PDE that we obtain. In the case where the size of the neighborhood stays finite in the limit, we obtain a classical reaction-diffusion equation; this is the case where the interaction term stays finite and thus microscopic. However, when the size of the neighborhood goes to infinity we get a non-linearity (the reaction term) which is discontinuous at two points. In this case, the interaction of the Schelling dynamics takes into account more and more agents but, in the limit, the reaction term is still purely local. In this "mesoscopic" limit, the existence and uniqueness of the solution of the reaction-diffusion equation with discontinuities is one of the major technical points of the paper. More precisely, the system does not have a unique solution for some classes of initial conditions and some values of \(\beta\), the parameter tuning the spontaneous Glauber dynamics. Finally, we conjecture the existence of a rich phase diagram in which, beyond a mixed and a segregated phase, there is a transition phase in between. In this phase, the system has the potential to show a metastable segregation: the mixed phase should be the most stable one, but symmetric, locally stable segregated phases also exist. The critical points depend on the parameters \(\beta\) and \(T\) but not on \(\alpha\), see Figure 1.
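Before turning to the precise definitions of Section 2, the following sketch (our own illustration, with an arbitrary small neighborhood and arbitrary parameter values) makes the three elementary moves concrete: it performs one Gillespie step of the perturbed dynamics, choosing among a satisfying Schelling flip (rate 1 per eligible site), a spontaneous flip (rate \(\beta\) per site) and a nearest-neighbor swap (rate \(\alpha N^{2}\) per pair).

```python
import numpy as np

# Sketch: one Gillespie step of the perturbed Schelling dynamics on a fully
# occupied periodic L x L grid.  Rates: 1 for each agent that is unsatisfied
# but would be satisfied after a flip, beta for a spontaneous flip of any
# agent, alpha*N^2 for each nearest-neighbor swap.  Neighborhood and
# parameter values are illustrative only.

rng = np.random.default_rng(1)
L, T, alpha, beta = 30, 0.45, 0.5, 0.1
N = L                                            # linear size of the torus
eta = rng.integers(0, 2, size=(L, L))

nbhd = [(-1, 0), (1, 0), (0, -1), (0, 1)]        # illustrative neighborhood

def r_field(eta):
    same = sum((np.roll(eta, s, axis=(0, 1)) == eta) for s in nbhd)
    return same / len(nbhd)

def gillespie_step(eta):
    r = r_field(eta)
    flippable = (r < T) & (r <= 1 - T)           # a flip makes the agent satisfied
    n_sites, n_pairs = L * L, 2 * L * L          # undirected nearest-neighbor pairs
    rates = np.array([flippable.sum(), beta * n_sites, alpha * N**2 * n_pairs])
    dt = rng.exponential(1.0 / rates.sum())
    event = rng.choice(3, p=rates / rates.sum())
    if event == 0:                               # Schelling flip
        i, j = np.argwhere(flippable)[rng.integers(flippable.sum())]
        eta[i, j] ^= 1
    elif event == 1:                             # spontaneous Glauber flip
        i, j = rng.integers(0, L, size=2)
        eta[i, j] ^= 1
    else:                                        # Kawasaki swap with a neighbor
        i, j = rng.integers(0, L, size=2)
        di, dj = nbhd[rng.integers(len(nbhd))]
        eta[i, j], eta[(i + di) % L, (j + dj) % L] = \
            eta[(i + di) % L, (j + dj) % L], eta[i, j]
    return eta, dt

eta, dt = gillespie_step(eta)
print(f"time increment of the first event: {dt:.2e}")
```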
A rigorous proof of the phase diagram requires a delicate analysis of the local dynamics that goes beyond the techniques used in the present paper. We reserve this for future work. Let us stress that in [12] the authors also prove the convergence of a discrete model of Schelling dynamics to the solution of a reaction equation (without diffusion), called a continuous Schelling dynamics. We point out that the model is quite different since, in their work, the authors consider a macroscopic neighborhood (which gives, in the limit, an integro-differential equation), do not assume any spontaneous random perturbation (either Glauber or Kawasaki) and consider a model with \(M\geqslant 2\) groups. Also, the authors consider a fixed tolerance parameter of \(T=\frac{1}{2}\) and assume that the initial configuration is given by random independent uniform variables. The proof of the convergence is based on a coupling between the discrete and the continuous Schelling dynamics. Our method of proof is based on the relative entropy method in the framework developed by Jara and Menezes in [14], [13], and also used by Funaki and Tsunoda in [7] for a finite number of particles in the interaction. However, in our setting, we need to improve their bounds to cover the case where the number of particles in the interaction is going to infinity. More precisely, in order to use the relative entropy method, a central step is the control of \(\partial_{t}\mathcal{H}_{N}(t)\), the derivative in time of the relative entropy between the law of the process \(\mu_{N}(t)\) and a discrete measure which approximates the density solution of the reaction-diffusion equation, see Proposition 4.3 and (4.13). Since the number of particles in the interaction grows with \(N\), the size of the system, we need to retrace the bounds obtained in [14], [13] and [7] by taking into account the size of the interactions. This is done in Theorem 4.5. With our bound (4.13), we get that as soon as the diameter of the interaction grows at most as \(\delta(\log(N))^{1/d}\) (see Assumption 1), the relative entropy is \(O(N^{d-\varepsilon})\) for some \(\varepsilon>0\) (see Equation (4.14)), which entails that the empirical measure is close in probability to a deterministic discrete process defined by Equation (4.10). To complete the proof of the main result (Theorem 3.1), we also prove that this deterministic discrete process, defined by Equation (4.10), converges to a solution of a limiting reaction-diffusion equation (Equation (3.3)). This is done in two steps: we first prove that the limiting PDE has a solution (in Proposition 6.3), and that, for some class of initial conditions, this solution is locally unique (in Proposition 6.4). The existence result is obtained via an approximating sequence of smooth non-linearities which are natural in our framework (defined by Equation (6.13)). Note that we do not use the deterministic process defined by Equation (4.10), which is discrete in space. The second step is therefore to prove that the deterministic process, defined by Equation (4.10), has accumulation points in a uniform norm on compact sets which are all solutions of Equation (3.3) (this is Theorem 7.1). If we have uniqueness for solutions of Equation (3.3), we have the main result. We establish local uniqueness for (3.3) only for a class of initial conditions (in Proposition 6.4); to do so, we use and adapt arguments of Gianni [10] and Deguchi [4] (who prove existence and uniqueness with only one discontinuity).
Note that for some initial conditions, (3.3) does not have a unique solution, see Remark 6.2 for a concrete example. It would be therefore quite interesting to understand if for such initial conditions, the empirical measure process converges in some sense. We also reserve this for future work. Note also that, still in [12], the continuous Schelling dynamics does not have a unique solution for all initial conditions. However, starting from a random Gaussian field, the authors prove the solution exists and is a.s unique. A detailed plan of the method of proof and the article is given at the end of Section 3 containing the main result and assumptions. In the following Section, we define the model. ### Acknowledgments The authors thank _Antoine Lucquiaud_ and _Alessandra Faggionato_ for the fruitful discussion on the model and on the article. The authors also thank _Oriane Blondel_ for pointing out reference [12]. This research has been conducted within the FP2M federation (CNRS FR 2036) and as part of the project Labex MME-DII (ANR11-LBX-0023-01). ## 2 The model ### Configurations For \(N\in\mathbb{N}=\{1,2,3,\ldots\}\) we let \(\mathbb{T}_{N}^{d}=(\mathbb{Z}/N\mathbb{Z})^{d}\) be the discrete torus and let \(\Omega_{N}=\{0,1\}^{\mathbb{T}_{N}^{d}}\) be the space of all possible configurations. We call \(\eta\in\Omega_{N}\) a configuration and \(i\in\mathbb{T}_{N}^{d}\) a site. Let \(\mathcal{V}_{N}\subset\mathbb{T}_{N}^{d}\backslash\{0\}\) be a bounded set. We say that two sites \(i,j\in\mathbb{T}_{N}^{d}\) are neighbors if \(i-j\in\mathcal{V}_{N}\). We denote \(K_{N}=|\mathcal{V}_{N}|\) its cardinality. For a configuration \(\eta\in\Omega_{N}\) and a site \(i\in\mathbb{T}_{N}^{d}\) we let \[r_{i}(\eta)=\frac{1}{K_{N}}\sum_{j\in\mathcal{V}_{N}}\mathbbm{1}_{\{\eta_{i}= \eta_{i+j}\}}\qquad\text{and}\qquad\rho_{i}(\eta)=\frac{1}{K_{N}}\sum_{j\in \mathcal{V}_{N}}\eta_{i+j}. \tag{2.1}\] The quantity \(\rho_{i}(\eta)\) is the mean field of \(\eta\) on the neighborhood \(\mathcal{V}_{N}+i\). Let us observe that \(\rho_{i}(\eta)\) is independent of \(\eta_{i}\). We note that \[r_{i}(\eta)=\rho_{i}(\eta)\mathbbm{1}_{\{\eta_{i}=1\}}+(1-\rho_{i}(\eta)) \mathbbm{1}_{\{\eta_{i}=0\}}. \tag{2.2}\] For a given configuration, we now introduce the definition of _stable, unstable_ and _potentially stable site_. **Definition 2.1**.: _For a given site \(i\), let us denote \(\eta^{i}\) the configuration where we change \(\eta_{i}\) to \(1-\eta_{i}\)._ _Let \(T\in[0,1]\). If \(r_{i}(\eta)<T\), the site \(i\) is said unstable for \(\eta\), otherwise if \(r_{i}(\eta)\geqslant T\) the site is said stable for \(\eta\). An unstable site \(i\) for \(\eta\) which is stable for \(\eta^{i}\) is said potentially stable._ Note that \(r_{i}(\eta^{i})=1-r_{i}(\eta)\). Thus 1. a site \(i\) is potentially stable if and only if \(r_{i}(\eta)<T\) and \(r_{i}(\eta)\leqslant 1-T\). In particular if \(T\leqslant\frac{1}{2}\) an unstable site for \(\eta\) is automatically potentially stable. 2. if \(T>\frac{1}{2}\) and \(1-T<r_{i}(\eta)<T\), we have \(r_{i}(\eta^{i})<T\) and the site \(i\) is unstable for \(\eta\) and \(\eta^{i}\). ### Infinitesimal generator, construction of the process Fix \(\alpha>0\) and \(\beta>0\). Let us consider the following dynamics: starting from a configuration \(\eta\) 1. if a site \(i\) is potentially stable, we flip the value at \(i\) with rate \(1\), 2. two nearest-neighbors \(i\) and \(j\) exchange their values with rate \(\alpha N^{2}\) (accelerated Kawasaki dynamics), 3. 
a site \(i\) can change its value at rate \(\beta>0\) (spontaneous Glauber dynamics). This dynamics defines an infinitesimal generator \(\mathcal{L}_{N}\) defined for \(F\) cylinder function on \(\Omega_{N}\) by \[\mathcal{L}_{N}F(\eta)=\sum_{i\in\mathbb{T}_{N}^{d}}(F(\eta^{i})-F(\eta))( \mathbbm{1}_{\{r_{i}(\eta)<T,r_{i}(\eta)\leqslant 1-T\}}+\beta)+\alpha N^{2} \sum_{\begin{subarray}{c}i,j\in\mathbb{T}_{N}^{d}\\ |i-j|=1,\end{subarray}}(F(\eta^{ij})-F(\eta)), \tag{2.3}\] where \(\eta^{ij}\) is the configuration where the values at site \(i\) and \(j\) have been exchanged. The following proposition states that the process is well defined, since the state space is finite. **Proposition 2.1**.: _Given an initial configuration \(\eta_{0}\), \(\mathcal{L}_{N}\) is the infinitesimal generator of a Feller process, denoted \((\eta^{N}(t))_{t\geqslant 0}\)._ We let \(\mu_{t}^{N}\) be the distribution of \(\eta^{N}(t)\). **Remark 2.1**.: _In this article we focus on the compact setting (torus) because a non compact framework, as \(\mathbb{R}^{d}\), presents technical problems for the convergence of the process, nevertheless the discrete model can be well defined on \(\mathbb{Z}^{d}\) (cf. (2.3) and Proposition 2.1)._ ## 3 Main results We let \(u_{0}^{N}(i):=\mathbb{E}_{\mu_{N}}\big{[}\eta_{i}^{N}(0)\big{]}\), \(i\in\mathbb{T}_{N}^{d}\) be the initial distribution of our process, that is, \(\eta_{i}^{N}(0)\) is distributed as a Bernoulli of parameter \(u_{0}^{N}(i)\). For a vector \(v\), \(|v|\) denotes its euclidean norm and \(|v|_{\infty}\) its uniform norm. **Assumption 1**.: _Assumptions on \(\mathcal{V}_{N}\)._ 1. _Let_ \(\ell_{V}\) _be the diameter of_ \(\mathcal{V}\)_. Then,_ \(\ell_{V}\leqslant\delta\big{(}\log N\big{)}^{\frac{1}{d}}\) _for some_ \(\delta>0\)_._ 2. _If_ \(d=1\)_, suppose that_ \(\mathcal{V}_{N}\subset\mathbb{Z}\backslash\mathbb{N}\)_._ **Assumption 2**.: _Assumptions on \(u_{0}^{N}\)._ 1. _There exists_ \(\varepsilon>0\) _such that_ \(\varepsilon\leqslant u_{0}^{N}(i)\leqslant 1-\varepsilon\) _uniformly on_ \(i\in\mathbb{T}_{N}^{d}\) _and_ \(N\in\mathbb{N}\)_._ 2. _There exists_ \(C_{0}>0\) _independent of_ \(N\) _such that_ \(|\nabla u_{0}^{N}(i)|_{\infty}\leqslant\frac{C_{0}}{N}\)_, where_ \(\nabla u_{0}^{N}(i)=(u_{0}^{N}(i+e_{k})-u_{0}^{N}(i))_{k=1}^{d},\) _with_ \(e_{k}\in\mathbb{Z}^{d}\) _the unit vector of direction_ \(k\)_._ 3. _Let_ \(\upsilon_{u_{0}^{N}}^{N}=\bigotimes_{i\in\mathbb{T}_{N}^{d}}\mathrm{B}\big{(}u _{0}^{N}(i)\big{)}\) _be the law of a sequence of independent Bernoulli of parameter_ \(u_{0}^{N}(i)\)_. Suppose that_ \(\mathcal{H}(\mu_{0}^{N}\,|\,\upsilon_{u_{0}^{N}(0)}^{N})=O(N^{d-\varepsilon_{0 }})\) _for some_ \(\varepsilon_{0}>0\) _small, where_ \(\mathcal{H}(\mu\,|\,v)\) _is the relative entropy of_ \(\mu\ll v\)_,_ \[\mathcal{H}(\mu\,|\,v):=\int\frac{\mathrm{d}\mu}{\mathrm{d}v}\log\bigg{(}\frac {\mathrm{d}\mu}{\mathrm{d}v}\bigg{)}\mathrm{d}v\] (3.1) 4. _Let_ \(\widetilde{u}_{0}^{N}(x)\) _be the linear interpolation on_ \(\mathbb{T}^{d}=(\mathbb{R}/\mathbb{Z})^{d}\)_, the_ \(d\) _dimensional torus, of_ \(u_{0}^{N}(i)\) _such that_ \(\widetilde{u}_{0}^{N}(i/N)=u_{0}^{N}(i)\)_. Then, there exists_ \(u_{0}\in\mathcal{C}(\mathbb{T}^{d})\) _such that_ \(\widetilde{u}_{0}^{N}\) _converges to_ \(u_{0}\) _in_ \(\mathcal{C}(\mathbb{T}^{d})\)_._ **Remark 3.1**.: _The assumption 1(2.) is only technical and it could be removed by considering the dimension \(d=1\) separately from the rest of the dimensions, cf. 
Remark 4.3._ Define \[\pi_{t}^{N}=\pi_{t}^{N}(\eta,\mathrm{d}v)=\frac{1}{N^{d}}\sum_{i\in\mathbb{T}_{N}^{d}}\eta_{i}(t)\delta_{\frac{i}{N}}(\mathrm{d}v) \tag{3.2}\] the empirical measure associated to the Markov process \(\eta\) where the space is rescaled by \(\frac{1}{N}\), which is a positive measure on \(\mathbb{T}^{d}\). We now state our main result, which concerns the convergence in probability of the empirical measure. **Theorem 3.1**.: _Under Assumptions 1 and 2, if \(u_{0}\) (the limit of the initial condition, according to Assumption 2-4) is such that_ 1. \(u_{0}\in\mathcal{C}^{1}(\mathbb{T}^{d})\)_, with_ \(\nabla u_{0}\) _Lipschitz,_ 2. \(\nabla u_{0}(p)\neq 0\) _for_ \(p=\min(T,1-T)\) _and_ \(p=1-\min(T,1-T)\)_,_ _then the reaction-diffusion equation_ \[\begin{cases}\partial_{t}u(t,x)=2\alpha\Delta u(t,x)+\beta(1-2u(t,x))+g_{\infty}(u(t,x)),\\ u(0,x)=u_{0}(x),\end{cases} \tag{3.3}\] _has a unique density solution \(u=u(t,x)\) with \((t,x)\in[0,\tau]\times\mathbb{T}^{d}\) for some \(\tau>0\), where \(\mathbb{T}^{d}\) is the \(d\)-dimensional torus and \(g_{\infty}(p):=(1-p)\mathbbm{1}_{\{1-p<\min(T,1-T)\}}-p\mathbbm{1}_{\{p\leqslant\min(T,1-T)\}}\). Moreover, for every test function \(\varphi:\mathbb{T}^{d}\to\mathbb{R}\) and for every \(\varepsilon>0\),_ \[\lim_{N\to+\infty}\mu^{N}\bigg{(}\left|\langle\pi^{N},\varphi\rangle-\langle u,\varphi\rangle\right|>\varepsilon\bigg{)}=0\,,\qquad\forall\,t\in[0,\tau]\,, \tag{3.4}\] _where \(\langle\pi^{N},\varphi\rangle\) and \(\langle u,\varphi\rangle\) denote the integral of \(\varphi\) with respect to the measure \(\pi^{N}\) or \(u(x)\mathrm{d}x\) respectively._ **Remark 3.2**.: _If the solution of (3.3) is not unique, which is not a technical difficulty but a real possibility for some initial conditions (see Remark 6.2 for a concrete example), then any accumulation point of the sequence of empirical measures is a solution of (3.3)._ ### Organisation of the paper To prove Theorem 3.1 we first prove that the empirical measure is close to a discrete measure \(u^{N}\) which is a solution of a discrete analogue of (3.3); this is Theorem 4.1. Its proof is based on an entropy method approach in which the relative entropy between \(\mu^{N}\) and \(v^{N}_{u^{N}}\) is controlled, see Theorem 4.2. Even if this technique is quite standard in particle system theory, some new technical estimates arising from the geometry of the system are needed; this is Theorem 4.5. In Section 5 we discuss some central technical estimates about \(u^{N}\), in order to describe the behavior of the discrete model. Then, in Section 6 we discuss the existence and uniqueness of (3.3) and in Section 7 we show the convergence of the \(u^{N}\) toward the density \(u\), completing the proof of Theorem 3.1. We stress that the proof of the existence and uniqueness is not standard and the analysis of this PDE is interesting in its own right. In Section 3.2 we state our conjecture on the phase diagram of the model. ### Conjecture on the phase diagram In this section, we discuss the phase diagram that describes the mixed and segregated phases. We start by rewriting Equation (3.3) in a more convenient form. Set \(p_{0}(T):=\min(T,1-T)\in[0,\frac{1}{2}]\).
For \(p\in[0,1]\) we define \[\gamma_{\infty,\beta}(p)=\begin{cases}\beta\left(p-\frac{1}{2} \right)^{2}+\frac{1}{2}\left(p^{2}-p_{0}(T)^{2}\right)&\text{for }0\leqslant p<p_{0}(T),\\ \beta\left(p-\frac{1}{2}\right)^{2}&\text{for }p_{0}(T)\leqslant p<1-p_{0}(T), \\ \beta\left(p-\frac{1}{2}\right)^{2}+\frac{1}{2}\left((1-p)^{2}-p_{0}(T)^{2} \right)&\text{for }1-p_{0}(T)\leqslant p\leqslant 1.\end{cases} \tag{3.5}\] Our conjecture is based on the analysis of \(\gamma_{\infty,\beta}\) and it is represented in Figure 1. We observe that \(\gamma_{\infty,\beta}\) is continuous and satisfies \(\gamma_{\infty,\beta}(p)=\gamma_{\infty,\beta}(1-p)\). For \(p\neq p_{0}(T),1-p_{0}(T)\), we have that \(\gamma^{\prime}_{\infty,\beta}(p)=-\beta(1-2p)-g_{\infty}(p)\) and (3.3) can be written as \[\partial_{t}u(t,x)=2\alpha\Delta u(t,x)-\gamma^{\prime}_{\infty,\beta}(u(t,x)). \tag{3.6}\] To discuss the phase transition we can look at the structure of \(\gamma_{\infty,\beta}(p)\). The function \(p\mapsto\beta\left(p-\frac{1}{2}\right)^{2}+\frac{1}{2}\left(p^{2}-p_{0}(T)^{2}\right)\) has a unique minimum at \(p=p^{\ell}:=\frac{\beta}{1+2\beta}\). Therefore, if \(0\leqslant p^{\ell}<p_{0}(T)<\frac{1}{2}\), the function \(\gamma_{\infty,\beta}\) has three regular minima: \(p^{c}:=\frac{1}{2}\), \(p^{\ell}\) and \(p^{r}:=1-p^{\ell}\). Note that \[\gamma_{\infty,\beta}(p^{c})=0\quad\text{ and }\quad\gamma_{\infty,\beta}(p^{r})= \gamma_{\infty,\beta}(p^{\ell})=\frac{\beta}{4(1+2\beta)}-\frac{p_{0}(T)^{2}} {2}\,.\] Then we get that if \(p_{0}(T)<p^{m}:=\sqrt{\frac{\beta}{2(1+2\beta)}}\), we have \(\gamma_{\infty,\beta}(p^{c})<\gamma_{\infty,\beta}(p^{\ell})\) and if \(p^{m}<p_{0}(T)\), we have that \(\gamma_{\infty,\beta}(p^{c})>\gamma_{\infty}(p^{\ell})\). If \(p^{\ell}>p_{0}(T)\), \(p^{c}\) is the only minimum. The two thresholds for \(p_{0}(T)\) are then \(p^{\ell}\) and \(p^{m}\), see Figure 1. Since \(p^{\ell}<p^{m}\), we have the following picture: as \(T\) is close to \(0\) and below \(p^{\ell}\), we have a unique minimum of \(\gamma_{\infty,\beta}\), so that typical configurations are close to \(p=1/2\) which is of lowest energy \(\gamma_{\infty,\beta}\). It means that, at equilibrium, we expect a configuration balanced between \(0\) and \(1\) and we do not have segregation. Then, as \(T\) goes above the threshold \(p^{\ell}\) but stays below \(p^{m}\), other minima at \(p=p^{\ell}\) and \(p=p^{r}\) appear, and these two configurations are metastable since their energy is higher, so we can have segregation for a small proportion of the time. The next threshold is \(p^{m}\), at which the two metastable configurations become stable and \(p=1/2\) is the metastable one so that we expect stable segregation. For \(T\) above \(\frac{1}{2}\) the picture is symmetric. **Remark 3.3**.: _The hydrodynamic limit (3.3) has, at least, two different formulations as a gradient flow:_ 1. _in the classical_ \(L^{2}(\mathbb{T}^{d})\) _setting, with the potential_ \(\mathcal{F}\) _defined for_ \(u:\mathbb{T}^{d}\to[0,1]\)__ \[\mathcal{F}(u)=\int\alpha\|\nabla u\|^{2}+\gamma_{\infty,\beta}(u),\] (3.7) _Equation (_6.2_) can be written as_ \(\partial_{t}u=-\delta\mathcal{F}(u)\) _where_ \(\delta\mathcal{F}\) _denotes the Frechet derivative of_ \(\mathcal{F}\)_._ 2. 
_in a Wasserstein-like setting defined in_ _[_17_]_ _with the entropy potential_ \(\mathcal{H}\)_, for_ \(u:\mathbb{T}^{d}\to]0,1]\)_, and_ \(\xi\ :\ \mathbb{T}^{d}\ \to\ \mathbb{R}\) _(seen as an element in the tangent bundle)_ \[\mathcal{H}(u)=\frac{1}{2}\int(2u\log(2u)-2u+1)\geq 0,\qquad\mathcal{K}(u) \xi=-2\alpha\nabla\cdot(u\nabla\xi)+\frac{\gamma^{\prime}_{\infty,\beta}(u)}{ \log(2u)}\xi,\] (3.8) _Equation (_3.3_) can be written as_ \[\partial_{t}u=-\mathcal{K}(u)(\delta\mathcal{H}).\] _Since_ \(\delta\mathcal{H}=\log(2u)\) _and_ \(\mathcal{K}(u)(\delta\mathcal{H})=-2\alpha\Delta u+\gamma^{\prime}_{\beta, \infty}(u).\)__ _Both formulations could be useful to establish a rigorous proof of the phase diagram given in Figure 1. In particular, along a gradient flow the potential is non-increasing, thus for all \(t>0\), along a solution \(u\) we have \(\mathcal{F}(u(t))\leq\mathcal{F}(u(t=0))\) and if \(u\) converges to a stationary solution \(v\), it must be a stationary point of \(\mathcal{F}\) (i.e. \(\delta\mathcal{F}(v)=0\))._ _For the second formulation, note that_ \[\frac{\gamma^{\prime}_{\infty,\beta}(u)}{\log(2u)}>0,u\in]0,1]\Longleftrightarrow p _{0}(T)<p^{\ell}=\frac{\beta}{1+2\beta}\Longleftrightarrow\beta>\frac{p_{0}(T )}{1-2p_{0}(T)}. \tag{3.9}\] _We are thus in the mixing phase of the diagram and \(\mathcal{K}(u)\) is positive definite in the sense that_ \[\int\xi\mathcal{K}(u)\xi=\int 2\alpha u|\nabla\xi|^{2}+\frac{\gamma^{\prime}_{ \infty,\beta}(u)}{\log(2u)}\xi^{2}\geq 0. \tag{3.10}\] Figure 1: Representation of the different phases of the system as function of the parameter \(p_{0}(T)\in[0,\frac{1}{2}]\). When \(T\) is close to \(0\) or \(1\) (\(p_{0}(T)\in(0,p^{\ell})\)) we do not have segregation (red parts) and typical configurations are provided by a mixing of \(0\) and \(1\). If \(T\) is close to \(1/2\) (\(p_{0}(T)\in(p^{m},\frac{1}{2})\)) we have segregation (green parts): a very large parts of the configuration are composed of \(0\) or \(1\). We have also intermediate values of \(T\) (\(p_{0}(T)\in(p^{\ell},p^{m})\)) for which the segregation is metastable (yellow parts). _Then, one can prove that along a solution \(u\), we get that:_ \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{H}(u)=\int\partial_{t}u\delta\mathcal{H}( u)\leq-\int\gamma^{\prime}_{\infty,\beta}(u)\log(2u)\leq-c\mathcal{H}(u) \tag{3.11}\] _where \(c=\beta-\frac{p_{0}(T)}{1-2p_{0}(T)}>0\), and we get that \(\mathcal{H}(u(t))\leq\mathcal{H}(u(0))e^{-ct}\). Thus \(\mathcal{H}(u(t))\) goes to \(0\) as \(t\to\infty\), this entails that \(u\) converges to the only stationary point of \(\mathcal{H}\) which is the constant \(\frac{1}{2}\). Heuristically, it suggests that an exponential relaxation is taking place in the mixing phase of Figure 1._ ## 4 Relative entropy method Using the _relative entropy_ method, in this section we prove that the empirical measure \(\pi^{N}(t)\) is close to \(u^{N}(t)=\left(u^{N}(t,i)\right)_{i\in\mathbb{T}_{N}^{d}}\) the solution of a suitable discrete PDE. 
In order to state the main result of this section, we observe that the generator \(\mathcal{L}_{N}\) in (2.3) can be written as \(\mathcal{L}_{N}=\mathcal{G}_{N}+2\alpha N^{2}\mathcal{K}_{N}\) where \(\mathcal{G}_{N}\) is the generator which describes the Glauber-Schelling dynamics and \(\mathcal{K}_{N}\) the generator which describes the Kawasaki dynamics, that is, \[\mathcal{G}_{N}F(\eta)= \sum_{i\in\mathbb{T}_{N}^{d}}\Big{(}\mathbbm{1}_{\{r_{i}(\eta)<T,r_{i}(\eta)\leq 1-T\}}+\beta\Big{)}(F(\eta^{i})-F(\eta)), \tag{4.1}\] \[\mathcal{K}_{N}F(\eta)= \frac{1}{2}\sum_{\begin{subarray}{c}i,j\in\mathbb{T}_{N}^{d}\\ |i-j|=1\end{subarray}}(F(\eta^{ij})-F(\eta)). \tag{4.2}\] Let us stress that \(\mathcal{G}_{N}\) can be written as follows \[\mathcal{G}_{N}F(\eta)=\sum_{i\in\mathbb{T}_{N}^{d}}\big{(}c_{i}(\eta)+\beta\big{)}\big{(}F(\eta^{i})-F(\eta)\big{)}, \tag{4.3}\] where \(c_{i}(\eta)\) is a local function which describes the dynamics (Glauber-Schelling dynamics). To be more precise, we write \(c_{i}(\eta)=c_{0}(\tau_{i}\eta)\) where \(c_{0}\) is the flipping rate of a particle at the origin, that is, \[c_{0}(\eta):=\mathbbm{1}_{\{r_{0}(\eta)<T,r_{0}(\eta)\leq 1-T\}} \tag{4.4}\] and \((\tau_{i}\eta)_{j}=\eta_{i+j}\), likewise \(\tau_{i}\) acts on \(u\). Note that \(c_{0}\) is a random variable and takes the value \(0\) or \(1\). We let \[\kappa_{N}=\kappa_{N}(T):=\min\Big{\{}[K_{N}T]-1;[K_{N}(1-T)]\Big{\}}. \tag{4.5}\] We observe that \(\lim_{N\to+\infty}\frac{\kappa_{N}}{K_{N}}=\min(T,1-T)\). Since \(c_{0}(\eta)=\mathbbm{1}_{\{r_{0}(\eta)\leq\frac{\kappa_{N}}{K_{N}}\}}=\mathbbm{1}_{\{r_{0}(\eta)\leq\frac{\kappa_{N}}{K_{N}},\eta_{0}=0\}}+\mathbbm{1}_{\{r_{0}(\eta)\leq\frac{\kappa_{N}}{K_{N}},\eta_{0}=1\}}\), we define \[c_{0}^{+}(\eta):=\mathbbm{1}_{\{\rho_{0}(\eta)\geq 1-\frac{\kappa_{N}}{K_{N}}\}}\qquad\text{and}\qquad c_{0}^{-}(\eta):=\mathbbm{1}_{\{\rho_{0}(\eta)\leq\frac{\kappa_{N}}{K_{N}}\}}, \tag{4.6}\] so that \(c_{0}(\eta)=c_{0}^{+}(\eta)(1-\eta_{0})+c_{0}^{-}(\eta)\eta_{0}\), by (2.2). The functions \(c^{+}\) and \(c^{-}\) can be viewed as the rates of creation and annihilation of a particle at \(i=0\) respectively. For any function \(u=(u_{i})_{i\in\mathbb{T}_{N}^{d}}\) we define \[\upsilon_{u}(\mathrm{d}\eta)=\upsilon_{u}^{N}(\mathrm{d}\eta):=\bigotimes_{i\in\mathbb{T}_{N}^{d}}\mathrm{B}\big{(}u_{i}\big{)} \tag{4.7}\] and we let \(c_{0}^{+}(u)\) and \(c_{0}^{-}(u)\) be the expectations of \(c_{0}^{+}(\eta)\) and \(c_{0}^{-}(\eta)\) under \(v_{u}\), that is, \[c_{0}^{+}(u):=\mathbb{P}_{v_{u}}\!\left(\rho_{0}(\eta)\geqslant 1-\frac{\kappa_{N}}{K_{N}}\right)\qquad\text{and}\qquad c_{0}^{-}(u):=\mathbb{P}_{v_{u}}\!\left(\rho_{0}(\eta)\leqslant\frac{\kappa_{N}}{K_{N}}\right)\!. \tag{4.8}\] We finally define \[G(u):=c_{0}^{+}(u)(1-u_{0})-c_{0}^{-}(u)u_{0}\quad\text{and}\quad G(i,u):=G(\tau_{i}u). \tag{4.9}\] Let \(u^{N}(t)=(u^{N}(t,i))_{i\in\mathbb{T}_{N}^{d}}\) be the solution of \[\begin{cases}\partial_{t}u^{N}(t,i)=2\alpha N^{2}\Delta u^{N}(t,i)+\beta(1-2u^{N}(t,i))+G(i,u^{N})\\ u^{N}(0,i)=u_{0}^{N}(i),\end{cases} \tag{4.10}\] where \(\Delta u^{N}(t,i)\) is the discrete Laplacian on the torus. Note that (4.10) can be interpreted as a discretized version of (3.3). To prove Theorem 3.1 we first define a discrete approximation of \(u\) and we show an equivalent of Theorem 3.1 for \(u^{N}\) defined in (4.10).
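As an aside, (4.10) is a finite system of ODEs and can be integrated numerically; the following sketch (our own illustration, in dimension \(d=1\), with an explicit Euler scheme and a crude Monte Carlo evaluation of the probabilities in (4.8); none of these choices play a role in the proofs) makes the structure of the reaction term \(G(i,u)\) concrete.

```python
import numpy as np

# Sketch: explicit Euler integration of the discrete reaction-diffusion
# system (4.10) in dimension d = 1.  The reaction term G(i, u) of (4.9) is
# evaluated by Monte Carlo: c_0^{+/-}(tau_i u) are the probabilities in (4.8)
# under independent Bernoulli(u_j) marginals.  Scheme, neighborhood V_N and
# parameter values are illustrative choices only.

rng = np.random.default_rng(2)
N, T, alpha, beta = 64, 0.4, 0.5, 0.2
V = np.arange(1, 6)                      # neighborhood V_N = {1, ..., 5}
K = len(V)
kappa = min(int(K * T) - 1, int(K * (1 - T)))

u = 0.5 + 0.3 * np.cos(2 * np.pi * np.arange(N) / N)   # initial profile u_0^N

def G(u, samples=1000):
    out = np.zeros(N)
    for i in range(N):
        p = u[(i + V) % N]                               # marginals on V_N + i
        rho = (rng.random((samples, K)) < p).mean(axis=1)
        c_plus = np.mean(rho >= 1 - kappa / K)
        c_minus = np.mean(rho <= kappa / K)
        out[i] = c_plus * (1 - u[i]) - c_minus * u[i]    # G(tau_i u), cf. (4.9)
    return out

dt = 0.1 / (4 * alpha * N**2)                            # stability restriction
for _ in range(100):
    lap = N**2 * (np.roll(u, 1) + np.roll(u, -1) - 2 * u)  # N^2 * discrete Laplacian
    u += dt * (2 * alpha * lap + beta * (1 - 2 * u) + G(u))
    u = np.clip(u, 0.0, 1.0)                             # numerical safeguard only

print(f"u ranges in [{u.min():.3f}, {u.max():.3f}] after integration")
```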
More precisely, we consider \(u^{N}\) as a measure on \(\mathbb{T}^{d}\), that is, \[u^{N}(t,\mathrm{d}v):=\frac{1}{N^{d}}\sum_{i\in\mathbb{T}_{N}^{d}}u^{N}(t,i)\delta_{\frac{i}{N}}(\mathrm{d}v)\,. \tag{4.11}\] The main result of this section is the following theorem. **Theorem 4.1**.: _Under Assumptions 1 and 2, for every test function \(\varphi:\mathbb{T}^{d}\to\mathbb{R}\) and for every \(\delta>0\) there exists \(\tau>0\) such that_ \[\lim_{N\to+\infty}\mu^{N}\bigg{(}\left|\langle\pi^{N},\varphi\rangle-\langle u^{N},\varphi\rangle\right|>\delta\bigg{)}=0\,,\qquad\forall\,t\in[0,\tau]\,.\] To prove Theorem 4.1, the main ingredient is that the relative entropy of \(\mu_{t}^{N}\) with respect to \(v_{u^{N}(t)}^{N}\) (cf. (4.7)) stays small in time, if it is small at \(t=0\). **Theorem 4.2**.: _Under Assumptions 1 and 2, with \(\delta>0\) sufficiently small, we have that for some \(\varepsilon>0\) small_ \[\mathcal{H}_{N}(t):=\mathcal{H}(\mu_{t}^{N}\,|\,v_{u^{N}(t)}^{N})=O(N^{d-\varepsilon}),\quad\forall\,t\in[0,\tau] \tag{4.12}\] _with \(\tau=\tau(\delta)>0\)._ To prove Theorem 4.2 it is enough to show that \[\partial_{t}\mathcal{H}_{N}(t)\leqslant C\ell_{\mathcal{V}}^{d}\Big{(}\mathcal{H}_{N}(t)+O(N^{d-a})\Big{)} \tag{4.13}\] with \(a\in(0,1)\) if \(d\geqslant 2\) and \(a\in(0,\frac{1}{2})\) if \(d=1\); indeed, in this case Gronwall's inequality gives \[\mathcal{H}_{N}(t)\leqslant\Big{(}\mathcal{H}_{N}(0)+tO(N^{d-a})\Big{)}e^{C\ell_{\mathcal{V}}^{d}t}. \tag{4.14}\] Since \(e^{C\ell_{\mathcal{V}}^{d}t}\leqslant N^{C\delta^{d}t}\) by Assumption 1, the proof of Theorem 4.2 is complete. We prove (4.13) in Section 4.1. Let us observe that Theorem 4.2 implies Theorem 4.1. Indeed, we recall the entropy inequality stated for a set \(\mathcal{A}\) and two measures \(\mu\ll v\), cf. A1.8.2 of [15] or Section 2.2 of [7], \[\mu(\mathcal{A})\leqslant\frac{\log 2+\mathcal{H}(\mu\,|\,v)}{\log\big{(}1+\frac{1}{v(\mathcal{A})}\big{)}}. \tag{4.15}\] For a given test function \(\varphi\) and \(\delta>0\), we let \[\mathcal{A}^{\delta}_{N,t,\varphi}:=\big{\{}\eta\in\Omega_{N}\colon\big{|}\langle\pi^{N},\varphi\rangle-\langle u^{N},\varphi\rangle\big{|}>\delta\big{\}}, \tag{4.16}\] so that the proof follows by Theorem 4.2 and (4.15) if \[\upsilon^{N}_{u^{N}(t)}(\mathcal{A}^{\delta}_{N,t,\varphi})\leqslant e^{-C_{\delta}N^{d}}. \tag{4.17}\] Since \(u^{N}\in(0,1)\) (cf. Proposition 5.2), the proof of (4.17) is model independent and follows line by line the proof of Proposition 2.2 in [7]; we omit the details. **Remark 4.1**.: _Let us note that \(c_{0}\) can be expressed as a polynomial in the variables \(\eta_{j}\), as in relation (1.5) of [7]. This remark will be useful in the sequel of the paper. For this purpose, for \(A\subset\mathcal{V}_{N}\) we denote:_ \[c_{A}^{+}(\eta)=\prod_{j\in A}(1-\eta_{j})\prod_{j\in\bar{A}\cap\mathcal{V}_{N}}\eta_{j}\quad\text{and}\quad c_{A}^{-}(\eta)=\prod_{j\in A}\eta_{j}\prod_{j\in\bar{A}\cap\mathcal{V}_{N}}(1-\eta_{j})=c_{A}^{+}(1-\eta) \tag{4.18}\] _where \(1-\eta\) is the configuration with \((1-\eta)_{i}=1-\eta_{i}\) since \(r_{0}(\eta)=r_{0}(1-\eta)\).
Note that_ \[c_{A}^{+}(\eta)=\begin{cases}1\text{ if }\eta_{j}=0\text{ for }j\in A\text{ and }\eta_{j}=1\text{ for }j\in\bar{A}\cap\mathcal{V}_{N}\\ 0\text{ otherwise}.\end{cases} \tag{4.19}\] _By an abuse of notation, for a function \(u=(u_{j})_{j\in\mathbb{T}_{N}^{d}}\) we also let \(c_{A}^{+}(u)=\prod_{j\in A}(1-u_{j})\prod_{j\in\bar{A}\cap\mathcal{V}_{N}}u_{j}\) and accordingly for \(c_{A}^{-}(u)\). By (4.6) we have that_ \[c_{0}^{+}(\eta)=\sum_{k=0}^{\kappa_{N}}\mathbbm{1}_{\{r_{0}(\eta)=\frac{k}{K_{N}},\eta_{0}=0\}}=(1-\eta_{0})\sum_{\begin{subarray}{c}A\subset\mathcal{V}_{N}\\ |A|\leqslant\kappa_{N}\end{subarray}}c_{A}^{+}(\eta). \tag{4.20}\] _Accordingly, we have_ \[c_{0}^{-}(\eta)=\sum_{k=0}^{\kappa_{N}}\mathbbm{1}_{\{r_{0}(\eta)=\frac{k}{K_{N}},\eta_{0}=1\}}=\eta_{0}\sum_{\begin{subarray}{c}A\subset\mathcal{V}_{N}\\ |A|\leqslant\kappa_{N}\end{subarray}}c_{A}^{-}(\eta). \tag{4.21}\] In the rest of this section we prove Theorem 4.2. The strategy that we use follows the one used to prove the analogous result in [7] and [13], but some extra technicality is required due to the geometry of our problem. ### Proof of Theorem 4.2 To make the notation lighter, for \(t\geqslant 0\), \(j\in\mathbb{T}_{N}^{d}\), \(p\in(0,1)\) and \(u^{N}\) the solution of (4.10), we let \[u_{j}(t):=u^{N}(t,j),\qquad\chi(p):=p(1-p),\qquad\omega_{j}:=\frac{\eta_{j}-u_{j}}{\chi(u_{j})}, \tag{4.22}\] and, more generally, whenever the context is clear we omit the superscript \(N\), so that \(\upsilon^{N}_{u^{N}(t)}\) and \(\mu^{N}_{t}\) will be denoted simply by \(\upsilon_{u(t)}\) and \(\mu_{t}\) respectively. To compare \(\mu\) and \(\upsilon_{u}\) we introduce \[\vartheta_{\alpha}(\mathrm{d}\eta):=\bigotimes_{i\in\mathbb{T}_{N}^{d}}\mathrm{B}\big{(}\alpha\big{)}, \tag{4.23}\] a product of independent Bernoulli measures of parameter \(\alpha\in(0,1)\) defined on the space of configurations. We define \[f_{t}:=\frac{\mathrm{d}\mu_{t}}{\mathrm{d}v_{u(t)}}\qquad\text{and}\qquad\psi_{t}:=\frac{\mathrm{d}v_{u(t)}}{\mathrm{d}\vartheta_{\alpha}}. \tag{4.24}\] We have all the ingredients to state Yau's inequality in our context. The proof is quite standard (cf. proof of Lemma A.1 in [13]), so that it is omitted. **Proposition 4.3**.: _For any \(t\geq 0\) we have that_ \[\partial_{t}\mathcal{H}_{N}(t)\leq-\int\Gamma_{N}\Big{(}\sqrt{f_{t}(\eta)}\Big{)}\,v_{u(t)}(\mathrm{d}\eta)+\int f_{t}(\eta)\left[\mathcal{L}_{N}^{\bullet,v_{u(t)}}\mathbf{1}-\partial_{t}\log\psi_{t}\right]v_{u(t)}(\mathrm{d}\eta), \tag{4.25}\] _where \(\mathcal{L}_{N}^{\bullet,v_{u(t)}}\) is the adjoint of \(\mathcal{L}_{N}\) with respect to the measure \(v_{u(t)}\) and \(\Gamma_{N}(h)(\eta)=\mathcal{L}_{N}h^{2}(\eta)-2h(\eta)\mathcal{L}_{N}h(\eta)\) is the carre du champ operator._ We define the _current_ \(\mathcal{J}_{t}=\mathcal{J}_{t}^{N}(\eta)\) as \[\mathcal{J}_{t}:=\mathcal{L}_{N}^{\bullet,v_{u(t)}}\mathbf{1}-\partial_{t}\log\psi_{t}. \tag{4.26}\] Our main goal is to estimate the current \(\mathcal{J}_{t}\) to control the right hand side of (4.25) and get (4.13). #### 4.1.1 The current \(\mathcal{J}_{t}^{N}\) To control \(\mathcal{L}_{N}^{\bullet,v_{u(t)}}\mathbf{1}\) we have to compute the adjoint of \(\mathcal{K}_{N}\) and of \(\mathcal{G}_{N}\), cf. (4.1) and (4.2). We follow the computations done in [7].
By Lemma 2.4 of [7], we get \[\mathcal{K}_{N}^{\bullet,v_{u(t)}}\mathbf{1}=-\frac{1}{2}\sum_{ \begin{subarray}{c}i,j\in\mathbb{T}_{N}^{d},\\ |i-j|=1\end{subarray}}(u_{i}-u_{j})^{2}\omega_{j}\omega_{i}+\sum_{i\in\mathbb{T }_{N}^{d}}(\Delta u)_{i}\omega_{i} \tag{4.27}\] where \((\Delta u)_{i}=\sum_{j\in\mathbb{T}_{N}^{d},|i-j|=1}(u_{j}-u_{i})\) is the discrete Laplacian. Since \(c_{0}\) satisfies the condition (1.5) of [7] (see Remark 4.1), by Lemma 2.5 of [7] we get \[\begin{split}\mathcal{G}_{N}^{\bullet,v_{u(t)}}\mathbf{1}& =\sum_{i\in\mathbb{T}_{N}^{d}}\left(c_{i}^{+}(\eta)(1-u_{i})-c_{i}^ {-}(\eta)u_{i}+\beta(1-2u_{i})\right)\omega_{i}\\ &=\sum_{i\in\mathbb{T}_{N}^{d}}\left((c_{i}^{+}(\eta)-c_{i}^{+}( u))(1-u_{i})-(c_{i}^{-}(\eta)-c_{i}^{-}(u))u_{i}\right)\omega_{i}\\ &\qquad\qquad\qquad+\sum_{i\in\mathbb{T}_{N}^{d}}\left(c_{i}^{+}( u)(1-u_{i})-c_{i}^{-}(u)u_{i}+\beta(1-2u_{i})\right)\omega_{i}.\end{split} \tag{4.28}\] In the second equality we centered the variables \(c_{i}^{+}(\eta)\)\(c_{i}^{-}(\eta)\) since under \(\upsilon_{u}\), \(c_{i}^{+}(\eta)\) and \(c_{i}^{-}(\eta)\) are Bernouilli random variables with expectation \(c_{i}^{+}(u)\) and \(c_{i}^{-}(u)\) respectively. Finally, by Lemma 2.6 of [7] we get \[\partial_{t}\log\psi_{t}(\eta)=\sum_{i\in\mathbb{T}_{N}^{d}}(\partial_{t}u_{i} )\omega_{i}. \tag{4.29}\] Summarizing, we obtain the following result. **Proposition 4.4**.: _The current \(\mathcal{J}_{t}(\eta)\) satisfies_ \[\begin{split}\mathcal{J}_{t}(\eta)=\sum_{i\in\mathbb{T}_{N}^{d} }(-\partial_{t}u_{i}+2\alpha N^{2}(\Delta u)_{i}+\beta(1-2u_{i})+& G(i,u))\omega_{i}\\ &\qquad\qquad\qquad\qquad\qquad\qquad-V(u,\eta)+V^{+}(u,\eta)-V^{-} (u,\eta),\end{split} \tag{4.30}\] _where \(G\) was defined in (4.9) and_ \[V^{+}(u,\eta) =\sum_{i\in\mathbb{T}_{N}^{d}}\left(c_{i}^{+}(\eta)-c_{i}^{+}(u) \right)(1-u_{i})\omega_{i}, \tag{4.31}\] \[V^{-}(u,\eta) =\sum_{i\in\mathbb{T}_{N}^{d}}\left(c_{i}^{-}(\eta)-c_{i}^{-}(u) \right)u_{i}\omega_{i},\] (4.32) \[V(u,\eta) =-\alpha N^{2}\sum_{\begin{subarray}{c}i,j\in\mathbb{T}_{N}^{d}, \\ |i-j|=1\end{subarray}}(u_{i}-u_{j})^{2}\omega_{j}\omega_{i}. \tag{4.33}\] _In particular, if \(u\) satisfies (4.10) the current reduces to the second line._ In the rest of the section we provide estimates of \(V^{+}\), \(V^{-}\) and \(V\). #### 4.1.2 Estimates of \(V^{+}\) and \(V^{-}\) Let us denote \(\{e_{1},e_{2},\ldots,e_{d}\}\) the canonical basis of \(\mathbb{Z}^{d}\). For \(\varphi:\mathbb{T}_{N}^{d}\to\mathbb{R}\) and \(i\in\mathbb{T}_{N}^{d}\) and \(k\in\{1,\ldots,d\}\), let \(\nabla_{k}\varphi(i)=\varphi(i+e_{k})-\varphi(i)\). We denote \(\|\nabla\varphi\|_{\infty}=\max_{i,k}|\nabla_{k}\varphi(i)|\). We note that \[V^{-}(u,\eta)=-V^{+}(1-u,1-\eta), \tag{4.34}\] so that the bound for \(V^{+}\) can be transferred to \(V^{-}\), see Remark 4.2. In the following we get an upper-bound for \(V^{+}\). Denote \(\vartheta_{i}^{+}=c_{i}^{+}(\eta)-c_{i}^{+}(u)\) and \(\omega_{i}^{+}=(1-u_{i})\omega_{i}=\frac{\eta_{i}-u_{i}}{u_{i}}\). Then \[V^{+}(u,\eta)=\sum_{i\in\mathbb{T}_{N}^{d}}\vartheta_{i}^{+}\omega_{i}^{+} \tag{4.35}\] To bound \(V^{+}(u,\eta)\) we follow the method used by Jara and Menezes [13] and by Funaki and Tsuneda [7]. 
For this purpose let us observe that the carre du champ referred to the generator \(\mathcal{L}_{N}\), namely, \(\Gamma_{N}(h)=\mathcal{L}_{N}h^{2}-2h\mathcal{L}_{N}h\) can be decomposed as \[\Gamma_{N}(h)=\Gamma_{N}^{\mathcal{G}}(h)+2\alpha N^{2}\Gamma_{N}^{\mathcal{ K}}(h), \tag{4.36}\] where \(\Gamma_{N}^{\mathcal{G}}(h)\) and \(\Gamma_{N}^{\mathcal{K}}(h)\) are the carre du champ related to the generator \(\mathcal{G}_{N}\) and \(\mathcal{K}_{N}\) respectively (cf. (4.1) and (4.2)). In particular, \[\Gamma_{N}^{\mathcal{K}}(h)=\frac{1}{2}\sum_{\begin{subarray}{c}i,j\in \mathbb{T}_{N}^{d}:\\ |i-j|=1\end{subarray}}(h(\eta^{i,j})-h(\eta))^{2}\,. \tag{4.37}\] In the next result we provide the control that we need for \(V^{+}\). **Theorem 4.5**.: _Under Assumption 1, we have that for any \(u:\mathbb{T}_{N}^{d}\to[0,1]\) such that_ 1. \(\varepsilon:=1-|||u|||_{\infty}>0\) _with_ \(|||u|||_{\infty}:=\min\left\{\|u\|_{\infty},\|1-u\|_{\infty}\right\}\)_,_ 2. \(\|\nabla u(i)\|_{\infty}\leq\frac{C_{0}}{N}\) _with_ \(C_{0}\) _independent of_ \(N\)_,_ _and for any density \(f\) with respect to \(v_{u}\) we have that_ \[\int V^{+}(u,\eta)f(\eta)v_{u}(\mathrm{d}\eta)\leq\delta N^{2}\int\Gamma_{N}^{ \mathcal{K}}\big{(}\sqrt{f}\big{)}(\eta)\upsilon_{u}(\mathrm{d}\eta)+C_{1} \ell_{\mathcal{V}}^{d}\mathcal{H}(fv_{u}\,|\,\upsilon_{u})+C_{2}N^{d-a}, \tag{4.38}\] _for any \(\delta>0\) and \(C_{1}=3^{d}d(d+1)\frac{6}{\varepsilon^{2}}\left(\frac{1}{\delta}+\frac{C_{0}}{ \varepsilon^{2}}\right)\), \(C_{2}=\frac{6C_{0}}{\varepsilon^{4}}\) for \(N>\frac{2C_{0}}{\varepsilon^{2}}\)._ **Remark 4.2**.: _Note that if the estimate (4.38) holds, then it holds for \(V^{-}\). Indeed, using (4.34), the fact that \(|||u|||_{\infty}=|||-u|||_{\infty}=1-\varepsilon\) and that, under \(\upsilon_{1-u}(\mathrm{d}\eta)\), \(1-\eta\) has for law \(\upsilon_{u}\) we get_ \[\int V^{-}(u,\eta)f(\eta) \upsilon_{u}(\mathrm{d}\eta)=-\int V^{+}(1-u,1-\eta)f(\eta) \upsilon_{u}(\mathrm{d}\eta)\] \[=\int V^{+}(1-u,\eta)f(1-\eta)\upsilon_{1-u}(\mathrm{d}\eta)\] \[\leqslant\delta N^{2}\int\Gamma^{\mathcal{K}}_{N}\big{(}\sqrt{f} \big{)}(1-\eta)\upsilon_{1-u}(\mathrm{d}\eta)+C_{1}\ell^{d}_{\mathcal{V}} \mathcal{H}(f(1-\cdot)\upsilon_{1-u}\,|\,\upsilon_{1-u})+C_{2}N^{d-a}\] \[=\delta N^{2}\int\Gamma^{\mathcal{K}}_{N}\big{(}\sqrt{f}\big{)}( \eta)\upsilon_{u}(\mathrm{d}\eta)+C_{1}\ell^{d}_{\mathcal{V}}\mathcal{H}(f \upsilon_{u}\,|\,\upsilon_{u})+C_{2}N^{d-a}.\] Proof of Theorem 4.5.: The proof will proceed in several steps. Note that, according to Hoeffding Inequality (Lemma A.4), under the probability measure \(\upsilon_{u}\), \(\vartheta_{i}^{+}\) is sub-Gaussian with variance parameter \(\frac{1}{4}\) and \(\omega_{i}^{+}\) is sub-Gaussian with variance parameter \(\frac{1}{4u_{i}^{2}}\). As in Jara and Menezes [13], we proceed to use an averaged version of \(V^{+}\). Let \(\ell>0\) and consider \(\Lambda_{\ell}=\{0\ldots,\ell-1\}^{d}\) the cube of size \(\ell\) starting at \(0\). Let \(p_{\ell}(i)=\ell^{-d}\mathds{1}_{\{i\in\Lambda_{\ell}\}}\) and \(\hat{p}_{\ell}(i)=\ell^{-d}\mathds{1}_{\{i\in-\Lambda_{\ell}\}}=p_{\ell}(-i)\). Then for \(\varphi\) defined on \(\mathbb{T}^{d}_{N}\), we set \[\overleftarrow{\varphi}(i):=p_{\ell}*\varphi(i) =\sum_{j+k=i}p_{\ell}(j)\varphi(k)=\sum_{j}p_{\ell}(j)\varphi(i-j )=\ell^{-d}\sum_{j\in\Lambda_{\ell}}\varphi(i-j) \tag{4.39}\] \[\overrightarrow{\varphi}(i):=\hat{p}_{\ell}*\varphi(i) =\sum_{j+k=i}p_{\ell}(-j)\varphi(k)=\sum_{j}p_{\ell}(-j)\varphi(i -j)=\ell^{-d}\sum_{j\in\Lambda_{\ell}}\varphi(i+j). 
\tag{4.40}\] Let us denote \(q_{\ell}=p_{\ell}*p_{\ell}\). We have \(q_{\ell}(i)=\ell^{-2d}|\Lambda_{\ell}\cap(i-\Lambda_{\ell})|.\) Thus, \(0\leqslant q_{\ell}(i)\leqslant\ell^{-d}\) and \(q_{\ell}(i)=0\) if and only if \(i\notin\Lambda_{2\ell-2}\). All sum being finite, we get for \(\varphi\) and \(\psi\) defined on \(\mathbb{T}^{d}_{N}\): \[\sum_{i,j}\varphi(i)\psi(i+j)q_{\ell}(j) =\sum_{i,j,k}\varphi(i)\psi(i+j)p_{\ell}(k)p_{\ell}(j-k)=\sum_{i,k }\varphi(i)p_{\ell}(k)\sum_{j}\psi(i+j)p_{\ell}(j-k)\] \[=\sum_{i,k}\varphi(i)p_{\ell}(k)(\hat{p}_{\ell}*\psi)(k+i)=\sum_{ j}\sum_{i}\varphi(i)p_{\ell}(j-i)(\hat{p}_{\ell}*\psi)(j)\] \[=\sum_{j}(p_{\ell}*\varphi)(j)(\hat{p}_{\ell}*\psi)(j)=\sum_{j} \overleftarrow{\varphi}(j)\overrightarrow{\psi}(j).\] Thus let \(V^{+,\ell}:=\sum_{i\in\mathbb{T}^{d}_{N}}\vartheta_{i}^{+}\omega_{i}^{+,\ell}\) with \(\omega_{i}^{+,\ell}=\sum_{j}\omega_{i+j}^{+}q_{\ell}(j)\). We have that \[V^{+,\ell}=\sum_{i,j\in\mathbb{T}^{d}_{N}}\vartheta_{i}^{+}\omega_{i+j}^{+}q_{ \ell}(j)=\sum_{i\in\mathbb{T}^{d}_{N}}\overleftarrow{\vartheta^{+}}_{i} \overrightarrow{\omega^{+}}_{i}. \tag{4.41}\] Then Theorem 4.5 is proved with the following two estimates. **Lemma 4.6**.: _Suppose that \(|||u|||_{\infty}<1\) and let \(\varepsilon=1-|||u|||_{\infty}\). Then for any \(\ell>\ell_{\mathcal{V}}\)_ \[\int V^{+,\ell}fv_{u}(\mathrm{d}\eta)\leqslant C\varepsilon^{-1}\ell^{d/2}_{ \mathcal{V}}\left(\mathcal{H}(fv_{u}\,|\,\upsilon_{u})+\frac{N^{d}}{\ell^{d}} \right). \tag{4.42}\] _The constant \(C\) depends only on the dimension, namely it can be taken as \(C=3^{d}2^{d/2+2}\)._ **Lemma 4.7**.: _Suppose \(\varepsilon=1-||[u]||_{\infty}>0\) and \(\|\nabla u(i)\|_{\infty}\leqslant\frac{C_{0}}{N}\) for some \(C_{0}>0\) independent of \(N\), where \(\nabla u(i)=(\nabla_{k}u(i))_{k=1}^{d}\). Then, for any \(\ell>\ell_{\mathcal{V}}\), with \(\ell=N^{\kappa}\) for some \(\kappa>0\), and \(\delta>0\)_ \[\int(V^{+}-V^{+,\ell})f\upsilon_{u}(\mathrm{d}\eta)\leqslant\delta N ^{2}\int\Gamma_{N}^{\mathcal{K}}(\sqrt{f})\,\upsilon_{u}(\mathrm{d}\eta)\\ +C_{1}\ell_{\mathcal{V}}^{d}\frac{g_{d}(\ell)\ell^{d}}{N}\Bigg{(} \Big{(}2+\frac{\ell_{\mathcal{V}}}{\ell}\Big{)}^{d}\mathcal{H}\big{(}f\upsilon _{u}\big{|}\,\upsilon_{u}\big{)}+\frac{N^{d}}{\ell^{d}}\Bigg{)}+C_{2}N^{d-1} \tag{4.43}\] _where \(C_{2}=\frac{3C_{0}}{\varepsilon^{4}}\left(1+\frac{2C_{0}}{N\varepsilon^{2}}\right)\) and \(C_{1}=3^{d}d(d+1)\left(\frac{2}{\delta\varepsilon^{2}}\left(1+\frac{C_{0}}{N \varepsilon^{2}}\right)+C_{2}\right)\) depends only on the dimension and \(g_{d}(\ell)\) is defined in (4.47)._ Indeed, we choose \(\ell\) such that \(\frac{\ell^{d}g_{d}(\ell)}{N}\leqslant C_{0}\): more precisely, for any \(\delta_{0}\in(0,1)\) we take \[\ell=N^{\frac{1}{2}(1-\delta_{0})}\quad\text{for}\quad d=1,\qquad\text{and} \qquad\ell=N^{\frac{1}{2}(1-\delta_{0})}\quad\text{for}\quad d\geqslant 2. \tag{4.44}\] Since \(\ell_{\mathcal{V}}^{d}\) has a log-growth (cf. Assumption 1), this choice of \(\ell\) together with Lemmas 4.6 and 4.7 concludes the proof of Theorem 4.5. We now prove Lemma 4.6 and Lemma 4.7. Proof of Lemma 4.6.: We start by recalling that \(V^{+,\ell}=\sum_{i\in\mathbb{T}_{N}^{d}}\overbrace{\vartheta^{+}}_{i}\overset {\longrightarrow}{\omega^{+}}_{i}\). Note that, under \(\upsilon_{u}\), using Lemma A.3, \(\overset{\longrightarrow}{\omega^{+}}_{i}\) is a sub-Gaussian variable with variance parameter \(\sum_{i\in\Lambda_{\ell}}\frac{1}{4u_{i}^{2}\ell^{2d}}\leqslant\frac{1}{4 \varepsilon^{2}\ell^{d}}\). 
Since \(c_{i}^{+}(\eta)\) is a function of \((\eta_{i+j})_{j\in\mathcal{V}_{N}}\), then for \(i\) and \(j\) such that \(|i-j|_{\infty}>\ell_{\mathcal{V}}\), \(\vartheta_{i}^{+}\) and \(\vartheta_{j}^{+}\) are independent, and sub-Gaussian with variance parameter \(\frac{1}{4}\). Using Lemma A.3, \(\overbrace{\vartheta^{+}}_{i}\) is a sub-Gaussian variable with variance parameter \(\frac{\ell_{\mathcal{V}}^{d}}{4\ell^{d}}\left(1+\frac{\ell_{\mathcal{V}}}{\ell} \right)^{d}\leqslant 2^{d-2}\frac{\ell_{\mathcal{V}}^{d}}{\ell^{d}}\). We note that all the sites involved in the averages \(\overbrace{\vartheta^{+}}_{i}\overset{\longrightarrow}{\omega^{+}}_{i}\) are in \(i+Q_{\ell_{\mathcal{V}}/2+\ell}\), where \(Q_{m}=\{-m,\ldots,m\}^{d}\) is the \(d\)-dimensional cube centered at \(0\). In particular, for \(i\) and \(j\) such that \(|i-j|_{\infty}>\ell_{\mathcal{V}}+2\ell\), the corresponding averages are independent (under \(\upsilon_{u}\)). Then, we can take a partition of \(\mathbb{T}_{N}^{d}\) into independent sites by letting \(i=j+(\ell_{\mathcal{V}}+2\ell)k\) where \(j\in\Lambda_{\ell_{\mathcal{V}}+2\ell}\) and \(k\in\Lambda_{[N/(\ell_{\mathcal{V}}+2\ell)]}\). The entropy inequality (cf. (B.3) in [13]) gives \[\int V^{+,\ell}fv_{u}(\mathrm{d}\eta) =\sum_{j\in\Lambda_{\ell_{\mathcal{V}}+2\ell}}\int\sum_{k\in \Lambda_{[N/(\ell_{\mathcal{V}}+2\ell)]}}\overbrace{\vartheta^{+}}_{j+(\ell_ {\mathcal{V}}+2\ell)k}\overset{\longrightarrow}{\omega^{+}}_{j+(\ell_{ \mathcal{V}}+2\ell)k}fv_{u}(\mathrm{d}\eta)\] \[\leqslant\sum_{j\in\Lambda_{\ell_{\mathcal{V}}+2\ell}}\frac{1}{ \gamma}\Bigg{(}\mathcal{H}(fv_{u}\,|\,\upsilon_{u})\] \[\qquad\qquad+\sum_{k\in\Lambda_{[N/(\ell_{\mathcal{V}}+2\ell)]}} \log\int\exp\Bigg{\{}\gamma\overbrace{\vartheta^{+}}_{j+(\ell_{\mathcal{V}}+2 \ell)k}\overset{\longrightarrow}{\omega^{+}}_{j+(\ell_{\mathcal{V}}+2\ell)k }\Bigg{\}}\upsilon_{u}(\mathrm{d}\eta)\Bigg{)}\] \[\leqslant\frac{(\ell_{\mathcal{V}}+2\ell)^{d}}{\gamma}\mathcal{H }(fv_{u}\,|\,\upsilon_{u})+\frac{1}{\gamma}\sum_{i\in\mathbb{T}_{N}^{d}}\log \int\exp\Big{\{}\gamma\overbrace{\vartheta^{+}}_{i}\overset{\longrightarrow} {\omega^{+}}_{i}\Big{\}}\upsilon_{u}(\mathrm{d}\eta)\] According to Lemma A.2, for \(\gamma^{-1}=2^{d/2+2}\ell^{-d}\varepsilon^{-1}\ell^{d/2}_{\mathcal{V}}\), we have that \[\log\int\exp\Big{\{}\gamma\overbrace{\vartheta^{+}}_{i}\omega^{+}_{i}\Big{\}}v_ {u}(\mathrm{d}\eta)\leqslant\log 3.\] Thus we obtain \[\int V^{+,\ell}fv_{u}(\mathrm{d}\eta)\leqslant 2^{d/2+2}\varepsilon^{-1}\ell^{d/ 2}_{\mathcal{V}}\left(\left(2+\frac{\ell_{\mathcal{V}}}{\ell}\right)^{d} \mathcal{H}(fv_{u}\,|\,v_{u})+\frac{N^{d}}{\ell^{d}}\log 3\right).\] Proof of Lemma 4.7.: We first prove the Lemma by also assuming that \(\mathcal{V}_{N}\subset\mathbb{Z}^{d}\backslash\mathbb{N}^{d}\). In Remark 4.3 at the end of the proof we show how to remove this assumption in dimension \(d>1\). We then proceed as in Jara and Menezes [13]. We use the fact that \[V^{+}-V^{+,\ell} =\sum_{i\in\mathbb{T}^{d}_{N}}\vartheta^{+}_{i}(\omega^{+}_{i}- \omega^{+,\ell}_{i})\] \[=\sum_{i,j\in\mathbb{T}^{d}_{N}}\vartheta^{+}_{i}\omega^{+}_{i+j} (\mathds{1}_{\{0\}}(j)-q_{\ell}(j)). 
\tag{4.45}\] We now use Lemma 3.2 in [13] stating that there exists a function \(\Phi_{\ell}:\mathbb{Z}^{d}\times\mathbb{Z}^{d}\to\mathbb{R}\) which is a flow connecting the distribution \(\mathds{1}_{\{0\}}\) to \(q_{\ell}\), i.e * \(\Phi_{\ell}(i,j)=-\Phi_{\ell}(j,i)\) * \(\sum_{j:\ |i-j|=1}\Phi_{\ell}(i,j)=\mathds{1}_{\{0\}}(i)-q_{\ell}(i)\) * \(\Phi_{\ell}(i,j)=0\) for \(i,j\notin\Lambda_{2\ell-1}\) * there is a constant \(C=C(d)\) independent of \(\ell\) such that \[\sum_{|i-j|=1}|\Phi_{\ell}(i,j)|^{2}<Cg_{d}(\ell)\qquad\text{and}\qquad\sum_{ |i-j|=1}|\Phi_{\ell}(i,j)|<C\ell\] (4.46) where \[g_{d}(\ell)=\begin{cases}\ell\text{ for }d=1\\ \log(\ell)\text{ for }d=2\\ 1\text{ for }d\geqslant 3\end{cases}\] (4.47) Using the flow \(\Phi_{\ell}\) we therefore have \[V^{+}-V^{+,\ell} =\sum_{i,j\in\mathbb{T}^{d}_{N}}\vartheta^{+}_{i}\omega^{+}_{i+j} \sum_{k:\ |j-k|=1}\Phi_{\ell}(j,k)\] \[=\sum_{i,j\in\mathbb{T}^{d}_{N}}\vartheta^{+}_{i}\omega^{+}_{i+j} \sum_{k=1}^{d}\Big{\{}\Phi_{\ell}(j,j+e_{k})-\Phi_{\ell}(j-e_{k},j)\Big{\}}\] \[=\sum_{k=1}^{d}\sum_{i,j\in\mathbb{T}^{d}_{N}}\vartheta^{+}_{i} \Phi_{\ell}(j,j+e_{k})(\omega^{+}_{i+j}-\omega^{+}_{i+j+e_{k}})\] \[=\sum_{k=1}^{d}\sum_{i,j\in\mathbb{T}^{d}_{N}}\vartheta^{+}_{i-j} \Phi_{\ell}(j,j+e_{k})(\omega^{+}_{i}-\omega^{+}_{i+e_{k}})\] \[=\sum_{k=1}^{d}\sum_{i\in\mathbb{T}^{d}_{N}}h^{k}_{i}(\omega^{+}_ {i}-\omega^{+}_{i+e_{k}}) \tag{4.48}\] where \[h_{i}^{k}=h_{i}^{\ell,k}:=\sum_{j\in\mathbb{T}_{N}^{d}}\vartheta_{i-j}^{+}\Phi_{ \ell}(j,j+e_{k})=\sum_{j\in\Lambda_{2\ell-1}}\vartheta_{i-j}^{+}\Phi_{\ell}(j,j +e_{k}). \tag{4.49}\] To complete the result we need to estimate (4.48). We apply Lemma 3.5 of [7]. Note that the hypothesis of Lemma 3.5 are satisfied in our case: \(u_{-}=\varepsilon\) and \(u_{+}=1-\varepsilon\) and \(h_{i}^{k}(\eta^{i,i+e_{k}})=h_{i}^{k}(\eta)\) for any configuration \(\eta\) inasmuch \(h_{i}^{k}\) is only a function of the sites \(\eta_{i-j+s}\), for \(s\in\mathcal{V}_{N}\), \(j\in\Lambda_{2\ell-1}\), so that it does not depend on \(\eta_{i}\) and \(\eta_{i+e_{k}}\) since \(\mathcal{V}_{N}\subset\mathbb{Z}^{d}\backslash\mathbb{N}^{d}\). Moreover, we observe that Lemmas 3.4 and 3.5 apply to our case replacing \(\chi(u_{i})\) by \(u_{i}\). Therefore, for any \(\alpha^{\prime}>0\) we have that \[\int h_{i}^{k}(\omega_{i}^{+}-\omega_{i+e_{k}}^{+})f\upsilon_{u }(\mathrm{d}\eta)\leqslant\frac{\alpha^{\prime}}{2} \int\big{(}\sqrt{f(\eta^{i,i+e_{k}})}-\sqrt{f(\eta)}\big{)}^{2} \upsilon_{u}(\mathrm{d}\eta)\] \[+\frac{C_{1,\varepsilon}}{\alpha^{\prime}}\int(h_{i}^{k})^{2}fv_{ u}(\mathrm{d}\eta)+R_{i}^{k}. \tag{4.50}\] with \(C_{1,\varepsilon}=\frac{2}{\varepsilon^{2}}\left(1+\frac{C_{0}}{N\varepsilon^ {2}}\right)\) and where the rest term \(R_{i}^{k}\) is controlled as \[R_{i}^{k}\leqslant C_{2,\varepsilon}\big{|}\,\nabla_{k}u_{i}\,\big{|}\int|h_{ i}^{k}(\eta)|fv_{u}(\mathrm{d}\eta)\,, \tag{4.51}\] with, for \(\varepsilon<\frac{1}{2}\), \(C_{2,\varepsilon}=\frac{3}{\varepsilon^{4}}\left(1+\frac{2C_{0}}{N\varepsilon ^{2}}\right)\). We now take \(\alpha^{\prime}=\delta N^{2}\), with \(\delta>0\). We get (cf. 
(4.37)) \[\int(V^{+}-V^{+,\ell})f\upsilon_{u}(\mathrm{d}\eta)=\sum_{k=1}^{ d}\sum_{i\in\mathbb{T}_{N}^{d}}\int h_{i}^{k}(\omega_{i}^{+}-\omega_{i+e_{k}}^{+}) f\upsilon_{u}(\mathrm{d}\eta)\] \[\leqslant\delta N^{2}\int\Gamma_{N}^{K}(\sqrt{f})\,\upsilon_{u}( \mathrm{d}\eta)+\frac{C_{1,\varepsilon}}{\delta N^{2}}\sum_{k=1}^{d}\sum_{i\in \mathbb{T}_{N}^{d}}\int(h_{i}^{k})^{2}fv_{u}(\mathrm{d}\eta)+\sum_{k=1}^{d} \sum_{i\in\mathbb{T}_{N}^{d}}R_{i}^{k}. \tag{4.52}\] The first term is the same of (4.43), then to conclude the proof we have to upper bound the second and the third term of (4.52). Let us start from the third one. Using that \(\big{|}\,\nabla_{k}u_{i}\,\big{|}\leqslant\frac{C_{0}}{N}\) and \(|h_{i}^{k}|\leqslant 1+(h_{i}^{k})^{2}\) in (4.51) we get \[R_{i}^{k}\leqslant\frac{C_{0}C_{2,\varepsilon}}{N}\int\big{(}1+(h_{i}^{k})^{ 2}\big{)}f\upsilon_{u}(\mathrm{d}\eta)\,.\] So that, the last term of (4.52) is bounded by \[C_{0}C_{2,\varepsilon}\left(N^{d-1}+\frac{1}{N}\sum_{k=1}^{d}\sum_{i\in \mathbb{T}_{N}^{d}}\int(h_{i}^{k})^{2}f\upsilon_{u}(\mathrm{d}\eta)\right).\] Let us note that this last term dominates the second term of (4.52). To conclude the proof we need to upper-bound \[\sum_{k=1}^{d}\sum_{i\in\mathbb{T}_{N}^{d}}\int(h_{i}^{k})^{2}f\upsilon_{u}( \mathrm{d}\eta).\] As we noted above, \(h_{i}^{k}\) depends only on the sites \(\eta_{i-j+s}\), for \(s\in\mathcal{V}_{N}\) and \(j\in\Lambda_{2\ell-1}\), so that the random variables \(h_{i}^{k}\) and \(h_{i^{\prime}}^{k}\) are independent if \(|i-i^{\prime}|_{\infty}>(2\ell+\ell_{V})\). We then decompose \(\mathbb{T}_{N}^{d}\) as a disjoint union of cubes of size \(2\ell+\ell_{\mathcal{V}}\), that is, we write \(i=j+(\ell_{\mathcal{V}}+2\ell)z\) where \(j\in\Lambda_{\ell_{\mathcal{V}}+2\ell}\) and \(z\in\Lambda_{[N/(\ell_{\mathcal{V}}+2\ell)]}\), \[\sum_{i\in\mathbb{T}_{N}^{d}}\int(h_{i}^{k})^{2}fv_{u}(\mathrm{d}\eta)=\sum_{j \in\Lambda_{2\ell+\ell_{\mathcal{V}}}}\sum_{z\in\Lambda_{[N/(\ell_{\mathcal{V}}+ 2\ell)]}}\int(h_{j+z}^{k})^{2}fv_{u}(\mathrm{d}\eta).\] We then apply the entropy inequality and we get \[\begin{split}\sum_{i\in\mathbb{T}_{N}^{d}}\int(h_{i}^{k})^{2}f\upsilon _{u}(\mathrm{d}\eta)&\leq\frac{1}{\gamma}\sum_{j\in\Lambda_{2\ell+ \ell_{\mathcal{V}}}}\left(\mathcal{H}\big{(}fv_{u}\big{|}\,v_{u}\big{)}+\log \int\exp\left\{\gamma\sum_{z\in\Lambda_{|\mathcal{N}/(\ell_{\mathcal{V}}+2\ell )|}}(h_{j+z}^{k})^{2}\right\}v_{u}(\mathrm{d}\eta)\right)\\ &=\frac{(2\ell+\ell_{\mathcal{V}})^{d}}{\gamma}\mathcal{H}\big{(}fv _{u}\big{|}\,v_{u}\big{)}+\frac{1}{\gamma}\sum_{i\in\mathbb{T}_{N}^{d}}\log \int\exp\left\{\gamma(h_{i}^{k})^{2}\right\}\upsilon_{u}(\mathrm{d}\eta)\end{split} \tag{4.53}\] To conclude we use a concentration inequality. We have that \(h_{i}^{k}\) is sub-Gaussian random variable, let \(\sigma^{2}\) be its variance parameter. By Proposition A.1 we have that for any \(\gamma\leq\frac{1}{4\sigma^{2}}\), \[\int\exp\left\{\gamma(h_{i}^{k})^{2}\right\}\upsilon_{u}(\mathrm{d}\eta)\leq \log 3\,.\] Moreover, to get an upper bound on the variance parameter, we use the same decomposition of the sum (4.49) into subsets which are independent (since \(\vartheta_{j}^{+}\) is a function of \(\eta_{i+j}\), for \(i\in\mathcal{V}_{N}\) and is sub-Gaussian with variance parameter \(\frac{1}{4}\)). This is done in Lemma F.12 in [13]. We then have that \(\sigma^{2}\leq C_{d}\ell_{\mathcal{V}}^{d}g_{d}(\ell)\), where \(C_{d}\) is a constant which depends only on the dimension. 
By taking \(\gamma\) as large as possible, namely \(\gamma^{-1}=(d+1)\ell_{\mathcal{V}}^{d}g_{d}(\ell)\), we obtain that \[\sum_{i\in\mathbb{T}_{N}^{d}}\int(h_{i}^{k})^{2}f\upsilon_{u}(\mathrm{d}\eta)\leq(d+1)\ell_{\mathcal{V}}^{d}\,g_{d}(\ell)\ell^{d}\Bigg{(}\Big{(}2+\frac{\ell_{\mathcal{V}}}{\ell}\Big{)}^{d}\mathcal{H}\big{(}fv_{u}\big{|}\,v_{u}\big{)}+\frac{N^{d}}{\ell^{d}}\log 3\Bigg{)}. \tag{4.54}\] To conclude the proof we have to remove the assumption \(\mathcal{V}_{N}\subset\mathbb{Z}^{d}\backslash\mathbb{N}^{d}\). We show this in Remark 4.3.

**Remark 4.3**.: _By (4.48) we recall that \(V^{+}-V^{+,\ell}=\sum_{k=1}^{d}\sum_{i\in\mathbb{T}_{N}^{d}}h_{i}^{k}(\omega_{i}^{+}-\omega_{i+e_{k}}^{+})\), where \(h_{i}^{k}\) is defined in (4.49). We also observe that, by assumption, \(\ell>\ell_{\mathcal{V}}\). Since \(h_{i}^{k}\) is a function of the sites \(\eta_{i-j+s}\), for \(s\in\mathcal{V}_{N}\), \(j\in\Lambda_{2\ell-1}\), whenever \(|j|>\ell_{\mathcal{V}}\) the variable \(\vartheta_{i-j}^{+}\) does not depend on \(\eta_{i}\) and \(\eta_{i+e_{k}}\). Therefore we split \(h_{i}^{k}\) into the sum of two functions \(h_{i}^{\prime}\) and \(h_{i}^{\prime\prime}\). The first one is independent of \(\eta_{i}\) and \(\eta_{i+e_{k}}\), while the second one depends on these sites,_ \[h_{i}^{\prime}:=\sum_{\begin{subarray}{c}j\in\Lambda_{2\ell-1}\\ |j|>2\ell_{\mathcal{V}}\end{subarray}}\vartheta_{i-j}^{+}\Phi_{\ell}(j,j+e_{k}),\qquad\text{and}\qquad h_{i}^{\prime\prime}:=\sum_{\begin{subarray}{c}j\in\Lambda_{2\ell-1}\\ |j|\leq 2\ell_{\mathcal{V}}\end{subarray}}\vartheta_{i-j}^{+}\Phi_{\ell}(j,j+e_{k}). \tag{4.55}\] _In this way_ \[V^{+}-V^{+,\ell}=(V^{+}-V^{+,\ell})^{\prime}+(V^{+}-V^{+,\ell})^{\prime\prime}, \tag{4.56}\] _where_ \[(V^{+}-V^{+,\ell})^{\prime}:=\sum_{k=1}^{d}\sum_{i\in\mathbb{T}_{N}^{d}}h_{i}^{\prime}(\omega_{i}^{+}-\omega_{i+e_{k}}^{+})\quad\text{and}\quad(V^{+}-V^{+,\ell})^{\prime\prime}:=\sum_{k=1}^{d}\sum_{i\in\mathbb{T}_{N}^{d}}h_{i}^{\prime\prime}(\omega_{i}^{+}-\omega_{i+e_{k}}^{+}). \tag{4.57}\] _We can apply the method used above to \((V^{+}-V^{+,\ell})^{\prime}\), obtaining that (4.54) holds. We control \((V^{+}-V^{+,\ell})^{\prime\prime}\) by showing that \(\int(V^{+}-V^{+,\ell})^{\prime\prime}fv_{u}(\mathrm{d}\eta)=O(N^{d-\varepsilon})\), for some \(\varepsilon>0\)._
For this purpose we apply Cauchy-Swartz inequality to the measure \(fv_{u}(\mathrm{d}\eta)\), which gives_ \[\int h_{i}^{\prime\prime}(\omega_{i}^{+}-\omega_{i+e_{k}}^{+})fv_{u}(\mathrm{d} \eta)\leq\Big{(}\int(h_{i}^{\prime\prime})^{2}f\upsilon_{u}(\mathrm{d}\eta) \Big{)}^{\frac{1}{2}}\Big{(}\int(\omega_{i}^{+}-\omega_{i+e_{k}}^{+})^{2}fv_{u}( \mathrm{d}\eta)\Big{)}^{\frac{1}{2}} \tag{4.58}\] _Therefore_ \[\int(V^{+}-V^{+,\ell})^{\prime\prime}fv_{u}(\mathrm{d}\eta)=\sum_{k=1 }^{d}\sum_{i\in\mathbb{T}_{N}^{d}}\int h_{i}^{\prime\prime}(\omega_{i}^{+}- \omega_{i+e_{k}}^{+})fv_{u}(\mathrm{d}\eta)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\leqslant\sum_{k=1}^{d} \sum_{i\in\mathbb{T}_{N}^{d}}\Big{(}\int(h_{i}^{\prime\prime})^{2}fv_{u}( \mathrm{d}\eta)\Big{)}^{\frac{1}{2}}\Big{(}\int(\omega_{i}^{+}-\omega_{i+e_{k} }^{+})^{2}f\upsilon_{u}(\mathrm{d}\eta)\Big{)}^{\frac{1}{2}}\] \[\qquad\qquad\qquad\qquad\qquad\leqslant\left(\sum_{k=1}^{d}\sum_ {i\in\mathbb{T}_{N}^{d}}\Big{(}\int(h_{i}^{\prime\prime})^{2}fv_{u}(\mathrm{d }\eta)\Big{)}\sum_{k=1}^{d}\sum_{i\in\mathbb{T}_{N}^{d}}\Big{(}\int(\omega_{i }^{+}-\omega_{i+e_{k}}^{+})^{2}fv_{u}(\mathrm{d}\eta)\Big{)}\right)^{\frac{1 }{2}}, \tag{4.59}\] _where in the last inequality we used again Cauchy-Swartz inequality._ _We observe that \((\omega_{i}^{+}-\omega_{i+e_{k}}^{+})=(\frac{\eta_{i}}{u_{i}}-\frac{\eta_{i+e_ {k}}}{u_{i+e_{k}}})=\eta_{i}(\frac{1}{u_{i}}-\frac{1}{u_{i+e_{k}}})+\frac{1}{ u_{i+e_{k}}}(\eta_{i}-\eta_{i+e_{k}}),\) so that_ \[\int(\omega_{i}^{+}-\omega_{i+e_{k}}^{+})^{2}f\upsilon_{u}( \mathrm{d}\eta) \leqslant 2\int\eta_{i}\Big{(}\frac{1}{u_{i}}-\frac{1}{u_{i+e_{k}}} \Big{)}^{2}f\upsilon_{u}(\mathrm{d}\eta)+2\int\frac{1}{u_{i+e_{k}}^{2}}(\eta_{ i}-\eta_{i+e_{k}})^{2}f\upsilon_{u}(\mathrm{d}\eta)\] \[\leqslant C_{\varepsilon}|\nabla u_{k}|_{\infty}^{2}+C_{ \varepsilon}\int(\eta_{i}-\eta_{i+e_{k}})^{2}f\upsilon_{u}(\mathrm{d}\eta) \leqslant C_{\varepsilon}^{\prime},\] _uniformly on \(i\) and \(N\). So that,_ \[\sum_{k=1}^{d}\sum_{i\in\mathbb{T}_{N}^{d}}\Big{(}\int(\omega_{i}^{+}-\omega_ {i+e_{k}}^{+})^{2}fv_{u}(\mathrm{d}\eta)\Big{)}\leqslant C_{\varepsilon}^{ \prime}N^{d}. \tag{4.60}\] _Therefore, by (4.59), to conclude the proof it is enough to show that_ \[\sum_{k=1}^{d}\sum_{i\in\mathbb{T}_{N}^{d}}\Big{(}\int(h_{i}^{\prime\prime})^{ 2}fv_{u}(\mathrm{d}\eta)\Big{)}=O(N^{d-\varepsilon}), \tag{4.61}\] _for some \(\varepsilon>0\) small. For this purpose we have to look more carefully at the function \(\Phi_{\ell}(j,j+e_{k})\) which defines the flow. We recall (cf. 
Appendix G of [13]) that in the construction of the flow connecting \(\mathbbm{1}_{\{0\}}\) and \(p_{\ell}*p_{\ell}\), we first define a flow \(\Psi_{\ell}\) connecting \(\mathbbm{1}_{\{0\}}\) and \(p_{\ell}\) supported in \(\Lambda_{\ell}\) which satisfies (4.46) and then we define_ \[\Phi_{\ell}(j,j+e_{k}):=\sum_{i\in\mathbb{T}_{N}^{d}}\Psi_{\ell}(i,i+e_{k})p_{ \ell}(j-i),\] _therefore_ \[\sum_{\begin{subarray}{c}j\in\Lambda_{2\ell-1}\\ |j|\leqslant 2\ell_{\mathcal{V}}\end{subarray}}\big{|}\Phi_{\ell}(j,j+e_{k})\big{|} \leqslant\sum_{\begin{subarray}{c}j\in\Lambda_{2\ell-1}\\ |j|\leqslant 2\ell_{\mathcal{V}}\end{subarray}}\sum_{i\in\mathbb{T}_{N}^{d}} \big{|}\Psi_{\ell}(i,i+e_{k})\big{|}p_{\ell}(j-i)\] \[\leqslant\sum_{i\in\mathbb{T}_{N}^{d}}\big{|}\Psi_{\ell}(i,i+e_{ k})\big{|}\sum_{\begin{subarray}{c}j\in\Lambda_{2\ell-1}\\ |j|\leqslant 2\ell_{\mathcal{V}}\end{subarray}}p_{\ell}(j-i)\leqslant C\ell\frac{ \ell_{\mathcal{V}}^{d}}{\ell^{d}},\] _where we used that \(\sum_{\begin{subarray}{c}j\in\Lambda_{2\ell-1}\\ |j|\leqslant 2\ell_{\mathcal{V}}\end{subarray}}p_{\ell}(j-i)\leqslant\frac{\ell_{ \mathcal{V}}^{d}}{\ell^{d}}\) uniformly on \(i\) and (4.46) applied to \(\Psi_{\ell}\). We deduce that, since \(|\vartheta_{i-j}^{+}|\leqslant 2\),_ \[\sum_{k=1}^{d}\sum_{i\in\mathbb{T}_{N}^{d}}\Big{(}\int(h_{i}^{\prime\prime})^{ 2}fv_{u}(\mathrm{d}\eta)\Big{)}\leqslant CN^{d}\frac{\ell_{\mathcal{V}}^{2d}}{ \ell^{2(d-1)}}.\] _Since \(\ell=N^{\kappa}\) for some \(\kappa>0\) and \(\ell_{\mathcal{V}}^{d}\) grows logarithmically, the result follows (in dimension \(d\geqslant 2\))._ #### 4.1.3 Estimate of \(V\) We show that under the hypothesis of Theorem 4.5, \(V(u,\eta)\) satisfies (4.38). For \(i\in\mathbb{T}^{d}_{N}\) and \(k\in\{1,\ldots,d\}\) we let \[\widetilde{\omega}^{k}_{i}:=-\alpha N^{2}(u_{i}-u_{i+e_{k}})^{2} \omega_{i}.\] In such a way we get \[V(u,\eta) =-\alpha N^{2}\sum_{\begin{subarray}{c}i,j\in\mathbb{T}^{d}_{N} \\ |i-j|=1\end{subarray}}(u_{i}-u_{j})^{2}\omega_{i}\omega_{j}\] \[=-\alpha N^{2}\sum_{k=1}^{d}\sum_{i\in\mathbb{T}^{d}_{N}}\left\{( u_{i}-u_{i+e_{k}})^{2}\omega_{i}\omega_{i+e_{k}}+(u_{i}-u_{i-e_{k}})^{2} \omega_{i}\omega_{i-e_{k}}\right\}\] \[=\sum_{k=1}^{d}\sum_{i\in\mathbb{T}^{d}_{N}}\left\{\widetilde{ \omega}^{k}_{i}\omega_{i+e_{k}}+\widetilde{\omega}^{k}_{i-e_{k}}\omega_{i} \right\}=2\sum_{k=1}^{d}\sum_{i\in\mathbb{T}^{d}_{N}}\widetilde{\omega}^{k}_{ i-e_{k}}\omega_{i}.\] For \(k\in\{1,\ldots,d\}\) we let \(\widetilde{V}^{k}:=\sum_{i\in\mathbb{T}^{d}_{N}}\widetilde{\omega}^{k}_{i-e_{ k}}\omega_{i}\) and \(\widetilde{V}^{k,\ell}:=\sum_{i\in\mathbb{T}^{d}_{N}}\overbrace{\widetilde{ \omega}^{k}_{i-e_{k}}}^{\ell}\overrightarrow{\omega_{i}}^{\ell}\). We have that \(\widetilde{V}^{k,\ell}\) and \(\widetilde{V}^{k}-\widetilde{V}^{k,\ell}\) satisfy (4.42) and (4.43) respectively, which implies that \(V(u,\eta)\) satisfies (4.38). The proof follows the same ideas of \(V^{+}\) and \(V^{-}\) and it is actually simpler since we do not have to deal with the diameter of \(\mathcal{V}_{N}\). We omit the details. #### 4.1.4 Conclusion: Gronwall's inequality (4.13) We observe that since \(\Gamma^{\mathcal{K}}_{N}(\sqrt{J})\leq\Gamma_{N}(\sqrt{J})\), cf. (4.36) and that the carre du champ operator is non-negative, (4.13) is a consequence of Proposition 4.3 and Theorem 4.5 with \(u(t)=u^{N}(t)\), the solutions of (4.10), \(f=\frac{\mathrm{d}\mu_{t}}{\mathrm{d}v_{u(t)}}\) if we show that \(u^{N}\) satisfies Assumption 2 for any \(t\in[0,\tau]\). 
This is one of the goals of Section 5, see Propositions 5.2 and 5.3.

## 5 Estimates on the solutions \((u^{N})\) of (4.10)

Note that (4.10) is a first order ordinary differential equation in \(\mathbb{R}^{\mathbb{T}^{d}_{N}}\). By standard theory, we have a solution, locally in time, starting from every initial condition. We say that \(u\) is a supersolution of (4.10) if \(\partial_{t}u_{i}\geq 2\alpha N^{2}(\Delta u)_{i}+\beta(1-2u_{i})+G(i,u)\) and that it is a subsolution if \(\partial_{t}u_{i}\leq 2\alpha N^{2}(\Delta u)_{i}+\beta(1-2u_{i})+G(i,u)\). Note that any solution is both a super- and a subsolution. We have a comparison lemma between super- and subsolutions.

**Proposition 5.1**.: _Let \(u\) be a supersolution and \(v\) be a subsolution such that \(u(0,i)\geq v(0,i)\) for all \(i\in\mathbb{T}^{d}_{N}\). Then, for all \(t\geq 0\) and all \(i\in\mathbb{T}^{d}_{N}\), \(u(t,i)\geq v(t,i)\)._

Proof.: Since \(u\) and \(v\) have a derivative in time, they are continuous; consider \(t\) and \(i\) such that \(u(t,i)=v(t,i)\) and \(u(t,j)\geq v(t,j)\) for all \(j\). Then, we have \(G(i,u(t))\geq G(i,v(t))\) by Proposition B.1, and thus \[\partial_{t}(u(i,t)-v(i,t))=2\alpha N^{2}\sum_{j,|i-j|=1}(u(j,t)-v(j,t))+G(i,u(t))-G(i,v(t))\geq 0. \tag{5.1}\] This proves that \(u\) stays above \(v\) at all times.

From the two previous propositions we conclude:

**Proposition 5.2**.: _Let \(\delta=\frac{\beta}{1+4\beta}\) and suppose \(0<\delta<\frac{1}{4e}\). Let also \(4\delta\leq T\leq 1-4\delta\) and \(0\leq\varepsilon<2\delta\). For all \(K_{N}>\frac{|\log(\varepsilon/2)|}{\varepsilon^{2}}\), and all \(N\), consider the solution \((u^{N}(t,i))_{i\in\mathbb{T}_{N}^{d}}\) of Equation (4.10) starting from \(u_{0}\in[\varepsilon,1-\varepsilon]\); then we have that \(u^{N}(t,i)\in[\varepsilon,1-\varepsilon]\) for all \(i\) and \(t\geq 0\)._

Note that the proposition holds for \(\varepsilon=0\).

Proof.: Let \(p\in[0,1]\), and set \(u(i)=p\), for all \(i\). Using that \(g_{K_{N}}(p)=G(i,p)\), we have that

* if \(g_{K_{N}}(p)+\beta(1-2p)\geq 0\), then \(u\) is a subsolution;
* if \(g_{K_{N}}(p)+\beta(1-2p)\leq 0\), then \(u\) is a supersolution.

Then, by using the result of Proposition B.2 on the analysis of \(g_{K}(p)\) close to \(p=0\) and \(p=1\), it is easy to check that \(u(i)=\varepsilon\) is a subsolution and \(u(i)=1-\varepsilon\) is a supersolution, for \(\varepsilon\) satisfying the hypothesis of the proposition. In particular, for \(p=1\), \(g_{K_{N}}(1)=0\), we have a supersolution. For \(p=0\), \(g_{K_{N}}(0)=0\), we have a subsolution.

In the next result, we show that if \(u\) solves (4.10) and \(\|\nabla u(0,i)\|_{\infty}\leq\frac{C_{0}}{N}\), then \(\|\nabla u(t,i)\|_{\infty}\leq\frac{C_{0}+C\sqrt{t}}{N}\) for any \(t>0\).

**Proposition 5.3**.: _Let \(u\) be a solution of (4.10) with \(u(0)=u_{0}\) such that there exists \(C_{0}>0\) for which \(|\nabla u_{0}|_{\infty}\leq\frac{C_{0}}{N}\). Then, there exists \(C>0\) such that \(\|\nabla u(t,i)\|_{\infty}\leq\frac{C_{0}+C\sqrt{t}}{N}\) for all \(t\geq 0\)._

To prove Proposition 5.3 we follow [7] and the references therein, in particular [5].
Using the notation of [5] we let \(p(t,x,z)\) be the heat kernel of discrete Laplacian \[\Delta u(t,i)=\sum_{j\in\mathbb{Z}^{d}}a(t,i,i+j)\big{[}u(t,i+j)-u(t,i)\big{]}, \tag{5.2}\] with \(a(t,i,j)=\mathbb{1}_{\{j-i\in\Gamma\}}\) and \(\Gamma=\{\pm e_{i},\,i=1,\ldots,d\}\), that is, \[\begin{cases}p(\cdot,i_{0},i)=\delta_{i_{0}}(\cdot),\\ \partial_{t}p(t,i_{0},i)=\Delta p(t,i_{0},i).\end{cases}\] In this case (comments below (1.2) of [5]) we have that \(a^{*}(t,i,j)=a(t,i,j)\) so that \(p^{*}(t,i,j)=p(t,i,j)\). Let \(\nabla_{k}u(t,i)=u(t,i+e_{k})-u(t,i)\), then since \(p\) is a uniform transitions function, there exist \(c,C>0\) independent of \(t,k\) such that (cf. (1.3) of [5]) \[\big{|}\nabla_{k}p(t,0,i)\big{|}\leq C\frac{p(ct,0,i)}{\sqrt{1\lor t}}. \tag{5.3}\] We refer to [5], Section 4 and the reference therein for a proof. We stress that the delicate point is to extend the classical theory of E. De Giorgi, J. Nash and J. Moser to discrete operators. The authors follow mainly [6], but similar results can be also found in [9] (Appendix B) and [23]. Proof of Proposition 5.3.: Using Duhamel's formula (i.e., variation of constant) we get that \[u(t,i)=\sum_{j\in\mathbb{T}_{N}^{d}}u(0,j)p_{N}(t,i,j)+\int_{0}^{t}\mathrm{d}s \sum_{j\in\mathbb{T}_{N}^{d}}(\beta(1-2u_{j})+G_{j}(u))p_{N}(t-s,i,j), \tag{5.4}\] where \(p_{N}(t,i,j)=\sum_{z\in N\mathbb{Z}^{d}}p(2\alpha N^{2}t,i,j+z)\) is the heat kernel of discrete Laplacian on the torus speeds up by a factor \(2\alpha N^{2}\) and \(p(t,i,j)\) is the heat kernel introduced above. We observe that (5.3) gives \[\big{|}\nabla_{k}p_{N}(t,0,i)\big{|}\leqslant\frac{C}{N}\frac{p_{N}(ct,0,i)}{ \sqrt{t}}. \tag{5.5}\] For the first term, using that \(p_{N}(t,i,j)=p_{N}(t,i+z,j+z)\) for any \(i,j,z\in\mathbb{T}^{d}_{N}\) and the assumption on \(\nabla u_{0}\) (cf. 
Assumptions 2), an integration by parts gives \[\bigg{|}\nabla_{k}\bigg{\{}\sum_{j\in\mathbb{T}^{d}_{N}}u(0,j)p_{N }(t,i,j)\bigg{\}}\bigg{|} =\bigg{|}\sum_{j\in\mathbb{T}^{d}_{N}}u(0,j)\big{(}p_{N}(t,i+e_{k},j)-p_{N}(t,i,j)\big{)}\bigg{|}\] \[=\bigg{|}\sum_{j\in\mathbb{T}^{d}_{N}}u(0,j)p_{N}(t,i,j-e_{k})- \sum_{j\in\mathbb{T}^{d}_{N}}u(0,j)p_{N}(t,i,j)\bigg{|}\] \[=\bigg{|}\sum_{j\in\mathbb{T}^{d}_{N}}\Big{(}u(0,j+e_{k})-u(0,j) \Big{)}p_{N}(t,i,j)\bigg{|}\leqslant\frac{C}{N}.\] By using that \((\beta(1-2u_{j})+G_{j}(u))\) is bounded and (5.5) we get that for any \(k=1,\ldots,d\), \[\bigg{|}\nabla_{k}\bigg{\{}\int_{0}^{t}\mathrm{d}s\sum_{j\in\mathbb{T}^{d}_{N }}(\beta(1-2u_{j})+G_{j}(u))p_{N}(t-s,i,j)\bigg{\}}\bigg{|}\leqslant\frac{C}{ N}\int_{0}^{t}\frac{1}{\sqrt{t-s}}\mathrm{d}s=\frac{C}{N}\sqrt{t}\,.\] ## 6 Existence and uniqueness of reaction-diffusion PDE In Section 7, we prove that in the limit \(N\to\infty\), the solution of the discretized Equation 4.10, with \(u^{N}(t=0)=u_{0}^{N}\in[0,1]^{\mathbb{T}^{d}_{N}}\) satisfying Assumption 2, converges to a solution of the scalar nonlinear reaction diffusion equation * when \(K_{N}\to K\) as \(N\to\infty\): \[\partial_{t}u(t,x)=2\alpha\Delta u(t,x)+\beta(1-2u(t,x))+g_{K}(u(t,x)),\] (6.1) * when \(K_{N}\to\infty\) as \(N\to\infty\): \[\partial_{t}u(t,x)=2\alpha\Delta u(t,x)+\beta(1-2u(t,x))+g_{\infty}(u(t,x)).\] (6.2) The main difference between the two equations is that in the first case (\(K<+\infty\)), \(g_{K}\) is a \(C^{1}\) function on \([0,1]\) (thus Lipschitz) so the reaction diffusion equation (6.1) is very classical, whereas for the second case (\(K=+\infty\)), since \(g_{\infty}\) is not even continuous we need to consider (6.2) as a subdifferential inclusion. The main results of this section are * Proposition 6.3 which proves existence of a solution, in a suitable sense, for the equation (6.2), * Proposition 6.4 which proves local uniqueness of the solution, in a suitable sense, for the equation (6.2), starting from a suitable class of initial conditions, * and Theorem 7.1 which proves that all accumulation points of \((u^{N})_{N}\) is a solution, in a suitable sense, for the equation (6.2). In the rest of this section we change our notations and define \(v=2u-1\). We center the solution around the constant steady state \(u=\frac{1}{2}\). It simplifies the presentation and proofs of our results. The original form of our equations can be retrieved by letting \(u=\frac{1}{2}(v+1)\). In such a way (6.1) takes the form \[\partial_{t}v(t,x)-2\alpha\Delta v(t,x)+2\beta v-2g_{K}\left(\frac{1}{2}(v+1) \right)=0. \tag{6.3}\] ### Solution of (6.1) Let us denote \(s(t,x,y)\) the semigroup of the operator \(\frac{1}{2}\Delta\) on \(\mathbb{T}^{d}\), that is, \[s(t,x,y)=\frac{1}{(2\pi t)^{d/2}}\sum_{k\in\mathbb{Z}^{d}}\exp\left(-\frac{\|x -y-k\|^{2}}{2t}\right). \tag{6.4}\] Denote also \(s_{0}(t,x,y)\) the semigroup of the operator \(\frac{1}{2}\Delta\) on \(\mathbb{R}^{d}\), \[s_{0}(t,x,y)=\frac{1}{(2\pi t)^{d/2}}\exp\left(-\frac{\|x-y\|^{2}}{2t}\right). \tag{6.5}\] Note that \(\xi\mapsto s_{0}(t,0,\xi)\) is the density of \(d\) independent normal random variables with variance \(t\). 
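The kernels (6.4)–(6.5) are straightforward to evaluate numerically: one truncates the periodization over \(k\in\mathbb{Z}^{d}\), the neglected images contributing only Gaussian tails. The following is a minimal sketch (assuming only numpy; the names `s0` and `s_torus` are ours and not notation used elsewhere in the text):

```python
import numpy as np

def s0(t, x, y):
    """Gaussian kernel s_0(t,x,y) on R^d, cf. (6.5)."""
    x, y = np.atleast_1d(x).astype(float), np.atleast_1d(y).astype(float)
    d = x.size
    return np.exp(-np.sum((x - y) ** 2) / (2 * t)) / (2 * np.pi * t) ** (d / 2)

def s_torus(t, x, y, trunc=5):
    """Kernel s(t,x,y) on T^d, cf. (6.4): periodize s_0 over k in Z^d with |k_i| <= trunc."""
    x, y = np.atleast_1d(x).astype(float), np.atleast_1d(y).astype(float)
    d = x.size
    axes = [np.arange(-trunc, trunc + 1)] * d
    ks = np.stack([g.ravel() for g in np.meshgrid(*axes, indexing="ij")], axis=-1)
    diffs = x - y - ks                                  # all shifts x - y - k
    return np.exp(-np.sum(diffs ** 2, axis=1) / (2 * t)).sum() / (2 * np.pi * t) ** (d / 2)

# sanity check in d = 1: x -> s(t,x,y) is a probability density on the torus
xs = np.linspace(0.0, 1.0, 400, endpoint=False)
print(np.mean([s_torus(0.1, xi, 0.3) for xi in xs]))    # Riemann sum, ~ 1.0
```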
Let us consider \((S^{\lambda,\gamma}_{t})\) the semigroup on \(L^{1}(\mathbb{T}^{d})\) defined by, for \(f\in L^{1}(\mathbb{T}^{d})\), \(\lambda\geqslant 0\) and \(\gamma>0\) \[S^{\lambda,\gamma}_{t}f(x)=\int_{\mathbb{T}^{d}}e^{-\lambda t}s(\gamma t,x,y) f(y)\mathrm{d}y=\int_{\mathbb{R}^{d}}e^{-\lambda t}s_{0}(\gamma t,x,y)\widetilde{ f}(y)\mathrm{d}y, \tag{6.6}\] where for a measurable function \(f\) on \(\mathbb{T}^{d}\), we denoted \(\widetilde{f}\) its extension on \(\mathbb{R}^{d}\) defined by \(\widetilde{f}(x)=f(x-[x])\). Another way to define \(S^{\lambda,\gamma}_{t}\) is to use the Brownian motion: denote by \(X\) a Brownian motion on \(\mathbb{R}^{d}\) starting from \(x\) on some probability space \((\Omega,\mathcal{F},\mathbb{P}_{x})\), indeed we have \(S^{\lambda,\gamma}_{t}f(x)=e^{-\lambda t}\mathbb{E}_{x}(\widetilde{f}(X_{ \gamma t}))\), and for all \(\lambda\geqslant 0\), and \(\gamma>0\), \(S^{\lambda,\gamma}\) is a \(C_{0}\)-contraction semigroup on \(L^{p}(\mathbb{T}^{d})\) for \(p\in[1,+\infty]\). As we will look at (6.1) in its mild form, the following result is crucial to study the regularity of the solution. **Proposition 6.1**.: _For \(v_{0}\in L^{\infty}(\mathbb{T}^{d})\) and \(g\in L^{\infty}([0,\tau]\times\mathbb{T}^{d})\), define_ \[v(t,x):=S^{\lambda,\gamma}_{t}v_{0}(x)+\int_{0}^{t}S^{\lambda,\gamma}_{t-s}(g (s,\cdot))(x)\mathrm{d}s. \tag{6.7}\] _then \(v\in C([0,\tau],\mathbb{T}^{d}).\)We have the following estimates, for all \((t,x)\in\mathbb{R}^{+}\times\mathbb{T}^{d}\),_ \[|v(t,x)|\leqslant e^{-\lambda t}\|v_{0}\|_{\infty}+\frac{1}{\lambda}(1-e^{- \lambda t})\|g\|_{\infty}\leqslant\|v_{0}\|_{\infty}+\frac{1}{\lambda}\|g\|_{\infty} \tag{6.8}\] _and for all \(\tau>0\), there exists a constant \(C\) depending only on \(\tau,\gamma,\lambda\) and \(d\), such that for all \((t,x),(s,y)\in[1/\tau,\tau]\times\mathbb{T}^{d}\) with \(s<t\)_ \[|v(t,x)-v(s,y)|\leqslant C((t-s)|\log(t-s)|+\|x-y\|)(\|g\|_{\infty}+\|v_{0}\|_{ \infty}). \tag{6.9}\] **Remark 6.1**.: \(v\) _is called a mild solution of the equation \(\partial_{t}v-\frac{\gamma}{2}\Delta v+\lambda v=g\) with initial value \(v_{0}\). The fact that a mild solution is a classical solution if \(g\) is sufficiently regular is a result from Pazy ([20], Corollary 4.2.5)._ Estimates (6.8) and (6.9) are quite standard but we include the proof for the sake of completeness. Proof of Proposition 6.1.: The fact that \(v\in C([0,\tau],L^{\infty}(\mathbb{T}^{d}))\) is a consequence of the fact that \(S^{\lambda,\gamma}\) is a \(C_{0}\) contraction semigroups on \(L^{\infty}\). 
For the first estimate (6.8), we have that: \[|v(t,x)| \leq|S_{t}^{\lambda,\gamma}v_{0}(x)|+\left|\int_{0}^{t}S_{t-u}^{ \lambda,\gamma}(g(u,\cdot))(x)\mathrm{d}u\right|\] \[\leq e^{-\lambda t}\|v_{0}\|_{\infty}\int_{\mathbb{R}^{d}}s_{0}( \gamma t,x,z)\mathrm{d}z+\|g\|_{\infty}\int_{0}^{s}e^{-\lambda(t-u)}\int_{ \mathbb{R}^{d}}s_{0}(\gamma(t-u),x,z)\mathrm{d}z\mathrm{d}u\] \[=e^{-\lambda t}\|v_{0}\|_{\infty}+\frac{1}{\lambda}(1-e^{- \lambda t})\|g\|_{\infty}.\] For the estimate (6.9), we start by letting \(s<t\), we have \[|v(t,x)-v(s,y)| \leq|S_{t}^{\lambda,\gamma}v_{0}(x)-S_{s}^{\lambda,\gamma}v_{0}(y )|+\left|\int_{0}^{t}S_{t-u}^{\lambda,\gamma}(g(u,\cdot))(x)\mathrm{d}u-\int_ {0}^{s}S_{s-u}^{\lambda,\gamma}(g(u,\cdot))(y)\mathrm{d}u\right|\] \[\leq I_{1}\|v_{0}\|_{\infty}+(I_{2}+I_{3})\|g\|_{\infty}\] where \[I_{1} =\int_{\mathbb{R}^{d}}\left|e^{-\lambda t}s_{0}(\gamma t,x,z)-e^{ -\lambda s}s_{0}(\gamma s,y,z)\right|\mathrm{d}z,\] \[I_{2} =\int_{0}^{s}\int_{\mathbb{R}^{d}}\left|e^{-\lambda(t-u)}s_{0}( \gamma(t-u),x,z)-e^{-\lambda(s-u)}s_{0}(\gamma(s-u),y,z)\right|\mathrm{d}z \mathrm{d}u,\] \[I_{3} =\int_{s}^{t}\int_{\mathbb{R}^{d}}e^{-\lambda(t-u)}s_{0}(\gamma(t -u),x,z)\mathrm{d}z\mathrm{d}u.\] We have that \(I_{3}\leq t-s\). For \(I_{1}\) and \(I_{2}\), we use the fact that, for \(i=1\ldots d\) \[\partial_{t}s_{0}(t,0,\xi) =\frac{1}{2}\Delta_{\xi}s_{0}(t,0,\xi)=\frac{1}{2}\left(\frac{\| \xi\|^{2}}{t^{2}}-\frac{d}{t}\right)s_{0}(t,0,\xi)\] \[\partial_{\xi_{i}}s_{0}(t,0,\xi) =-\frac{\xi_{i}}{t}s_{0}(t,0,\xi).\] First for \(I_{1}\), denote \(c_{1}(r)=rt+(1-r)s\) and \(c_{2}(r)=r(z-x)+(1-r)(z-y))=z-(rx+(1-r)y)\) for \(r\in[0,1]\), We have that \[|(\partial_{t}s_{0})(\gamma c_{1}(r),0,c_{2}(r))| \leq\frac{1}{2}\left(\frac{\|c_{2}(r)\|^{2}}{\gamma^{2}c_{1}(r)^ {2}}+\frac{d}{\gamma c_{1}(r)}\right)s_{0}(\gamma c_{1}(r),0,c_{2}(r))\] \[|(\partial_{\xi_{i}}s_{0})(\gamma c_{1}(r),0,c_{2}(r))| \leq\frac{|c_{2}(r)_{i}|}{\gamma c_{1}(r)}s_{0}(\gamma c_{1}(r),0,c_{2}(r))\] Therefore \[I_{1} =\int_{\mathbb{R}^{d}}\left|e^{-\lambda t}s_{0}(\gamma t,0,z-x)-e ^{-\lambda s}s_{0}(\gamma s,0,z-y)\right|\mathrm{d}z\] \[\leq\int_{\mathbb{R}^{d}}\int_{0}^{1}|c_{1}^{\prime}(r)\partial_{ t}(e^{-\lambda t}s_{0}(\gamma t,0,\xi))_{|t=c_{1}(r),\xi=c_{2}(r)}|\mathrm{d}r \mathrm{d}z\] \[\quad+\int_{\mathbb{R}^{d}}\int_{0}^{1}\sum_{i=1}^{d}|c_{2}^{ \prime}(r)_{i}\partial_{\xi_{i}}(e^{-\lambda t}s_{0}(\gamma t,0,\xi))_{|t=c_ {1}(r),\xi=c_{2}(r)}|\mathrm{d}r\mathrm{d}z\] \[\leq I_{1,1}+I_{1,2},\] where we have, by applying Fubini and a change of variable \[I_{1,1} =(t-s)\int_{\mathbb{R}^{d}}\int_{0}^{1}\left[\lambda e^{-\lambda c_{ 1}(r)}+\frac{\gamma}{2}\left(\frac{\|c_{2}(r)\|^{2}}{c_{1}(r)^{2}}+\frac{d}{c_{ 1}(r)}\right)\right]s_{0}(\gamma c_{1}(r),0,c_{2}(r))\mathrm{d}r\mathrm{d}z\] \[\leqslant(t-s)\int_{0}^{1}\left[\lambda e^{-\lambda c_{1}(r)}+ \frac{\gamma}{2}\left(\frac{d\gamma c_{1}(r)}{\gamma^{2}c_{1}(r)^{2}}+\frac{d}{ \gamma c_{1}(r)}\right)\right]\mathrm{d}r\] \[\leqslant(t-s)\int_{0}^{1}\left[\lambda e^{-\lambda c_{1}(r)}+ \frac{d}{c_{1}(r)}\right]\mathrm{d}r=\int_{s}^{t}\left(\lambda e^{-\lambda u} +\frac{d}{u}\right)\mathrm{d}u\] \[=e^{-\lambda s}(1-e^{-\lambda(t-s)})+d\log\left(\frac{t}{s}\right) \leqslant\left(\lambda+\frac{d}{s}\right)(t-s).\] For the term \(I_{1,2}\), we get, using Cauchy-Schwarz, Fubini and a change of variable \[I_{1,2} =\int_{\mathbb{R}^{d}}\int_{0}^{1}\sum_{i=1}^{d}|y_{i}-x_{i}|e^{ -\lambda c_{1}(r)}\frac{|c_{2}(r)_{i}|}{\gamma c_{1}(r)}s_{0}(\gamma 
c_{1}(r),0,c_{2}(r))\mathrm{d}r\mathrm{d}z\] \[\leqslant\|x-y\|\int_{0}^{1}\frac{e^{-\lambda c_{1}(r)}}{\gamma c _{1}(r)}\int_{\mathbb{R}^{d}}\|c_{2}(r)\|s_{0}(\gamma c_{1}(r),0,c_{2}(r)) \mathrm{d}z\mathrm{d}r=\|x-y\|\int_{0}^{1}\frac{e^{-\lambda c_{1}(r)}}{\sqrt{ \gamma c_{1}(r)}}\mathrm{d}rC_{1}\] \[\leqslant C_{1}\|x-y\|\int_{s}^{t}\frac{e^{-\lambda u}}{\sqrt{ \gamma u}}\frac{\mathrm{d}u}{t-s}\leqslant\frac{C_{1}}{\sqrt{\gamma s}}\|x-y\|\] where \(C_{1}\) is the expectation of the quadratic norm of \(X=(X_{1},X_{2},\ldots X_{d})\) of \(d\) independent standard normal variables: \(C_{1}=\mathbb{E}(\|X\|)\leqslant\mathbb{E}(\|X\|^{2})^{1/2}=\sqrt{d}\) (we also have \(C_{1}=\sqrt{2}\frac{\Gamma((d+1)/2)}{\Gamma(d/2)}\sim\sqrt{d}\)). We get that, for \(s\geqslant\frac{1}{T}\), \[I_{1}\leqslant\left(\lambda+\frac{d}{s}\right)(t-s)+\sqrt{\frac{d}{\gamma s}} \|x-y\|\leqslant\left(\lambda+d\tau\right)(t-s)+\sqrt{\frac{d\tau}{\gamma}}\|x -y\|.\] For \(I_{2}\), we make the same computations with \(c_{1}(r)=r(t-u)+(1-r)(s-u)=rt+(1-r)s-u\) where \(u\in[0,s]\), we have, since \(c_{1}^{\prime}(r)=t-s\), \[I_{2}\leqslant\int_{0}^{s}\int_{s-u}^{t-u}\left(\lambda e^{-\lambda v}+\frac{ d}{v}\right)\mathrm{d}v\mathrm{d}u+C_{1}\|x-y\|\int_{0}^{s}\int_{s-u}^{t-u}\frac{e^{- \lambda v}}{\sqrt{\gamma v}}\frac{\mathrm{d}v}{t-s}\mathrm{d}u\] For the first integral, we have \[\int_{0}^{s}\int_{s-u}^{t-u}\left(\lambda e^{-\lambda v}+\frac{d }{v}\right) \mathrm{d}v\mathrm{d}u=(1-e^{-\lambda(t-s)})\int_{0}^{s}e^{- \lambda(s-u)}\mathrm{d}u+d\int_{0}^{s}\log\left(\frac{t-u}{s-u}\right)\mathrm{ d}u\] \[=\frac{1}{\lambda}(1-e^{-\lambda(t-s)})(1-e^{-\lambda s})+d\left[ t\log(t)-s\log(s)-(t-s)\log(t-s)\right]\] \[\leqslant\lambda s(t-s)+d\left[s(\log(t)-\log(s))+(t-s)(\log(t)- \log(t-s))\right]\] \[\leqslant\lambda s(t-s)+d(t-s)+d(t-s)(|\log(t)|+|\log(t-s))|)\] \[\leqslant(\lambda s+d+|\log(t)|)(t-s)+d(t-s)|\log(t-s)|.\] For the second integral, we get \[\int_{0}^{s}\int_{s-u}^{t-u}\frac{e^{-\lambda v}}{\sqrt{\gamma v}}\mathrm{d}v \mathrm{d}u\leqslant(t-s)\int_{0}^{s}\frac{1}{\sqrt{\gamma(s-u)}}\mathrm{d}u=(t -s)\sqrt{\frac{2s}{\gamma}}.\] Then we obtain, for \(1/\tau\leqslant s<t\leqslant\tau\) \[I_{2} \leqslant(\lambda s+d+|\log(t)|)(t-s)+d(t-s)|\log(t-s)|+C_{1}\|x-y \|\sqrt{\frac{2s}{\gamma}}\] \[\leqslant(\lambda\tau+d+|\log(\tau)|)(t-s)+d(t-s)|\log(t-s)|+ \sqrt{\frac{2d\tau}{\gamma}}\|x-y\|.\] At last, we get the following estimate \[|v(t,x)-v(s,y)|\leqslant\left[(\lambda+d\tau)\left(t-s\right)+ \sqrt{\frac{d\tau}{\gamma}}\|x-y\|\right]\|v_{0}\|_{\infty}\] \[\qquad+\left[(\lambda\tau+|\log(\tau)|+d+1)(t-s)+d(t-s)|\log(t-s )|+\sqrt{\frac{2d\tau}{\gamma}}\|x-y\|\right]\|g\|_{\infty}.\] We modify a little our equation (6.1), both in order to obtain a sharper estimate on the uniform norm of the solution and to get a coherent notation with the solution of the limit equation when \(K\to+\infty\). We define for \(q\in[-1,1]\), \[r_{K}(q):=-\int_{\frac{1}{2}}^{\frac{1}{2}(q+1)}4g_{K}(s)ds. \tag{6.10}\] We have that \[r_{K}^{\prime}(q)=-2g_{K}\left(\frac{1}{2}(q+1)\right)=-(1-q)\mathbb{P}_{ \frac{1-q}{2}}\big{[}X<\kappa(K,T)\big{]}+(1+q)\mathbb{P}_{\frac{1+q}{2}}\big{[} X\leqslant\kappa(K,T)\big{]}. \tag{6.11}\] So we let \(h_{K}\) be \[h_{K}(q)=\begin{cases}-1\text{ for }q<-1\\ -r_{K}^{\prime}(q)+q\text{ for }q\in[-1,1]\\ 1\text{ for }q>1\end{cases} \tag{6.12}\] Since \(r_{K}^{\prime}(1)=0\) and \(r_{K}^{\prime}(-1)=0\), \(h_{K}\) is continuous on \(\mathbb{R}\). 
We have that, for \(q\in[-1,1]\), \[h_{K}(q)=-r_{K}^{\prime}(q)+q=(1-q)\mathbb{P}_{\frac{1-q}{2}}\big{[}X<\kappa(K,T)\big{]}-(1+q)\mathbb{P}_{\frac{1+q}{2}}\big{[}X\leqslant\kappa(K,T)\big{]}+q.\] We now solve the following equation \[\partial_{t}v(t,x)-2\alpha\Delta v(t,x)+(2\beta+1)v=h_{K}(v(t,x)). \tag{6.13}\] Note that, when \(\|v\|_{\infty}\leqslant 1\), this equation and (6.3) are exactly the same up to adding the term \(v\) on both sides, since \(h_{K}(q)=-r_{K}^{\prime}(q)+q\) for \(q\in[-1,1]\). Thus, a solution \(v\) of (6.3) with \(\|v\|_{\infty}\leqslant 1\) is also a solution of (6.13) and reciprocally.

**Proposition 6.2**.: _For \(v_{0}\in L^{\infty}(\mathbb{T}^{d})\) with \(\|v_{0}\|_{\infty}\leqslant 1\), there exists a unique solution \((v(t,x),t\geqslant 0,x\in\mathbb{T}^{d})\) to the problem_

* \(v\) _is continuous from_ \(\mathbb{R}^{*}_{+}\) _to_ \(L^{\infty}(\mathbb{T}^{d})\)_;_
* \(v\) _satisfies, for all_ \(t>0\) _and_ \(x\in\mathbb{T}^{d}\)_,_ \[v(t,x)=S_{t}^{2\beta+1,4\alpha}v_{0}(x)+\int_{0}^{t}S_{t-s}^{2\beta+1,4\alpha}[h_{K}(v(s,\cdot))](x)\mathrm{d}s. \tag{6.14}\]

_We say that \(v\) is a mild solution to (6.13). We have also that \(\|v\|_{\infty}\leqslant 1\) and \(v\) satisfies_ \[v(t,x)=S_{t}^{2\beta,4\alpha}v_{0}(x)-\int_{0}^{t}S_{t-s}^{2\beta,4\alpha}[r^{\prime}_{K}(v(s,\cdot))](x)\mathrm{d}s \tag{6.15}\] _and thus is a mild solution of (6.3)._

Proof.: The first part of the proposition comes from a fixed point argument (see also Pazy [20], Theorem 6.1.2) applied to the following functional: let \(\tau>0\) and define \(F:C(]0,\tau],L^{\infty}(\mathbb{T}^{d}))\to C(]0,\tau],L^{\infty}(\mathbb{T}^{d}))\) by \[F(v)(t,x):=S_{t}^{2\beta+1,4\alpha}v_{0}(x)+\int_{0}^{t}S_{t-s}^{2\beta+1,4\alpha}[h_{K}(v(s,\cdot))](x)\mathrm{d}s. \tag{6.16}\] We equip \(C(]0,\tau],L^{\infty}(\mathbb{T}^{d}))\) with the uniform topology on all compact subsets. We can apply the Banach fixed point theorem to \(F\) (see the proof of Pazy [20], Theorem 6.1.2). Moreover, the mapping \(v_{0}\mapsto v\) is Lipschitz continuous from \(L^{\infty}\) to \(C(]0,\tau],L^{\infty}(\mathbb{T}^{d}))\). An application of Proposition 6.1 proves that \(\|v\|_{\infty}\leqslant 1\). Since \(h_{K}\) is differentiable, if \(v_{0}\in C^{2}(\mathbb{T}^{d})\), then \(v_{0}\) is in the domain of \(\Delta\) and thus \(v\) is a classical solution of (6.13) (Theorem 6.1.5 of [20]). Thus \(v\) is a classical solution of (6.3), since \(\|v\|_{\infty}\leqslant 1\), and in particular a mild solution of (6.3). Now consider an approximating sequence \((v_{0,n})\) in \(C^{2}(\mathbb{T}^{d})\) of \(v_{0}\in L^{\infty}\), and let \((v_{n})\) be the sequence of mild solutions with initial value \(v_{0,n}\) and \(v\) the mild solution of (6.13) with initial value \(v_{0}\). Then, since \(v_{0}\mapsto v\) is Lipschitz continuous, by the dominated convergence theorem, we get that, uniformly on \([t_{0},\tau]\times\mathbb{T}^{d}\) for all \(t_{0}>0\), the right hand side of \[v_{n}(t,x)=S_{t}^{2\beta,4\alpha}v_{0,n}(x)-\int_{0}^{t}S_{t-s}^{2\beta,4\alpha}[r^{\prime}_{K}(v_{n}(s,\cdot))](x)\mathrm{d}s \tag{6.17}\] converges to \(S_{t}^{2\beta,4\alpha}v_{0}(x)-\int_{0}^{t}S_{t-s}^{2\beta,4\alpha}[r^{\prime}_{K}(v(s,\cdot))](x)\mathrm{d}s\), whereas the left hand side converges to \(v\). So we obtain that \(v\) is a mild solution of (6.3).

### Solution of (6.2)

#### 6.2.1 Existence of a solution

We use the same transform as before, and let \(h_{\infty}\) be the pointwise limit of \(h_{K}\).
The equation (6.2) is now formally \[\partial_{t}v(t,x)-2\alpha\Delta v(t,x)+(2\beta+1)v=h_{\infty}(v(t,x)). \tag{6.18}\] For the limiting equation, we first prove that the family \((v_{K})_{K}\) of solutions associated to \(h_{K}\) with common initial value \(v_{0}\in L^{\infty}(\mathbb{T}^{d})\) is compact in \(C(\mathbb{R}^{+}_{*}\times\mathbb{T}^{d})\) (with the uniform norm on all compact subsets). Then, by taking the limit, any accumulation point \(v_{\infty}\) of the sequence satisfies the mild formulation of the limiting equation, relaxed as a subdifferential inclusion. In order to prove this, we fix some notation: \[r_{\infty}(q):=-\int_{\frac{1}{2}}^{\frac{1}{2}(q+1)}4g_{\infty}(s)\mathrm{d}s. \tag{6.19}\] \(h_{\infty}\), the pointwise limit of \(h_{K}\), is the function on \(\mathbb{R}\) \[h_{\infty}(q)=2g_{\infty}\left(\frac{1+q}{2}\right)+q=-\mathbbm{1}_{q\leqslant-2\rho}+q\mathbbm{1}_{-2\rho<q\leqslant 2\rho}+\mathbbm{1}_{q>2\rho}. \tag{6.20}\] Then \(h_{\infty}\) is non-decreasing and is the left-derivative of the convex function \(H_{\infty}\), \[H_{\infty}(q):=-r_{\infty}(q)+\frac{q^{2}}{2}=[-q-2\rho+2\rho^{2}]\mathbbm{1}_{\{q\leqslant-2\rho\}}+\frac{1}{2}q^{2}\mathbbm{1}_{\{-2\rho<q\leqslant 2\rho\}}+[q-2\rho+2\rho^{2}]\mathbbm{1}_{\{2\rho<q\}}. \tag{6.21}\] The subdifferential of \(H_{\infty}\) at \(q\) is defined as \[\partial H_{\infty}(q)=\{p\in\mathbb{R}\,:\,H_{\infty}(q^{\prime})-H_{\infty}(q)\geqslant p(q^{\prime}-q)\text{ for all }q^{\prime}\in[-1,1]\}.\] In particular we have \[\partial H_{\infty}(q)=\begin{cases}\{-1\}&\text{for }q<-2\rho,\\ [-1,-2\rho]&\text{for }q=-2\rho,\\ \{q\}&\text{for }-2\rho<q<2\rho,\\ [2\rho,1]&\text{for }q=2\rho,\\ \{1\}&\text{for }q>2\rho.\end{cases} \tag{6.22}\] We adopt the following definition for a solution of the equation \[\partial_{t}v(t,x)-2\alpha\Delta v(t,x)+(2\beta+1)v\in\partial H_{\infty}(v(t,x)). \tag{6.23}\]

**Definition 6.1**.: _We say that \(v\) is a mild solution of Equation (6.23) if it satisfies, for some \(\tau>0\) and all \(t\leqslant\tau\),_ \[v(t,x)=S_{t}^{2\beta+1,4\alpha}v_{0}(x)+\int_{0}^{t}S_{t-s}^{2\beta+1,4\alpha}[w(s,\cdot)](x)\mathrm{d}s, \tag{6.24}\] _where \(w\in L^{2}([0,\tau]\times\mathbb{T}^{d})\), with \(w(t,x)\in\partial H_{\infty}(v(t,x))\) almost everywhere._

**Proposition 6.3**.: _For \(v_{0}\in L^{\infty}(\mathbb{T}^{d})\), any accumulation point (in \(C(\mathbb{R}^{*}_{+}\times\mathbb{T}^{d})\) equipped with the uniform norm on each compact set) of the sequence \((v_{K})\) of solutions given by Proposition 6.2 is a mild solution of (6.23)._

As a consequence of the proposition, there exists a mild solution \((v(t,x),\,t\geqslant 0,\,x\in\mathbb{T}^{d})\) of (6.23) such that \(v\) is continuous from \(\mathbb{R}^{*}_{+}\) to \(L^{\infty}(\mathbb{T}^{d})\), and \(\|v\|_{\infty}\leqslant 1\). The existence of a solution for a given initial condition \(v_{0}\) is not difficult and can be proved in different ways. Here we adopt a regularization procedure, since we have a natural family of differentiable functions (namely the \((h_{K})\)) approximating \(h_{\infty}\), and we use the convergence of the sequence \((v_{K})\) in the next section to prove the convergence of the stochastic process. We also present the proof because we need its arguments in order to prove Theorem 7.1. In Remark 6.3, we present another construction of solution(s) using the monotonicity of \(h_{\infty}\), which is interesting since it also gives insight into the problem of non-uniqueness.
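Before turning to the proof of Proposition 6.3, the relations (6.20)–(6.22) are easy to check numerically. The sketch below (assuming only numpy; \(\rho=0.2\) is an illustrative value and the names `h_inf`, `H_inf` are ours) verifies that \(H_{\infty}\) is convex and that its backward difference quotients reproduce \(h_{\infty}\), i.e. that \(h_{\infty}\) is indeed its left derivative.

```python
import numpy as np

rho = 0.2  # illustrative value; only 2*rho in (0,1) matters here

def h_inf(q):
    """h_infty of (6.20): -1 for q <= -2 rho, q on (-2 rho, 2 rho], +1 for q > 2 rho."""
    return np.where(q <= -2 * rho, -1.0, np.where(q <= 2 * rho, q, 1.0))

def H_inf(q):
    """H_infty of (6.21), the convex primitive of h_infty (normalized by H_infty(0)=0)."""
    return np.where(q <= -2 * rho, -q - 2 * rho + 2 * rho ** 2,
           np.where(q <= 2 * rho, 0.5 * q ** 2, q - 2 * rho + 2 * rho ** 2))

q = np.linspace(-1.0, 1.0, 2001)
dq = q[1] - q[0]
# backward difference quotients of H_infty approximate its left derivative h_infty
left_diff = (H_inf(q[1:]) - H_inf(q[:-1])) / dq
print(np.max(np.abs(left_diff - h_inf(q[1:]))))        # O(dq) discretization error
# convexity of H_infty: second differences are nonnegative
print(np.min(np.diff(H_inf(q), 2)) >= -1e-12)          # True
```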
Proof.: For each \(K\), we have a mild solution \(v_{K}\) from Proposition 6.2. From Proposition 6.1, we have that each solution is uniformly bounded, uniformly continuous on \([1/\tau,\tau]\times\mathbb{T}^{d}\), and the modulus of continuity only depends on \(\tau>1\) (since the others parameters are fixed). Therefore, by the Arzela-Ascoli Theorem, the sequence \((v_{K})_{K}\) is compact on \(C([1/\tau,\tau]\times\mathbb{R}^{d})\) and we can extract a subsequence converging uniformly in \(C([1/\tau,\tau]\times\mathbb{R}^{d})\), and then by a diagonal argument, a sequence converging to a limit \(v_{\infty}\) in \(C(]0,\infty[\times\mathbb{R}^{d})\), uniformly on each compact. Note that since \(\|v_{K}\|\leqslant 1\) for all \(K\), we also have \(\|v_{\infty}\|\leqslant 1\). We show that \(v_{\infty}\) satisfies (6.18). Let us assume that \(\rho>0\). Denote for any \(s>0\), \(\mathcal{A}_{\rho}(s):=\{y\in\mathbb{R}^{d}\colon|v_{\infty}(s,y)\pm 2\rho|=0\}\). Note that since \(h_{\infty}\) is uniformly continuous on \([-1,-2\rho[\), \(]-2\rho,2\rho[\) and \(]2\rho,1]\), we have that for any \(y\notin\mathcal{A}_{\rho}(s)\), \(\lim_{K\to\infty}h_{K}(v_{K}(s,y))=h_{\infty}(v_{\infty}(s,y))\). Let \(y\in\mathcal{A}_{\rho}(s)\) and assume \(v_{\infty}(s,y)=2\rho\) without loss of generality, since \(v_{K}(s,y)\) converges to \(v_{\infty}(s,y)\), for all \(\varepsilon>0\) such that \(2\rho-\varepsilon>0\), there exists \(K_{0}\) such that for all \(K\geq K_{0}\), \(2\rho-\varepsilon<v_{K}(s,y)<2\rho+\varepsilon\). Thus, using Lemma B.5, we have that, for all \(K\geq K_{0}\), \(2\rho-2\varepsilon\leq h_{K}(v_{K}(s,y))\leq 1\). Then taking the limits in \(K\) and \(\varepsilon\to 0\), we get \[h_{\infty}(2\rho)=2\rho\leq\liminf_{K\to\infty}h_{K}(v_{K}(s,y))\leq\limsup_{ K\to\infty}h_{K}(v_{K}(s,y))\leq 1=h_{\infty}(2\rho^{+}). \tag{6.25}\] For \(\rho=0\), we have the same inequality since then \(h_{\infty}(0)=-1\) and \(h_{\infty}(0+)=1\). Let \(w_{+}(s,y)=\limsup_{K\to\infty}h_{K}(v_{K}(s,y))\) and \(w_{-}(s,y)=\liminf_{K\to\infty}h_{K}(v_{K}(s,y))\), thus we have that for all \((s,y)\in]0,+\infty[\times\mathbb{T}^{d}\): \[h_{\infty}(v_{\infty}(s,y))\leq w_{-}(s,y)\leq w_{+}(s,y)\leq h_{\infty}(v_{ \infty}(s,y)^{+}). \tag{6.26}\] Since \(h_{K}(v_{K})\) is bounded, by the Banach Alaoglu Theorem, the sequence is weakly compact in \(L^{2}(]0,\tau[\times\mathbb{T}^{d})\), and we have a subsequence of \((h_{K}(v_{K}))_{K}\) converging weakly to \(w\in L^{2}_{loc}(]0,+\infty[\times\mathbb{T}^{d})\). Since the density of the semigroup \(S^{2\beta+1,4\alpha}\) is in \(L^{2}(]0,\tau]\times\mathbb{R}^{d})\) for all \(T>0\), we have as \(K\to+\infty\), \[v_{\infty}(t,x)=S^{2\beta+1,4\alpha}_{t}v_{0}(x)+\int_{0}^{t}S^{2\beta+1,4 \alpha}_{t-s}w(s,\cdot)(x)\mathrm{d}s. \tag{6.27}\] Moreover, \(w_{-}\) and \(w_{+}\) are bounded and therefore in \(L^{2}(]0,\tau]\times\mathbb{T}^{d})\). Let \(\varphi\in L^{2}(]0,\tau]\times\mathbb{T}^{d})\) and \(\varphi\geq 0\), by the Fatou Lemma we get \[0=\int_{]0,\tau]\times\mathbb{T}^{d}}\lim_{K\to+\infty}(h_{K}(v _{K})-w_{-})\varphi\] \[\leq\liminf_{K\to+\infty}\int_{]0,\tau]\times\mathbb{T}^{d}}(h_{K} (v_{K})-w_{-})\varphi=\int_{]0,\tau]\times\mathbb{T}^{d}}(w-w_{-})\varphi. \tag{6.28}\] We also have \[0=\int_{]0,\tau]\times\mathbb{T}^{d}}\liminf_{K\to+\infty}(w_{+}-h _{K}(v_{K}))\varphi\] \[\leq\liminf_{K\to+\infty}\int_{]0,\tau]\times\mathbb{T}^{d}}(w_{+} -h_{K}(v_{K}))\varphi=\int_{]0,\tau]\times\mathbb{T}^{d}}(w_{+}-w)\varphi. 
\tag{6.29}\] Thus, almost everywhere on \(]0,+\infty[\times\mathbb{T}^{d}\), we have that \[h_{\infty}(v_{\infty})\leq w_{-}\leq w\leq w_{+}\leq h_{\infty}(v_{\infty}^{+ }). \tag{6.30}\] Therefore, \(w\in\partial H_{\infty}(v_{\infty})\) a.e. #### 6.2.2 Uniqueness of solution of (6.23) The main problem concerns the uniqueness of a solution. We prove first that, we do not have uniqueness for a constant initial condition \(v_{0}(x)=2\rho\) when \(2\rho<\frac{1}{1+2\beta}\), so we are in the case of segregation or metastable segregation described by Figure 1. **Remark 6.2**.: _We describe three possible solutions starting from the initial condition \(v_{0}(x)=2\rho\) when \(2\rho<\frac{1}{1+2\beta}\)._ _Note that_ \[S^{2\beta+1,4\alpha}_{t}v_{0}(x)=\int_{\mathbb{R}^{d}}e^{-(2\beta+1)t}s_{0}(t, x,y)2\rho\mathrm{d}y=2\rho e^{-(2\beta+1)t}. \tag{6.31}\] _Suppose that \(v\) does not depend on \(x\), \(v(t,x)=c(t)\) for all \(x\in\mathbb{T}^{d}\), we have_ \[\int_{0}^{t}S_{t-s}^{2\beta+1,4\alpha}[h_{\infty}(v(s,\cdot))](x) \mathrm{d}s=\int_{0}^{t}h_{\infty}(c(s))e^{-(2\beta+1)(t-s)}\mathrm{d}s. \tag{6.32}\] _Let us consider the functions_ \[v^{1}(t,x)=c^{1}(t) =2\rho e^{-2\beta t} \tag{6.33}\] \[v^{2}(t,x)=c^{2}(t) =2\rho e^{-(2\beta+1)t}+\frac{1}{1+2\beta}\left(1-e^{-(2\beta+1)t }\right)\] \[=2\rho+\left(\frac{1}{1+2\beta}-2\rho\right)\left(1-e^{-(2\beta+ 1)t}\right). \tag{6.34}\] _Since, \(c^{1}(t)\in[0,2\rho[\), for \(t>0\), we have \(h_{\infty}(c^{1}(t))=c^{1}(t)\) and then_ \[\int_{0}^{t}h_{\infty}(c^{1}(s))e^{-(2\beta+1)(t-s)}\mathrm{d}s= \int_{0}^{t}2\rho e^{-2\beta s}e^{-(2\beta+1)(t-s)}\mathrm{d}s=2\rho e^{-(2 \beta+1)t}(e^{t}-1). \tag{6.35}\] _Therefore_ \[S_{t}^{2\beta+1,4\alpha}v_{0}(x)+\int_{0}^{t}S_{t-s}^{2\beta+1, 4\alpha}[h_{\infty}(v^{1}(s,\cdot))](x)\mathrm{d}s =2\rho e^{-(2\beta+1)t}+2\rho e^{-(2\beta+1)t}(e^{t}-1)\] \[=2\rho e^{-2\beta t}=v^{1}(t,x). \tag{6.36}\] _Since \(2\rho<\frac{1}{1+2\beta}\), \(c^{2}(t)\in]2\rho,1]\), for \(t>0\), we have \(h_{\infty}(c^{2}(t))=1\) and then by the same computation,_ \[\int_{0}^{t}h_{\infty}(c^{2}(s))e^{-(2\beta+1)(t-s)}\mathrm{d}s= \frac{1}{1+2\beta}(1-e^{-(2\beta+1)t}). \tag{6.37}\] _Therefore, we also have_ \[S_{t}^{2\beta+1,4\alpha}v_{0}(x)+\int_{0}^{t}S_{t-s}^{2\beta+1,4 \alpha}[h_{\infty}(v^{2}(s,\cdot))](x)\mathrm{d}s=v^{2}(t,x). \tag{6.38}\] _Thus, both \(v^{1}\) and \(v^{2}\) are mild solutions to (6.18) and thus to (6.23) with the same initial conditions. Note that at \(t=0\), we have \(v^{1}(0,x)=v^{2}(0,x)=2\rho\) and_ \[\partial_{t}v^{1}(0,x) =-4\beta\rho=-(2\beta+1)2\rho+2\rho=-(2\beta+1)2\rho+h_{\infty}(2 \rho^{-}) \tag{6.39}\] \[\partial_{t}v^{2}(0,x) =-2\rho(2\beta+1)+1=-(2\beta+1)2\rho+h_{\infty}(2\rho^{+}). \tag{6.40}\] _We see that non uniqueness comes from the fact that at \(t=0\), where \(v(t,x)=2\rho\), we have at least two choices for the derivative due to the fact that \(h_{\infty}\) is not continuous._ _Note that if we consider the mild solution to the subdifferential inclusion (6.23), then we have at least a third solution: \(v^{3}(t,x)=2\rho\). 
We consider \(w(t,x)=2\rho(2\beta+1)\); we have_ \[S_{t}^{2\beta+1,4\alpha}v_{0}(x)+\int_{0}^{t}S_{t-s}^{2\beta+1,4\alpha}[w(s,\cdot)](x)\mathrm{d}s=2\rho e^{-(2\beta+1)t}+2\rho(2\beta+1)\int_{0}^{t}e^{-(2\beta+1)(t-s)}\mathrm{d}s=2\rho. \tag{6.41}\] _Since \(2\rho<\frac{1}{1+2\beta}\), we have that \(2\rho<2\rho(2\beta+1)<1\), so \(w(t,x)\in\partial H_{\infty}(2\rho)\)._

Therefore, we cannot expect uniqueness for all initial conditions; we have to impose some condition on the initial condition if we want a unique solution. In the literature, we can find different conditions ensuring that the solution of Equation (6.23) is unique. Adapting [10] and [4], we prove that the regularity of the initial condition at the levels where the non-linearity is not continuous is sufficient.

**Definition 6.2**.: _A function \(v_{0}\in C^{1}(\mathbb{T}^{d})\) is regular at level \(a\in\mathbb{R}\) if for all \(x\in\mathbb{T}^{d}\) such that \(v_{0}(x)=a\), we have \(\nabla v_{0}(x)\neq 0\)._

**Proposition 6.4**.: _For \(v_{0}\in C^{1}(\mathbb{T}^{d})\) with \(\|v_{0}\|_{\infty}\leqslant 1\), such that \(\nabla v_{0}\) is Lipschitz on \(\mathbb{T}^{d}\) and \(v_{0}\) is regular at levels \(2\rho\) and \(-2\rho\), the solution to Equation (6.23) is locally unique. Moreover, the Lebesgue measure of the set \(\{(t,x)\,:\,|v(t,x)|=2\rho\}\) is zero._

We adapt two arguments by [10] and [4].

**Lemma 6.5**.: _If \(v\) is a mild solution of (6.23) with \(v(0,\cdot)=v_{0}\), and such that \(v_{0}\) and \(\nabla v_{0}\) are Lipschitz on \(\mathbb{T}^{d}\), then, for all \(\tau>0\), there exists a constant \(C\) such that, for all \(t\leqslant\tau\),_ \[\|v(t)-v_{0}\|_{\infty}+\|\nabla v(t)-\nabla v_{0}\|_{\infty}\leqslant C\,t^{1/2}. \tag{6.42}\]

Proof.: Since \(v\) is a mild solution of (6.23), there exists \(w\in L^{2}([0,\tau]\times\mathbb{T}^{d})\) with \(w(t,x)\in\partial H_{\infty}(v(t,x))\) a.e., and \(\|w\|_{\infty}\leqslant 1\) since \(\partial H_{\infty}\) takes its values in \([-1,1]\). Thus we get \[v(t,x)-v_{0}(x)=S_{t}^{2\beta+1,4\alpha}v_{0}(x)-v_{0}(x)+\int_{0}^{t}S_{t-s}^{2\beta+1,4\alpha}[w(s,\cdot)](x)\mathrm{d}s. \tag{6.43}\] Then, we have for the last integral \[\left|\int_{0}^{t}S_{t-s}^{2\beta+1,4\alpha}[w(s,\cdot)](x)\mathrm{d}s\right|\leqslant\int_{0}^{t}e^{-(2\beta+1)(t-s)}\mathrm{d}s\leqslant t.\] We have also \[|S_{t}^{2\beta+1,4\alpha}v_{0}(x)-v_{0}(x)|\leqslant(1-e^{-(2\beta+1)t})\|v_{0}\|_{\infty}+\int_{\mathbb{R}^{d}}s_{0}(4\alpha t,x,y)|\widetilde{v}_{0}(y)-v_{0}(x)|\mathrm{d}y\leqslant(2\beta+1)t\,\|v_{0}\|_{\infty}+L\int_{\mathbb{R}^{d}}\|x-y\|\,s_{0}(4\alpha t,x,y)\mathrm{d}y\leqslant(2\beta+1)t\,\|v_{0}\|_{\infty}+L\sqrt{4\alpha d}\,t^{1/2},\] where \(L\) is the Lipschitz constant of \(v_{0}\) and the third inequality comes from the computation of the upper bound of the quadratic norm of \(d\) independent random variables with common variance. Therefore we obtain \[\|v(t)-v_{0}\|_{\infty}\leqslant\left((2\beta+1)\|v_{0}\|_{\infty}\sqrt{\tau}+L\sqrt{4\alpha d}+\sqrt{\tau}\right)t^{1/2}. \tag{6.44}\] For the second bound, we use the fact that, by an integration by parts, \(\partial_{x_{i}}\big{(}S_{t}^{2\beta+1,4\alpha}v_{0}\big{)}=S_{t}^{2\beta+1,4\alpha}(\partial_{x_{i}}v_{0})\); thus we have \[\partial_{x_{i}}v(t,x)=S_{t}^{2\beta+1,4\alpha}(\partial_{x_{i}}v_{0})(x)+\int_{0}^{t}\int_{\mathbb{R}^{d}}e^{-(2\beta+1)(t-s)}\partial_{x_{i}}s_{0}(4\alpha(t-s),x,y)w(s,y)\mathrm{d}y\mathrm{d}s. \tag{6.45}\] Then, we have \[\partial_{x_{i}}v(t,x)-\partial_{x_{i}}v_{0}(x)=S_{t}^{2\beta+1,4\alpha}(\partial_{x_{i}}v_{0})(x)-\partial_{x_{i}}v_{0}(x)+\int_{0}^{t}\int_{\mathbb{R}^{d}}e^{-(2\beta+1)(t-s)}\partial_{x_{i}}s_{0}(4\alpha(t-s),x,y)w(s,y)\mathrm{d}y\mathrm{d}s. \tag{6.46}\] We can treat both terms as before; for the last integral: \[\left|\int_{0}^{t}\int_{\mathbb{R}^{d}}e^{-(2\beta+1)(t-s)}\partial_{x_{i}}s_{0}(4\alpha(t-s),x,y)w(s,y)\mathrm{d}y\mathrm{d}s\right|\] \[\qquad\leqslant\int_{0}^{t}\frac{e^{-(2\beta+1)(t-s)}}{4\alpha(t-s)}\int_{\mathbb{R}^{d}}|x_{i}-y_{i}|s_{0}(4\alpha(t-s),x,y)\mathrm{d}y\mathrm{d}s\] \[\qquad\leqslant\int_{0}^{t}\frac{e^{-(2\beta+1)(t-s)}}{4\alpha(t-s)}\sqrt{2\alpha(t-s)}\mathrm{d}s\] \[\qquad\leqslant\frac{1}{\sqrt{2\alpha}}\int_{0}^{\sqrt{t}}e^{-(2\beta+1)u^{2}}\mathrm{d}u\leqslant\frac{\sqrt{t}}{\sqrt{2\alpha}}.\] For the first integral, we have the same estimates as before \[|S_{t}^{2\beta+1,4\alpha}(\partial_{x_{i}}v_{0})(x)-\partial_{x_{i}}v_{0}(x)|\leqslant L^{\prime}\sqrt{4\alpha d}t^{1/2} \tag{6.47}\] where \(L^{\prime}\) is the maximum of the Lipschitz constants of \((\partial_{x_{i}}v_{0})_{i}\). Thus, we get \[\|\nabla v(t)-\nabla v_{0}\|_{\infty}\leqslant\left(L^{\prime}\sqrt{4\alpha d}+\frac{1}{\sqrt{2\alpha}}\right)t^{1/2} \tag{6.48}\]

We now prove Proposition 6.4.

Proof.: Let us assume that we have two solutions, \(v_{1}\) and \(v_{2}\), and let \(e(t)=\|v_{1}-v_{2}\|_{L^{\infty}([0,t]\times\mathbb{T}^{d})}\). Note that the previous Lemma entails that, for all \(\tau>0\), there exists \(C\) such that for \(t<\tau\) we have \(e(t)\leqslant C\sqrt{t}\). We define \(I_{s,t}^{+}=\{(s,y),s\leqslant t,|v_{1}(s,y)-2\rho|\leqslant e(t)\}\) and \(I_{s,t}^{-}=\{(s,y),s\leqslant t,|v_{1}(s,y)+2\rho|\leqslant e(t)\}\). Since \(v_{1}\) and \(v_{2}\) are solutions of (6.23), there exist \(w_{1}\) and \(w_{2}\) such that \(w_{1}\in\partial H_{\infty}(v_{1})\) a.e. and \(w_{2}\in\partial H_{\infty}(v_{2})\) a.e.
We can decompose each \(w_{i}\) as \(w_{i}(t,x)=f_{\infty}(v_{i}(t,x))+g_{i}(t,x)\) where \(f_{\infty}\) is the continuous part of \(\partial H_{\infty}\): \[f_{\infty}(q)=\left\{\begin{aligned} -2\rho&\text{ for }q\in[-1,-2\rho]\\ q&\text{ for }q\in[-2\rho,2\rho]\\ 2\rho&\text{ for }q\in[2\rho,1]\end{aligned}\right. \tag{6.49}\] and \(g_{i}(t,x)=w_{i}(t,x)-f_{\infty}(v_{i}(t,x))\). Note that \[g_{i}(t,x) =-1+2\rho\text{ a.e. on }\{(t,x),v_{i}(t,x)<-2\rho\}, \tag{6.50}\] \[g_{i}(t,x) =0\text{ a.e. on }\{(t,x),-2\rho<v_{i}(t,x)<2\rho\},\] \[g_{i}(t,x) =1-2\rho\text{ a.e. on }\{(t,x),v_{i}(t,x)>2\rho\},\] since \(w_{i}=h_{\infty}(v_{i})\) a.e. on \(\{(t,x),|v_{i}(t,x)|=2\rho\}\). As a consequence we have that, up to a negligible set, \(\{(s,y),s\leqslant t,g_{1}(s,y)\neq g_{2}(s,y)\}\subset I_{s,t}^{+}\cup I_{s,t}^{-}\) since, \(g_{1}(s,y)\neq g_{2}(s,y)\) entails that one of the following inequalities is true \(v_{2}(s,y)<2\rho<v_{1}(s,y)\) or \(v_{2}(s,y)<-2\rho<v_{1}(s,y)\) or \(v_{1}(s,y)<2\rho<v_{2}(s,y)\) or \(v_{1}(s,y)<-2\rho<v_{2}(s,y)\). For each case, the inclusion is true: for the first one for example, if \(s\leqslant t\) and \(v_{2}(s,y)<2\rho<v_{1}(s,y)\) \[e(t)\geqslant v_{1}(s,y)-v_{2}(s,y)=v_{1}(s,y)-2\rho+2\rho-v_{2}(s,y)\geqslant v _{1}(s,y)-2\rho\geqslant 0 \tag{6.51}\] thus \((s,y)\in I_{s,t}^{+}\). The same is true for the other cases. Therefore we obtain the following expression for the difference \(v_{1}-v_{2}\): \[\begin{split} v_{1}(t,x)-v_{2}(t,x)=&\int_{0}^{t}\int _{\mathbb{T}^{d}}e^{-(2\beta+1)(t-s)}s(4\alpha(t-s),x,y)(f_{\infty}(v_{1}(s,y))- f_{\infty}(v_{2}(s,y)))\mathrm{d}y\mathrm{d}s\\ &+\int_{0}^{t}\int_{I_{s,t}^{+}\cup I_{s,t}^{-}}e^{-(2\beta+1)(t- s)}s(4\alpha(t-s),x,y)(g_{1}(s,y)-g_{2}(s,y))\mathrm{d}y\mathrm{d}s.\end{split} \tag{6.52}\] For the first integral in (6.52) we note that \(f_{\infty}\) is \(1\)-Lipschitz, thus \[\begin{split}&\left|\int_{0}^{t}\int_{\mathbb{T}^{d}}e^{-(2\beta+1) (t-s)}s(4\alpha(t-s),x,y)(f_{\infty}(v_{1}(s,y))-f_{\infty}(v_{2}(s,y))) \mathrm{d}y\mathrm{d}s\right|\\ &\leq\int_{0}^{t}\int_{\mathbb{T}^{d}}e^{-(2\beta+1)(t-s)}s(4 \alpha(t-s),x,y)e(t)\mathrm{d}y\mathrm{d}s\leq te(t).\end{split}\] For the second integral in (6.52) we note that \(|g_{i}|\leq 1-2\rho\), then we first have \[\begin{split}&\left|\int_{0}^{t}\int_{I_{s,t}^{+}\cup I_{s,t}^{-}}e^{ -(2\beta+1)(t-s)}s(4\alpha(t-s),x,y)(g_{1}(s,y)-g_{2}(s,y))\mathrm{d}y\mathrm{ d}s\right|\\ &\leq 2(1-2\rho)\int_{0}^{t}\int_{I_{s,t}^{+}\cup I_{s,t}^{-}}e^{-(2 \beta+1)(t-s)}s(4\alpha(t-s),x,y)\mathrm{d}y\mathrm{d}s.\end{split}\] Let \(s\leq t\), since \(v_{0}\) is regular on the level set \(\{v_{0}=2\rho\}\) which is compact (since \(\mathbb{T}^{d}\) is) and \(\nabla v_{0}\) is a Lipschitz function, we can find \(\delta,\eta>0\) such that on \(\{v_{0}=2\rho\}+B_{\delta}(0)\), \(|\nabla v_{0}(x)|>\eta\). Using the second part of Lemma 6.5, and since \(e(t)\leq C\sqrt{t}\), there exists \(T>0\) such that for \(s\leq t\leq T\), \(I_{s,t}^{+}\subset\{v_{0}=2\rho\}+B_{\delta}(0)\) and on \(I_{s,t}^{+}\), \(|\nabla v_{1}(s)|>\eta/2\). Since \(I_{s,t}^{+}\) is compact and \(\nabla v_{1}(s)\neq 0\), by the implicit function theorem, we can find a finite cover by open balls \((B_{i})_{1\leq i\leq N}\) centered on points on \(I_{s,t}^{+}\) such that locally on each ball \(B_{i}\), the level set \(\{v_{1}(s,y)=2\rho\}\) is the graph of a function, e.g \(y_{1}=\varphi(y_{2},\ldots y_{d})\). 
Note that since \(\{v_{0}=2\rho\}\) is compact, \(N\) is uniform in \(s\leq T\), since by the lemma, we can make the cover of open balls on \(\{v_{0}=2\rho\}\) and take their traces on \(\{v_{1}(s,y)=2\rho\}\). By the mean value theorem on the first coordinate \(y_{1}\) of \(v_{1}(s)\), we have \(I_{s,t}^{+}\cap B_{i}\subset[-2e(t)/\nu,2e(t)/\nu]\times\Pi_{1}(B_{i})\), where \(\Pi_{1}\) is the projection along the first coordinate. Thus, \[\begin{split}&\int_{0}^{t}\int_{I_{s,t}^{+}\cap B_{i}}e^{-(2 \beta+1)(t-s)}s(4\alpha(t-s),x,y)\mathrm{d}y\mathrm{d}s\\ &\quad\leq\int_{0}^{t}e^{-(2\beta+1)(t-s)}\frac{4e(t)}{\nu\sqrt{4 \alpha(t-s)}}\int_{\Pi_{1}(B_{i})}s(4\alpha(t-s),0,(0,y_{2},\ldots y_{d})) \mathrm{d}y_{2}\cdots\mathrm{d}y_{d}\mathrm{d}s\\ &\quad\leq\int_{0}^{t}e^{-(2\beta+1)(t-s)}\frac{2e(t)}{\nu\sqrt{ \alpha(t-s)}}\mathrm{d}s\leq\frac{2e(t)t^{1/2}}{\nu\sqrt{\alpha}}.\end{split}\] Thus \[\int_{0}^{t}\int_{I_{s,t}^{+}}e^{-(2\beta+1)(t-s)}s(4\alpha(t-s),x,y)\mathrm{ d}y\mathrm{d}s\leq\frac{2Ne(t)t^{1/2}}{\nu\sqrt{\alpha}}. \tag{6.53}\] Since the same holds for \(I_{s,t}^{-}\), we obtain, that for some constant \(C>0\), and all \(t<\tau\) \[|v_{1}(t,x)-v_{2}(t,x)|\leq(t+Ct^{1/2})e(t)\leq(\tau+C\tau^{1/2})e(\tau). \tag{6.54}\] Then \(e(\tau)\leq(\tau+C\tau^{1/2})e(\tau)\), and taking \(\tau\) small enough, we obtain \(e(\tau)=0\) thus \(v_{1}=v_{2}\) on \([0,\tau]\times\mathbb{T}^{d}\) **Remark 6.3**.: _Maximal and minimal solutions. Another approach to (6.2) is to use a monotone construction of solutions, which arises from a comparison principle close to the one developed in Proposition 5.1. This was done initially in [2] and also in [4] Define \(\overline{h_{\infty}}\) the right continuous version of \(h_{\infty}\) (Equation 6.20) by_ \[\overline{h_{\infty}}(q):=-\mathbb{1}_{q<-2\rho}+q\mathbb{1}_{-2\rho<q<2\rho}+ \mathbb{1}_{q\geq 2\rho}. \tag{6.55}\] _Note that \(h_{\infty}\) and \(\overline{h_{\infty}}\) are non decreasing (recall that \(2\rho\in[0,1]\)). Recall that \((S_{t})\) is the semigroup on \(L^{1}(\mathbb{T}^{d})\) associated to \(-2\alpha\Delta\), we denote_ \[\underline{F}(v)(t,x) :=e^{-(2\beta+1)t}S_{t}v_{0}(x)+\int_{0}^{t}e^{-(2\beta+1)(t-s)}S _{t-s}[h_{\infty}(v(s,\cdot))](x)\mathrm{d}s \tag{6.56}\] \[\overline{F}(v)(t,x) :=e^{-(2\beta+1)t}S_{t}v_{0}(x)+\int_{0}^{t}e^{-(2\beta+1)(t-s)}S _{t-s}[\overline{h_{\infty}}(v(s,\cdot))](x)\mathrm{d}s. \tag{6.57}\] _Then, fixed points of the maps above are mild solutions of these two formulations of our subdifferential inclusion:_ \[\partial_{t}v(t,x)-2\alpha\Delta v(t,x)+(2\beta+1)v(t,x) =h_{\infty}(v(t,x)) \tag{6.58}\] \[\partial_{t}v(t,x)-2\alpha\Delta v(t,x)+(2\beta+1)v(t,x) =\overline{h_{\infty}}(v(t,x)) \tag{6.59}\] _Since \(h_{\infty}\) (resp. \(\overline{h_{\infty}}\)) is non decreasing and that \(h_{\infty}(p)\leq\overline{h_{\infty}}(p)\) for all \(p\in[-1,1]\), we have that, for \(u,v\) two functions such that \(-1\leq v\leq u\leq 1\),_ 1. \(\underline{F}(v)(t,x)\leq\underline{F}(u)(t,x)\)__ 2. \(\overline{F}(v)(t,x)\leq\overline{F}(u)(t,x)\)__ 3. 
\(\underline{F}(u)(t,x)\leq\overline{F}(u)(t,x)\)__

_We define the sequences \((V^{n})_{n}\) and \((W^{n})_{n}\) of functions on \(\mathbb{R}^{+}\times\mathbb{T}^{d}\): \(V^{0}(t,x)=1\), \(W^{0}(t,x)=-1\) and for all \(n\geq 1\)_ \[V^{n}(t,x) =e^{-(2\beta+1)t}S_{t}v_{0}(x)+\int_{0}^{t}e^{-(2\beta+1)(t-s)}S_{t-s}[\overline{h_{\infty}}(V^{n-1}(s,\cdot))](x)\mathrm{d}s \tag{6.60}\] \[W^{n}(t,x) =e^{-(2\beta+1)t}S_{t}v_{0}(x)+\int_{0}^{t}e^{-(2\beta+1)(t-s)}S_{t-s}[h_{\infty}(W^{n-1}(s,\cdot))](x)\mathrm{d}s. \tag{6.61}\] _Thus, for \(-1\leq v_{0}(x)\leq 1\), we can prove by induction that the sequences \((V^{n})\) and \((W^{n})\) satisfy, for all \(n\)_ \[-1\leq W^{1}\leq W^{2}\leq\cdots\leq W^{n}\leq V^{n}\leq\cdots\leq V^{2}\leq V^{1}\leq 1. \tag{6.62}\] _By a compactness and monotonicity argument, one can prove that \((W^{n})\) and \((V^{n})\) converge to functions \(w\) and \(v\) which are mild solutions of the subdifferential inclusion. These are the minimal and maximal solutions of the subdifferential inclusion, in the sense that any other solution (Definition 6.1) must be bounded below by \(w\) and above by \(v\). Uniqueness follows if one can prove that \(w=v\)._

## 7 Convergence of the discrete PDE

In analogy with the continuous setting, we define \(v^{N}=2u^{N}-1\) where \(u^{N}\) is the solution of the discretized Equation (4.10) and \(H(i,v^{N})=2G(i,\frac{v^{N}+1}{2})+v^{N}(i)\). In such a way (4.10) becomes \[\begin{cases}\partial_{t}v^{N}(t,i)=2\alpha N^{2}\Delta^{N}v^{N}(t,i)-(2\beta+1)v^{N}(t,i)+H(i,v^{N})\\ v^{N}(0,i)=2u_{0}^{N}(i)-1.\end{cases} \tag{7.1}\] The main goal of this section is to prove the following result, which states the convergence of \(v^{N}\).

**Theorem 7.1**.: _Let \(v^{N}\) be the solution of (7.1). Then \((v^{N})\) is pre-compact for the uniform convergence on each compact set of \(\mathbb{T}^{d}\times]0,+\infty[\) and any accumulation point \(v_{\infty}\) is a solution of (6.23). In particular, whenever the solution of (6.23) is a.e. unique, \(v_{\infty}\) is also (the) mild solution of (6.2) and the whole sequence \(v^{N}\) converges to \(v_{\infty}\), uniformly on all compact sets of \(\mathbb{T}^{d}\times]0,+\infty[\)._

To prove Theorem 7.1 we need some technical results. Let us consider the semigroup of the discrete Laplacian \(\frac{1}{2}N^{2}\Delta^{N}\) on \(\mathbb{T}^{d}_{N}\) and \(\mathbb{Z}^{d}\), denoted by \(s^{N}(t,i,j)\) and \(s^{N}_{0}(t,i,j)\) respectively. In particular we have that \(s^{N}(t,i,j)=p_{N}(t,i,j)\), where \(p_{N}(t,i,j)\) is the heat kernel of the discrete Laplacian on the discrete torus, cf. (5.2). For any \(\lambda,\gamma\geqslant 0\) and \(f:\frac{1}{N}\mathbb{T}^{d}_{N}\to\mathbb{R}\), we let \((S^{N,\lambda,\gamma}_{t})\) be the semigroup defined by \[S^{N,\lambda,\gamma}_{t}f(x)=\sum_{y\in\frac{1}{N}\mathbb{T}^{d}_{N}}e^{-\lambda t}s^{N}(\gamma t,Nx,Ny)f(y)=\sum_{y\in\frac{1}{N}\mathbb{Z}^{d}}e^{-\lambda t}s^{N}_{0}(\gamma t,Nx,Ny)\widetilde{f}(y) \tag{7.2}\] where, as in (6.6), \(\widetilde{f}\) is the periodic extension of \(f\) to \(\frac{1}{N}\mathbb{Z}^{d}\). In the remaining part of this article we will consider \(S^{N,\lambda,\gamma}_{t}f(x)\) with \(f\in\mathcal{C}(\mathbb{T}^{d})\). In that case we mean that the function \(f\) is restricted to \(\frac{1}{N}\mathbb{T}^{d}_{N}\subset\mathbb{T}^{d}\), which is equivalent to considering \(f^{N}(x):=f(|Nx|/N)\). 
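As a concrete illustration of this discrete setting, the following minimal sketch realizes the semigroup (7.2) numerically in dimension \(d=1\). Whatever the split between the kernel \(s^{N}\) and the time change \(\gamma\), the combination relevant for (7.1) acts as \(e^{-(2\beta+1)t}\exp(2\alpha N^{2}t\,\Delta^{N})\), and that is what the sketch implements; the values of \(N\), \(\alpha\), \(\beta\) and the test function are illustrative choices, not taken from the model.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 1-d realization of the semigroup in (7.2): e^{-lam t} times the
# kernel generated by 2*alpha*N^2*Delta^N, i.e. the linear part of (7.1).
# N, alpha, beta and the test function are illustrative choices only.
N = 64
alpha, beta = 0.5, 1.0
lam = 2 * beta + 1

# nearest-neighbour discrete Laplacian Delta^N on the periodic lattice T^1_N
Delta = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
Delta[0, -1] = Delta[-1, 0] = 1.0

x = np.arange(N) / N            # points of (1/N) T^1_N
f = np.cos(2 * np.pi * x)       # a smooth function restricted to the lattice

def S_N(t, f):
    """Discrete semigroup e^{-lam t} exp(2 alpha N^2 t Delta^N) applied to f."""
    return np.exp(-lam * t) * (expm(2 * alpha * N**2 * t * Delta) @ f)

# For this Fourier mode the continuum semigroup gives
# exp(-(lam + 8 pi^2 alpha) t) cos(2 pi x); the discrete values should be close.
t = 0.1
print(S_N(t, f)[:3])
print(np.exp(-(lam + 8 * np.pi**2 * alpha) * t) * f[:3])
```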
We observe that if \(f\) is also Lipschitz, then \(\|f-f^{N}\|_{\mathbb{T}^{d}}\leqslant\frac{c}{N^{d}}\) for some \(c>0\) and \(\|f^{N}\|_{\mathbb{T}^{d}}\leqslant\|f\|_{\mathbb{T}^{d}}\). Then, with the same extension to \(\mathbb{T}^{d}\) for \(s^{N}\) we can write, for any \(x\in\mathbb{T}^{d}\) \[S^{N,\lambda,\gamma}_{t}f(x)=\int_{\mathbb{T}^{d}}e^{-\lambda t}N^{d}s^{N}( \gamma t,Nx,Ny)f^{N}(y)\mathrm{d}y. \tag{7.3}\] By a slight abuse of notation, we still denote by \(v^{N}\) the linear interpolation on \(\mathbb{T}^{d}\) such that \(v^{N}(t,\frac{i}{N})=v^{N}_{i}(t)\). We also redefine the function \(H\) on the torus \(\mathbb{T}^{d}\) by the linear interpolation such that \(H(\frac{i}{N},v^{N})=H(i,v^{N})\) and we define \(H^{N}\) as \(f^{N}\) in (7.3). **Definition 7.1**.: _Let \(N\in\mathbb{N}\) and \(v^{N}_{0}\in L^{\infty}(\mathbb{T}^{d})\). We say that \((v^{N}(t,x),t\geqslant 0,x\in\mathbb{T}^{d})\) is a mild solution of (7.1) if_ * _for any_ \(N\)_,_ \(v^{N}\) _is continuous from_ \(\mathbb{R}^{*}_{+}\) _to_ \(L^{\infty}(\mathbb{T}^{d})\)_,_ * _for all_ \(t>0\) _and_ \(x\in\mathbb{T}^{d}\)__ \[v^{N}(t,x)=S^{N,2\beta+1,4N^{2}\alpha}_{t}(v^{N}_{0})(x)+\int_{0}^{t}S^{N,2 \beta+1,4N^{2}\alpha}_{t-s}\Big{[}H^{N}(\cdot,v^{N}(s,\cdot))\Big{]}(x)\mathrm{ d}s\,.\] (7.4) Let \(u^{N}\) be the unique solution of (4.10), so that \(v^{N}=2u^{N}-1\) satisfies (7.1). Of course, for any \(N\) the solution \(v^{N}\) of (7.4) exists and it is unique. The proof of Theorem 7.1 is based on the representation of \(v^{N}\) as in (7.4). We define \(\widetilde{v}^{N}\) as a slight modification of (7.4), that is, \[\widetilde{v}^{N}(t,x):=S^{2\beta+1,4\alpha}_{t}(v_{0})(x)+\int_{0}^{t}S^{2 \beta+1,4\alpha}_{t-s}\Big{[}H(\cdot,v^{N}(s,\cdot))\Big{]}(x)\mathrm{d}s\,. \tag{7.5}\] **Lemma 7.2**.: _For any \(\tau>0\)_ \[\lim_{N\to+\infty}\|\widehat{v}^{N}-v^{N}\|_{[1/\tau,\tau]\times\mathbb{T}^{d}}=0. \tag{7.6}\] Proof.: Let \(\tau>0\), then \[\sup_{t\in[\frac{1}{\tau},\tau],\,x\in\mathbb{T}^{d}}\Big{|}\widehat {v}^{N}(t,x)-v^{N}(t,x)\Big{|}\leqslant\sup_{t\in[\frac{1}{\tau},\tau],\,x\in \mathbb{T}^{d}}\Big{|}S^{N,\lambda,N^{2}\gamma}_{t}v^{N}_{0}(x)-S^{\lambda, \gamma}_{t}v_{0}(x)\Big{|}\\ +\sup_{t\in[\frac{1}{\tau},\tau],\,x\in\mathbb{T}^{d}}\Big{|}\int _{0}^{t}\Big{\{}S^{N,2\beta+1,4N^{2}\alpha}_{t-s}\Big{[}H^{N}(\cdot,v^{N}(s, \cdot))\Big{]}(x)-S^{2\beta+1,4\alpha}_{t-s}\Big{[}H(\cdot,v^{N}(s,\cdot)) \Big{]}(x)\Big{\}}\mathrm{d}s\Big{|}. \tag{7.7}\] We show that the right hand side of (7.7) converges to \(0\). We detail the convergence of the second term, which is more delicate. The argument can be adapted to the first term by using Assumption 2 which ensures that \(v^{N}_{0}\) converges to \(v_{0}\) in \(C(\mathbb{T}^{d})\). We fix \(\varepsilon\in(0,\frac{1}{\tau})\) and we get that \[\Big{|}\int_{0}^{t}S^{N,2\beta+1,4N^{2}\alpha}_{t-s}\Big{[}H^{N}( \cdot,v^{N}(s,\cdot))\Big{]}(x)-S^{2\beta+1,4\alpha}_{t-s}\Big{[}H(\cdot,v^{N} (s,\cdot))\Big{]}(x)\mathrm{d}s\Big{|}\\ \leqslant\Big{|}\int_{0}^{t-\varepsilon}S^{N,2\beta+1,4N^{2} \alpha}_{t-s}\Big{[}H^{N}(\cdot,v^{N}(s,\cdot))\Big{]}(x)-S^{2\beta+1,4\alpha }_{t-s}\Big{[}H(\cdot,v^{N}(s,\cdot))\Big{]}(x)\mathrm{d}s\Big{|}+C\varepsilon\,, \tag{7.8}\] where we used that Lemma B.6 which implies that \(H(\cdot,v^{N})\) is bounded by \(1\) uniformly on \(N\). 
The integral on the right hand side of (7.8) is bounded from above by \[\int_{0}^{t-\varepsilon}e^{-(2\beta+1)(t-s)}\Big{|}\int_{\mathbb{ T}^{d}}\Big{(}N^{d}s^{N}(4N^{2}\alpha(t-s),Nx,Ny)-s(4\alpha(t-s),x,y)\Big{)}H^{N}(y,v ^{N}(s,y))\,\mathrm{d}y\,\Big{|}\,\mathrm{d}s\\ +\int_{0}^{t-\varepsilon}e^{-(2\beta+1)(t-s)}\int_{\mathbb{T}^{d} }s(4\alpha(t-s),x,y)\Big{|}H^{N}(y,v^{N}(s,y))-H(y,v^{N}(s,y))\Big{|}\, \mathrm{d}y\mathrm{d}s\,. \tag{7.9}\] Since \(\sup_{s\geqslant 0,\,y\in\mathbb{T}^{d}}\Big{|}H^{N}(y,v^{N}(s,y))-H(y,v^{N}(s,y ))\Big{|}\leqslant\frac{1}{N}\), the second integral is smaller than \(\frac{c_{\alpha,\beta}}{N}\). For the first integral, we first use that \(H(\cdot,v^{N})\) is bounded by \(1\) uniformly on \(N\) and then we operate the change of variable \(u=t-s\) which gives that it is bounded from above by \[\int_{\varepsilon}^{t}e^{-(2\beta+1)u}\int_{\mathbb{T}^{d}}\Big{|}\,N^{d}s^{N }(4N^{2}\alpha u,Nx,Ny)-s(4\alpha u,x,y)\Big{|}\,\mathrm{d}y\,\mathrm{d}u \tag{7.10}\] We now use the local central limit theorem (cf. Theorem 2.1.1 and (2.5) in [16]): let \(\rho\) be the Gaussian Kernel and \(\rho(u,x,y)=\frac{1}{u^{d/2}}\rho\Big{(}\frac{x-y}{u^{d/2}}\Big{)}\), then \[\psi_{\varepsilon,\tau,N}:=\sup_{u\in[\varepsilon,\tau],\,y\in\frac{1}{N} \mathbb{Z}^{d}}\Big{|}N^{d}p(uN^{2},Nx,Ny)-\rho(u,x,y)\Big{|}\xrightarrow{N\to+ \infty}0. \tag{7.11}\] We also observe that by symmetry the supremum in (7.11) is independent of \(x\). Moreover, by Proposition 2.4.6 in [16] we have that there exist \(c_{1},c_{2}>0\) independent of \(x,y,u\) such that \[\Big{|}N^{d}p(uN^{2},Nx,Ny)-\rho(u,x,y)\Big{|}\leqslant\frac{c_{1}}{u^{\frac{ d}{2}}}e^{-c_{2}\frac{|x-y|^{2}}{u}}. \tag{7.12}\] In such a way, for any \(M>0\) fixed we write \(\mathbb{T}^{d}=B_{x}(M)\cup B_{x}(M)^{c}\) and we get that (7.10) is smaller than \[c_{d}\int_{\varepsilon}^{t}e^{-(2\beta+1)u}\Big{(}M^{d}\psi_{N,\varepsilon, \tau}+\frac{c_{1}}{M^{d}u^{\frac{d}{2}}}e^{-c_{2}\frac{M^{2d}}{u}}\Big{)}\, \mathrm{d}s\leqslant C_{d}\Big{(}M^{d}\psi_{N,\varepsilon,\tau}+\frac{1}{M^{d} }\Big{)}, \tag{7.13}\] where \(c_{d},C_{d}>0\) are two positive constants that depend only on the dimension \(d\). We conclude that the right hand side of (7.8) is bounded by \(\frac{c_{\alpha,\beta}}{N}+cM^{d}\psi_{N,\varepsilon,\tau}+\frac{c}{M^{d}}+C\varepsilon\), uniformly on \(x\in\mathbb{T}^{d}\) and \(t\in[\frac{1}{\tau},\tau]\). Therefore, by taking the limit on \(N\to+\infty\) and then on \(M\to+\infty\) and \(\varepsilon\to 0\) we conclude the proof. Proof of Theorem 7.1.: We control \(\widehat{v}^{N}\) to get the convergence of \(v^{N}\). We observe that since \(H(\cdot,v^{N}(s,\cdot))\) is uniformly bounded so that by Proposition 6.1\(\widehat{v}^{N}\) is uniformly bounded in \(N\), uniformly continuous on \([1/\tau,\tau]\times\mathbb{T}^{d}\), and the modulus of continuity only depends on \(\tau>1\). By the Ascoli-Arzela Theorem, the sequence \((\widehat{v}^{N})_{N}\) is pre-compact on \(C([1/\tau,\tau]\times\mathbb{R}^{d})\) and therefore, by Lemma (7.2), \((v^{N})\) also. By a diagonal argument, we can extract from \((v^{N})\) a subsequence converging uniformly to a limit \(v_{\infty}\) in \(C(]0,\infty[\times\mathbb{R}^{d})\), uniformly on each compact. Using Corollary B.4, we can adapt the argument used in the proof of Proposition 6.3 (6.25-6.30) to get that each accumulation point \(v_{\infty}\) is a mild solution of (6.23), we omit the details. ### Proof of Theorem 3.1 Theorem 3.1 is now a consequence of Theorems 4.1 and 7.1. 
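The proof above leans on the local CLT bound (7.11). The following small numerical check, in dimension one, assumes that \(p(t,0,k)\) is the transition kernel of the rate-one continuous-time simple random walk on \(\mathbb{Z}\), for which \(p(t,0,k)=e^{-t}I_{k}(t)\) with \(I_{k}\) the modified Bessel function; the values of \(N\) and \(u\) are illustrative.

```python
import numpy as np
from scipy.special import ive  # ive(k, t) = exp(-t) * I_k(t)

def p_walk(t, k):
    """Kernel of the rate-1 continuous-time simple random walk on Z."""
    return ive(k, t)

def rho_gauss(u, x):
    """Gaussian kernel with variance u, as in (7.11)."""
    return np.exp(-x**2 / (2.0 * u)) / np.sqrt(2.0 * np.pi * u)

u = 0.5
for N in (10, 50, 250):
    k = np.arange(-4 * N, 4 * N + 1)
    err = np.max(np.abs(N * p_walk(u * N**2, k) - rho_gauss(u, k / N)))
    print(f"N = {N:4d}:  sup_k |N p(uN^2,0,k) - rho(u,k/N)| = {err:.1e}")
# The error decreases with N, consistent with psi_{eps,tau,N} -> 0 in (7.11).
```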
## Appendix A Concentration inequalities

We follow the definitions in Jara and Menezes [13] and [14] and Boucheron, Lugosi, Massart [1], Section 2.3. We omit the proofs since they are present in the references.

**Definition A.1** ([13] and [1], Section 2.3).: _Let \(X\) be a real random variable. \(X\) is said to be sub-Gaussian with variance parameter \(\sigma^{2}\) if, for all \(t\in\mathbb{R}\)_ \[\psi_{X}(t):=\log\mathbb{E}(\exp(tX))\leq\sigma^{2}\frac{t^{2}}{2}.\] (A.1) _We denote by \(\mathcal{G}(\sigma^{2})\) the set of real sub-Gaussian random variables with variance parameter \(\sigma^{2}\)._

**Proposition A.1** ([1] and [14], Proposition F.7).: _The following statements are equivalent:_ 1. \(X\in\mathcal{G}(\sigma^{2})\)__ 2. \(\mathbb{P}(|X|>t)\leq 2\exp(-\frac{t^{2}}{2\sigma^{2}})\)__ 3. \(\mathbb{E}(\exp(\gamma X^{2}))\leq 3\) _for all_ \(0\leq\gamma<\frac{1}{4\sigma^{2}}\)_._

Let us complete our family of inequalities:

**Lemma A.2** ([14], Proposition F.8).: _Let \(X\in\mathcal{G}(\sigma_{1}^{2})\) and \(Y\in\mathcal{G}(\sigma_{2}^{2})\). Then, for all \(0\leq\gamma<\frac{1}{4\sigma_{1}\sigma_{2}}\),_ \[\mathbb{E}(\exp(\gamma XY))\leq 3.\]

**Lemma A.3** ([14], Proposition F.12).: _Let \(X_{1},\ldots,X_{n}\) be random variables with \(X_{i}\in\mathcal{G}(\sigma_{i}^{2})\), and suppose that there is a partition into \(K\) subsets \(P_{1},\ldots,P_{K}\), each containing \(L\) variables, such that the \(\sigma\)-algebras \(\Sigma_{k}=\sigma(X_{i},i\in P_{k})\) are independent. Then, for all real \(\alpha_{1},\ldots,\alpha_{n}\), the random variable \(Y=\sum_{i}\alpha_{i}X_{i}\) is sub-Gaussian with variance parameter \(L\sum_{i}\alpha_{i}^{2}\sigma_{i}^{2}\)._

Note that if \(L=1\), the variables are independent.

**Lemma A.4** (Hoeffding Inequality, [1], Section 2.3).: _Let \(X\) be a bounded random variable with \(X\in[a,b]\), then \(X-\mathbb{E}X\in\mathcal{G}\left(\frac{(b-a)^{2}}{4}\right)\)._

## Appendix B Controls for the non-linearities

In this section, we collect some results about the specifics of our model. Recall the notations: for \(\eta\in\{0,1\}^{\mathbb{T}^{d}_{N}}\) \[c_{0}^{+}(\eta):=\mathbbm{1}_{\{\rho_{0}(\eta)\geqslant 1-\frac{\kappa_{N}}{K_{N}}\}},\qquad c_{0}^{-}(\eta):=\mathbbm{1}_{\{\rho_{0}(\eta)\leqslant\frac{\kappa_{N}}{K_{N}}\}},\] and \(\kappa_{N}=\kappa_{N}(T):=\min\left\{[K_{N}T]-1;\left[K_{N}(1-T)\right]\right\}\). For any function \(u=(u_{i})_{i\in\mathbb{T}^{d}_{N}}\), we define \(\upsilon_{u}(\mathrm{d}\eta):=\bigotimes_{i\in\mathbb{T}^{d}_{N}}\mathrm{B}(u_{i})\), where \(\mathrm{B}(u_{i})\) denotes a Bernoulli distribution with parameter \(u_{i}\), and we let \(c_{0}^{+}(u)\) and \(c_{0}^{-}(u)\) be the expectations of \(c_{0}^{+}(\eta)\) and \(c_{0}^{-}(\eta)\) under \(\upsilon_{u}\). We set \((\tau_{i}\eta)_{j}=\eta_{i+j}\), and likewise \(\tau_{i}\) acts on \(u\). Then, \[G(u):=(1-u_{0})c_{0}^{+}(u)-u_{0}c_{0}^{-}(u)=(1-u_{0})\mathbb{P}_{\upsilon_{u}}\left[\rho_{0}(\eta)\geqslant 1-\frac{\kappa_{N}}{K_{N}}\right]-u_{0}\mathbb{P}_{\upsilon_{u}}\left[\rho_{0}(\eta)\leqslant\frac{\kappa_{N}}{K_{N}}\right]\] (B.1) and \(G(i,u):=G(\tau_{i}u)\). We start with some results on the non-linearity \(G\):

**Proposition B.1**.: _Let \(u,v\in[0,1]^{\mathbb{T}^{d}_{N}}\) such that \(u_{i}\geqslant v_{i}\) for all \(i\in\mathcal{V}_{N}\) and \(u_{0}=v_{0}\). 
Then \(G(0,u)\geqslant G(0,v)\)._ Proof.: We construct a coupling between \(\upsilon_{u}\) and \(\upsilon_{v}\): let \((U_{i})\) be independent and identically distributed random variables uniform on \([0,1]\). Define \(\eta_{i}=\mathbbm{1}_{\{U_{i}\leqslant u_{i}\}}\) and \(\eta^{\prime}_{i}=\mathbbm{1}_{\{U_{i}\leqslant v_{i}\}}\). We have \(\eta_{i}\geqslant\eta^{\prime}_{i}\) for all \(i\in\mathcal{V}_{N}\), therefore \(\rho_{0}(\eta)\geqslant\rho_{0}(\eta^{\prime})\). This proves that \(c_{0}^{+}(u)\geqslant c_{0}^{+}(v)\) and \(c_{0}^{-}(u)\leqslant c_{0}^{-}(v)\). The results follows since \(u_{0}=v_{0}\). For \(p\in[0,1]\), \(T\in[0,1]\), \(K\in\mathbb{N}^{*}\), we let \(\kappa(K,T)=\min\left\{[KT]-1;\left[K(1-T)\right]\right\}\leqslant K/2\), and we define \(g_{K}(p)\) as \[g_{K}(p):=(1-p)\mathbb{P}_{p}\big{[}X>K-\kappa(K,T)\big{]}-p\mathbb{P}_{p} \big{[}X\leqslant\kappa(K,T)\big{]}\] where \(X\) is a random variable with binomial distribution with parameter \((K,p)\) under \(\mathbb{P}_{p}\). In particular, we have that \(g_{K_{N}}(p)=G(i,u)\) for \(u(t,i)=p\) for all \(i\in\mathbb{T}^{d}_{N}\). Note that \(g_{K}\) is \(C^{\infty}([0,1])\). We also have \[g_{K}(p) =(1-p)\mathbb{P}_{1-p}\big{[}X<\kappa(K,T)\big{]}-p\mathbb{P}_{p} \big{[}X\leqslant\kappa(K,T)\big{]}\] (B.2) \[=\sum_{k=0}^{\kappa(K,T)-1}\binom{K}{k}\left[(1-p)^{k+1}p^{K-k}-p^ {k+1}(1-p)^{K-k}\right]\] \[\quad-\binom{K}{\kappa(K,T)}p^{\kappa(K,T)+1}(1-p)^{K-\kappa(K,T)}.\] We recall \(g_{\infty}(p):=(1-p)\mathbbm{1}_{\{1-p<p_{0}(T)\}}-p\mathbbm{1}_{\{p\leqslant p _{0}(T)\}}\), where \(p_{0}(T)=\min(T,1-T)\). The following proposition estimates the convergence of \(g_{K}\) to \(g_{\infty}\), in particular we prove that close to \(p=0\) (resp. \(p=1\)), \(g_{K}\) is negative (resp. positive). **Proposition B.2**.: _For all \(K\in\mathbb{N}^{*}\cup\{\infty\}\), we have \(g_{K}(0)=g_{K}(1)=0\). For all \(K\in\mathbb{N}^{*}\) and \(p\in[0,1]\), we have, for \(|p-p_{0}(T)|>\frac{1}{K}\) and \(|(1-p)-p_{0}(T)|>\frac{1}{K}\)_ \[|g_{K}(p)-g_{\infty}(p)|\leqslant 2e\exp(-2K(p_{0}(T)-p)^{2})+2e\exp(-2K(p_{0}(T )-(1-p))^{2})\] (B.3) _In particular, for any \(0<\delta<\frac{1}{4e}\), \(T\in[4\delta,1-4\delta]\), and \(K>\frac{|\log(\delta)|}{4\delta^{2}}\), we have that \(g_{K}(2\delta)<-\delta\) and \(g_{K}(1-2\delta)>\delta\)._ Proof.: We consider \(g_{K}\) written as in (B.2). Then, the values at \(p=0\) and \(p=1\) are obvious. 
Note that, under \(\mathbb{P}_{p}\), \(\frac{X}{K}-p\) converges to \(0\) (in \(L^{2}(\Omega_{N})\)) and is sub-Gaussian with variance parameter \(\frac{1}{4K}\), thus, for any \(t>0\), \[\mathbb{P}_{p}\left(\left|\frac{X}{K}-p\right|>t\right)\leq 2\exp(-2Kt^{2}).\] We also have that \[0\leq p_{0}(T)-\frac{\kappa(K,T)}{K}\leq\frac{1}{K}.\] Then, for \(p>p_{0}(T)\), \[\mathbb{P}_{p}\big{[}X\leq\kappa(K,T)\big{]} =\mathbb{P}_{p}\left[p-\frac{X}{K}\geq p-p_{0}(T)+p_{0}(T)-\frac{ \kappa(K,T)}{K}\right]\leq\mathbb{P}_{p}\left[p-\frac{X}{K}\geq p-p_{0}(T)\right]\] \[\leq 2\exp(-2K(p-p_{0}(T))^{2}),\] and for \(p<p_{0}(T)-\frac{1}{K}\), \[\mathbb{P}_{p}[X>\kappa(K,T)] =\mathbb{P}_{p}\left[\frac{X}{K}-p>\frac{\kappa(K,T)}{K}-p_{0}(T )+p_{0}(T)-p\right]\] \[\leq\mathbb{P}_{p}\left[\frac{X}{K}-p\geq p_{0}(T)-p-\frac{1}{K} \right]\leq 2\exp(-2K(p_{0}(T)-p-1/K)^{2})\] \[\leq 2e^{2(p_{0}(T)-p)}\exp(-2K(p_{0}(T)-p)^{2})\leq 2e\exp(-2K (p_{0}(T)-p)^{2}).\] Thus, we have the following, for \(|p-p_{0}(T)|>\frac{1}{K}\), \[\big{|}\mathbb{P}_{p}[X\leq\kappa(K,T)]-\mathbb{1}_{\{p\leq p_{0}(T)\}}\big{|} <2e\exp(-2K(p_{0}(T)-p)^{2}).\] The results follows with the same estimates with \((1-p)\) instead of \(p\). This prove (B.3). Let \(\delta>0\) and set \(p=2\delta\), and \(T\in[4\delta,1-4\delta]\), then \(|2\delta-p_{0}(T)|\leq 2\delta\) for \(K>\frac{1}{2\delta}\) and we have the same for \(1-p=1-2\delta\). Applying the result, we have \[|g_{K}(2\delta)-g_{\infty}(2\delta)|=|g_{K}(2\delta)+2\delta|\leq 4e\exp(-8K \delta^{2})\] Then, if \(4e\exp(-8K\delta^{2})<\delta\), we get the result. This happens if \(\exp(-8K\delta^{2})<\delta^{2}\) and \(\delta<\frac{1}{4e}\), which gives the condition on \(K\). **Proposition B.3**.: _Let \(v:\mathbb{T}^{d}\to[0,1]\) be a continuous fixed density on the torus. For any \(i\in\mathbb{T}^{d}_{N}\) we let \(u_{i}:=v_{i/N}\). Then as \(i/N\to x\), we have that \(u_{i}\) converges to \(v_{x}\). Let \(v_{u}=\bigotimes_{i\in\mathbb{T}^{d}_{N}}\mathrm{B}\big{(}u_{i}\big{)}\). Then, \(\rho_{0}(\eta)\) converges to \(v_{0}\) in probability._ Proof.: We use the coupling introduced in Lemma B.1: we let \((U_{i})\) be i.i.d uniform random variables on \([0,1]\) so that under \(v_{u}\) we have that \(\rho_{0}(\eta)\) and \(\frac{1}{K_{N}}\sum_{i\in\mathcal{V}_{N}}\mathbb{1}_{\{U_{i}<u_{i}\}}\) are equal in law. By the Tchebychev inequality, we have that for any \(\varepsilon>0\) \[\mathbb{P}\Bigg{(}\Big{|}\frac{\sum_{i\in\mathcal{V}_{N}}\mathbb{1}_{\{U_{i}< u_{i}\}}}{K_{N}}-\frac{\sum_{i\in\mathcal{V}_{N}}u_{i}}{K_{N}}\Big{|}>\varepsilon \Bigg{)}\leq\frac{1}{4K_{N}\varepsilon^{2}}.\] Moreover, the sequence \(K_{N}^{-1}\sum_{i\in\mathcal{V}_{N}}u_{i}\) converges to \(v_{0}\) because for any \(i\in\mathcal{V}_{N}\), \(i/N\to 0\). We have the following corollary. **Corollary B.4**.: _Let \(u=(u_{i})_{i\in\mathbb{T}_{N}^{d}}\) as in Proposition B.3. Then, \(G(u)\) and \(g_{K_{N}}(v_{0})\) converge both to \(g_{\infty}(v_{0})\) as \(N\to+\infty\)._ Recall that, for \(q\in[-1,1]\), \[h_{K}(q)=2g_{K}\left(\frac{1}{2}(q+1)\right)+q=(1-q)\mathbb{P}_{\frac{1-q}{2}} \big{[}X<\kappa(K,T)\big{]}-(1+q)\mathbb{P}_{\frac{1+q}{2}}\big{[}X\leq\kappa( K,T)\big{]}+q.\] Recall the critical parameter \(\rho:=\rho(T)=\big{|}T-\frac{1}{2}\big{|}=\frac{1}{2}-p_{0}(T)\in\big{[}0, \frac{1}{2}\big{]}\). Note that \(h_{K}\) converges pointwise to the function \(h_{\infty}(q)=2g_{\infty}\left(\frac{1+q}{2}\right)+q=-\mathbb{1}_{q\leq-2 \rho}+q\mathbb{1}_{-2\rho<q\leq 2\rho}+\mathbb{1}_{q>2\rho}\). 
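These binomial expressions are straightforward to evaluate numerically. The following sketch computes \(g_{K}\) and \(h_{K}\) from the formulas above and checks the sign conditions of Proposition B.2 as well as the pointwise convergence to \(g_{\infty}\), for illustrative values of \(T\) and \(\delta\).

```python
import numpy as np
from scipy.stats import binom

def kappa(K, T):
    """kappa(K, T) = min(floor(K*T) - 1, floor(K*(1-T))), as defined above."""
    return min(int(np.floor(K * T)) - 1, int(np.floor(K * (1 - T))))

def g_K(p, K, T):
    """(1-p) P_p[X > K - kappa] - p P_p[X <= kappa] with X ~ Bin(K, p)."""
    k = kappa(K, T)
    return (1 - p) * binom.sf(K - k, K, p) - p * binom.cdf(k, K, p)

def h_K(q, K, T):
    """h_K(q) = 2 g_K((1+q)/2) + q."""
    return 2 * g_K((1 + q) / 2, K, T) + q

def g_inf(p, T):
    p0 = min(T, 1 - T)
    return (1 - p) * (1 - p < p0) - p * (p <= p0)

T, delta = 0.3, 0.05                                  # illustrative; rho = |T - 1/2| = 0.2
K = int(abs(np.log(delta)) / (4 * delta**2)) + 1      # K > |log(delta)| / (4 delta^2)
print(g_K(2 * delta, K, T), "should be < -delta =", -delta)
print(g_K(1 - 2 * delta, K, T), "should be > delta =", delta)
for p in (0.1, 0.45, 0.9):
    print(f"p = {p}:  g_K = {g_K(p, K, T):+.3f},  g_inf = {g_inf(p, T):+.3f}")
for q in (-0.9, -0.1, 0.1, 0.9):                      # away from +-2 rho, h_K ~ h_inf
    print(f"q = {q}:  h_K = {h_K(q, K, T):+.3f}")
```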
The points \(q=\pm 2\rho\) are the discontinuities of \(h_{\infty}\), and compare the Lemma to the fact that \(2\rho=h_{\infty}(2\rho^{-})\) and \(1=h_{\infty}(2\rho^{+})\). A similar estimate holds at \(q=-2\rho\). **Lemma B.5**.: _For all \(q\in[-1,1]\), \(|h_{K}(q)|\leq 1\). Moreover, for all \(\varepsilon>0\) such that \(2\rho-\varepsilon>0\), there is \(K_{0}>0\) such that for \(K\geq 0\),_ \[2\rho-2\varepsilon=h_{\infty}(2\rho^{-})-2\varepsilon\leq h_{K}(q)\leq h_{\infty}(2\rho^{+})=1 \text{for }q\in[2\rho-\varepsilon,2\rho+\varepsilon]\] \[-1=h_{\infty}(-2\rho^{-})\leq h_{K}(q)\leq h_{\infty}(-2\rho^{+})+2 \varepsilon=-2\rho+2\varepsilon\text{\quad\quad for }q\in[-2\rho-\varepsilon,-2\rho+\varepsilon]\] Proof.: We start by observing that for all \(q\in[-1,1]\), \(\mathbb{P}_{\frac{1-q}{2}}\big{[}X<\kappa(K,T)\big{]}=\mathbb{P}_{\frac{1+q}{2 }}\big{[}X>K-\kappa(K,T)\big{]}\). We also have that \[1=\mathbb{P}_{\frac{1+q}{2}}\big{[}X\leq\kappa(K,T)\big{]}+\mathbb{P}_{\frac{1 +q}{2}}\big{[}\kappa(K,T)<X\leq K-\kappa(K,T)\big{]}+\mathbb{P}_{\frac{1+q}{2 }}\big{[}X>K-\kappa(K,T)\big{]}.\] Thus, using the definiton of \(h_{K}\), we get \[h_{K}(q)= -\mathbb{P}_{\frac{1+q}{2}}\big{[}X\leq\kappa(K,T)\big{]}+q \mathbb{P}_{\frac{1+q}{2}}\big{[}\kappa(K,T)<X\leq K-\kappa(K,T)\big{]}\] \[+\mathbb{P}_{\frac{1+q}{2}}\big{[}X>K-\kappa(K,T)\big{]}.\] This gives us the result. Indeed, for \(q\in[0,1]\), we have \[h_{K}(q) \leq q\mathbb{P}_{\frac{1+q}{2}}\big{[}\kappa(K,T)<X\leq K- \kappa(K,T)\big{]}+\mathbb{P}_{\frac{1+q}{2}}\big{[}X>K-\kappa(K,T)\big{]}\] \[\leq\mathbb{P}_{\frac{1+q}{2}}\big{[}\kappa(K,T)<X\big{]}\leq 1\] and \[h_{K}(q) \geq-\mathbb{P}_{\frac{1+q}{2}}\big{[}X\leq\kappa(K,T)\big{]}+ \mathbb{P}_{\frac{1+q}{2}}\big{[}X>K-\kappa(K,T)\big{]}\] \[\geq\mathbb{P}_{\frac{1-q}{2}}\big{[}X<\kappa(K,T)\big{]}-\mathbb{ P}_{\frac{1+q}{2}}\big{[}X\leq\kappa(K,T)\big{]}\geq-\mathbb{P}_{\frac{1+q}{2 }}\big{[}X=\kappa(K,T)\big{]}.\] The last inequality comes form the fact that, by a coupling argument, \(p\mapsto\mathbb{P}_{p}\big{[}X<\kappa(K,T)\big{]}\) is decreasing on \([0,1]\), and since \(q\geq 0\), \(\frac{1-q}{2}\leq\frac{1+q}{2}\). In particular, for \(q\in[2\rho-\varepsilon,2\rho+\varepsilon]\) such that \(2\rho-\varepsilon>0\), we have the same upper bound as before for \(h_{K}(q)\), and for the lower bound: \[h_{K}(q) \geq-\mathbb{P}_{\frac{1+q}{2}}\big{[}X\leq\kappa(K,T)\big{]}+q \mathbb{P}_{\frac{1+q}{2}}\big{[}X>\kappa(K,T)\big{]}\] \[\geq 2\rho-\varepsilon-2\mathbb{P}_{\frac{1+q}{2}}\big{[}X\leq \kappa(K,T)\big{]}.\] From the proof of Proposition B.2, we have that, for \(K\geq K_{0}\): \[\mathbb{P}_{\frac{1+q}{2}}\big{[}X\leq\kappa(K,T)\big{]}\leq 2\exp\big{(}-K(q+2 \rho)^{2}/2\big{)}\leq 2e^{-2K_{0}\rho^{2}}\] Choosing \(K_{0}\) large enough such that the right hand side is less than \(\varepsilon/2\), we get the result. For \(q\in[-1,0]\), the proof is completely similar. **Lemma B.6**.: _Let \(u=(u_{i})_{i\in\mathbb{T}_{N}^{d}}\), with \(u_{i}\in[0,1]\) and let \(v=2u-1\). Then, \(|H(i,v)|\leqslant 1\), uniformly on \(i\in\mathbb{T}_{N}^{d}\)._ Proof.: We recall (4.9), in particular that \(G(i,u)=G(\tau_{i}u)\). So that we only prove that \(|H(v)|\leqslant 1\), where \(H(v)=G((v+1)/2)+v_{0}\). The proof is similar to Lemma B.5. 
Indeed, again by (4.9) we have that \[2G\left(\frac{v+1}{2}\right)+v_{0} =c_{0}^{+}\left(\frac{v+1}{2}\right)(1-v_{0})-c_{0}^{-}\left( \frac{v+1}{2}\right)(1+v_{0})+v_{0}\] \[=c_{0}^{+}\left(\frac{v+1}{2}\right)-c_{0}^{-}\left(\frac{v+1}{2 }\right)+v_{0}\left(1-c_{0}^{+}\left(\frac{v+1}{2}\right)-c_{0}^{-}\left( \frac{v+1}{2}\right)\right).\] We observe that, by definition, \(1-c_{0}^{+}\left(\frac{v+1}{2}\right)-c_{0}^{-}\left(\frac{v+1}{2}\right)>0\). This implies that \(-1\leqslant H(v)\leqslant 1\).
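For completeness, the rates in (B.1) can also be evaluated by direct Monte Carlo sampling under \(\upsilon_{u}\). The sketch below does so, assuming that \(\rho_{0}(\eta)\) is the empirical density of \(\eta\) over the \(K_{N}\) sites of \(\mathcal{V}_{N}\), as in the coupling used for Proposition B.3; \(T\), \(K_{N}\) and the sample size are illustrative. For a constant profile \(u\equiv p\) the output should agree with \(g_{K_{N}}(p)\) computed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of E[c_0^+], E[c_0^-] and G(u) in (B.1), assuming
# rho_0(eta) is the empirical density over the K_N sites of V_N.
T, K_N, samples = 0.3, 200, 20000
kappa_N = min(int(np.floor(K_N * T)) - 1, int(np.floor(K_N * (1 - T))))

def G_MC(u0, u_nbhd):
    eta = rng.random((samples, K_N)) < u_nbhd        # eta_i ~ Bernoulli(u_i), i in V_N
    rho0 = eta.mean(axis=1)
    c_plus = np.mean(rho0 >= 1 - kappa_N / K_N)      # estimate of c_0^+(u)
    c_minus = np.mean(rho0 <= kappa_N / K_N)         # estimate of c_0^-(u)
    return (1 - u0) * c_plus - u0 * c_minus

# constant profile u = p: should be close to g_{K_N}(p)
for p in (0.1, 0.5, 0.9):
    print(p, G_MC(p, np.full(K_N, p)))
```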
2307.06353
High-energy electromagnetic, neutrino, and cosmic-ray emission by stellar-mass black holes in disks of active galactic nuclei
Some Seyfert galaxies are detected in high-energy gamma rays, but the mechanism and site of gamma-ray emission are unknown. Also, the origins of the cosmic high-energy neutrino and MeV gamma-ray backgrounds have been veiled in mystery since their discoveries. We propose emission from stellar-mass BHs (sBHs) embedded in disks of active galactic nuclei (AGN) as their possible sources. These sBHs are predicted to launch jets due to the Blandford-Znajek mechanism, which can produce intense electromagnetic, neutrino, and cosmic-ray emissions. We investigate whether these emissions can be the sources of cosmic high-energy particles. We find that emission from internal shocks in the jets can explain gamma rays from nearby radio-quiet Seyfert galaxies including NGC1068, if the Lorentz factor of the jets ($\Gamma_{\rm j}$) is high. On the other hand, for moderate $\Gamma_{\rm j}$, the emission can significantly contribute to the background gamma-ray and neutrino intensities in the $\sim {\rm MeV}$ and $\lesssim {\rm PeV}$ bands, respectively. Furthermore, for moderate $\Gamma_{\rm j}$ with efficient amplification of the magnetic field and cosmic-ray acceleration, the neutrino emission from NGC1068 and the ultrahigh-energy cosmic rays can be explained. These results suggest that the neutrino flux from NGC1068 as well as the background intensities of ${\rm MeV}$ gamma rays, neutrinos, and the ultrahigh-energy cosmic rays can be explained by a unified model. Future MeV gamma-ray satellites will test our scenario for neutrino emission.
Hiromichi Tagawa, Shigeo S. Kimura, Zoltán Haiman
2023-07-12T18:00:00Z
http://arxiv.org/abs/2307.06353v1
High-Energy Electromagnetic, Neutrino, and Cosmic-Ray Emission by Stellar-Mass Black Holes in Disks of Active Galactic Nuclei ###### Abstract Some Seyfert galaxies are detected in high-energy gamma rays, but the mechanism and site of gamma-ray emission are unknown. Also, the origins of the cosmic high-energy neutrino and MeV gamma-ray backgrounds have been veiled in mystery since their discoveries. We propose emission from stellar-mass BHs (sBHs) embedded in disks of active galactic nuclei (AGN) as their possible sources. These sBHs are predicted to launch jets due to the Blandford-Znajek mechanism, which can produce intense electromagnetic, neutrino, and cosmic-ray emissions. We investigate whether these emissions can be the sources of cosmic high-energy particles. We find that emission from internal shocks in the jets can explain gamma rays from nearby radio-quiet Seyfert galaxies including NGC1068, if the Lorentz factor of the jets (\(\Gamma_{\rm j}\)) is high. On the other hand, for moderate \(\Gamma_{\rm j}\), the emission can significantly contribute to the background gamma-ray and neutrino intensities in the \(\sim\) MeV and \(\lesssim\) PeV bands, respectively. Furthermore, for moderate \(\Gamma_{\rm j}\) with efficient amplification of the magnetic field and cosmic-ray acceleration, the neutrino emission from NGC1068 and the ultrahigh-energy cosmic rays can be explained. These results suggest that the neutrino flux from NGC1068 as well as the background intensities of MeV gamma rays, neutrinos, and the ultrahigh-energy cosmic rays can be explained by a unified model. Future MeV gamma-ray satellites will test our scenario for neutrino emission. Subject headings:Stellar mass black holes (1611), Active galactic nuclei (16), Accretion (14), Black hole physics (159), Jets (870), Galactic center (565) ## 1. Introduction Our Universe is filled with high-energy particles, including charged particles (cosmic rays; CRs), neutrinos, and gamma rays, but the origins of these cosmic high-energy particles are unknown. Recently, significant progress have been made in high-energy neutrino astrophysics. IceCube reported the detection of extraterrestrial neutrinos in 2013 (Aartsen et al., 2013), and has been improving the measurement of the cosmic high-energy neutrino background for 10 TeV-100 PeV (Aartsen et al., 2015, 2020). They also identified a nearby Seyfert galaxy, NGC 1068, as a cosmic neutrino source (Aartsen et al., 2020); IceCube Collaboration et al. 2022). In addition, they reported a hint of association between neutrino signals and radio-quiet active galactic nuclei (AGN) (Abbasi et al., 2022). However, possible mechanisms for CR acceleration and subsequent neutrino production sites in radio quiet AGN remain unclear. This has motivated investigations of non-thermal phenomena in hot accretion flows (Kimura et al., 2021; Gutierrez et al., 2021), hot coronae (Murase et al., 2020; Eichmann et al., 2022), accretion shocks (Inoue et al., 2020), disk winds (Inoue et al., 2022), and jets from accreting binaries (Sridhar et al., 2022). High-energy gamma rays (\(>100\) MeV) have also been detected from radio-quiet AGNs (Wojaczynski et al., 2015), but the origin of these gamma-rays are also controversial. The majority of these gamma-ray detected AGNs exhibit signatures of starburst activity, which causes gamma-ray production via hadronuclear interactions (e.g., Ajello et al., 2020). The neutrino-emitting AGN, NGC 1068, is also detected in high-energy gamma rays and shows starburst activity. 
On the other hand, the gamma-ray signals from NGC 4945 show spectral variation correlated with its X-ray flux, which implies that gamma rays may be associated with AGN activity (Wojaczynski and Niedzwiecki, 2017). AGN wind interacting with the dusty torus can be a possible site of high-energy gamma-ray production (Inoue et al., 2022). Finally, the origins of the unresolved cosmic MeV gamma-ray background and ultrahigh-energy cosmic rays (UHECRs) have been unknown for a long time (Inoue, 2014; Alves Batista et al., 2019). Radio-quiet AGNs have been proposed as candidates for both the cosmic MeV gamma-ray background (e.g., Inoue et al., 2013; Kimura et al., 2021) and a source of UHECRs (PeEr et al., 2009). Here, we propose a novel scenario for high-energy emission from radio-quiet AGNs, where we consider relativistic jets launched from stellar-mass black holes (sBHs) embedded in AGN disks. It has been predicted that stars and compact objects (COs) including stellar-mass black holes (sBHs) are embedded in AGN disks due to capture by dynamical interactions of nuclear star clusters (Miralda-Escude & Gould, 2000; Lu et al., 2013) with the AGN disk (Ostriker, 1983; Syer et al., 1991) and in-situ star formation (Levin & Beloborodov, 2003; Goodman & Tan, 2004; Thompson et al., 2005; Levin, 2007). There are several observations supporting this picture (Artymowicz et al., 1993; Levin & Beloborodov, 2003; Milosavljevic & Loeb, 2004; Tagawa et al., 2020). Recently, the evolution of COs in AGN disks has attracted significant attention as these are promising environments for some of the sBH-sBH (e.g. Bartos et al., 2017; Stone et al., 2017; McKernan et al., 2018; Yang et al., 2019; Tagawa et al., 2020) and sBH-neutron star (NS) mergers (McKernan et al., 2020; Tagawa et al., 2021) reported as gravitational wave (GW) events by LIGO/Virgo/KAGRA (Abbott et al., 2021; The LIGO Scientific Collaboration et al., 2021). Many recent studies in the wake of the LIGO/Virgo/KAGRA discoveries have investigated emission from transients emerging in AGN disks (McKernan et al., 2019; Graham et al., 2020; Perna et al., 2021, 2021; Zhu et al., 2021, 2021; Yang et al., 2022; Moranchel-Basurto et al., 2021; Grishin et al., 2021; Kimura et al., 2021; Wang et al., 2021, 2022). Closest to the present study, Wang et al. (2021) considered emission from shocks emerging from interactions of Blandford-Znajek (BZ) jets, (Blandford & Znajek, 1977) launched from accreting sBHs, with gas in the AGN's broad line region. In Tagawa et al. (2022) (Paper I), we estimated the structure of the cavity created by the BZ jet and the dynamical evolution of gas around the BHs. In Tagawa et al. (2023) (Paper II), we investigated the properties of emission released when a jet, launched from merging sBHs embedded in an AGN disk, breaks out from the disk. In this paper, we consider in more detail the high-energy radiation from jets launched from solitary sBHs in AGN disks. Non-thermal electrons accelerated at the internal shock emit broadband electromagnetic radiation via synchrotron and inverse Compton scattering. Non-thermal protons, accelerated together with electrons, will produce neutrinos via hadronic interactions and might become cosmic rays after escaping from the system. We evaluate the possibility that gamma rays and neutrinos from nearby Seyfert galaxies, including NGC1068, are produced in such jets. We also estimate their contributions to the diffuse cosmic gamma-ray, neutrino, and UHECR background intensities. ## 2. 
Model To assess the observability of electromagnetic (EM), neutrino, and CR emission produced in shocks around the BZ jets, we first model the properties of the jets launched from rapidly accreting and spinning sBHs in AGN disks (see Fig. 1 for a schematic illustration). The jet is predicted to be launched from an accreting sBH embedded in an AGN disk (Appendix A.1). We focus on sBHs, which are expected to be common in AGN disks and presumably dominate the total jet luminosity, and we adopt the same values for our model parameters as in the fiducial model of Paper I. For these parameters, through Eq. (A1) and Eq. (1) of Paper I, the jet power of \(L_{\rm j}\sim 2\times 10^{42}\) erg/s is derived, which we adopt as a fiducial value. In the fiducial model (model M1), we set the fraction of postshock energy carried by the post-shock magnetic field and by electrons and protons to \(\epsilon_{\rm B}=0.01\), \(\epsilon_{\rm e}=0.02\), \(\epsilon_{\rm CR}=0.05\)(e.g., Panaitescu & Kumar, 2001; Santana et al., 2014; Spitkovsky, 2008; Sironi et al., 2013; Tomita et al., 2019), respectively; the power-law slope for injected electrons and protons accelerated by the first order Fermi process to \(p=2.5\); the Lorentz factor of the jet to \(\Gamma_{\rm j}=30\) as derived in Appendix A.2; the variability timescale of the jet to \(T_{\rm vari}=10^{-3}\) s; and the opening angle of the injected jet to \(\theta_{\rm j}=0.2\)(e.g. Pushkarev et al., 2009; Hada et al., 2013, 2018; Berger, 2014). Here note that the parameters \(\epsilon_{\rm B}\), \(\epsilon_{\rm e}\), \(\epsilon_{\rm CR}\), \(p\), and \(\Gamma_{\rm j}\) are highly uncertain and expected to be distributed in wide ranges of values as \(\epsilon_{\rm B}\sim 10^{-5}\)-\(0.3\), \(\epsilon_{\rm e}\sim 10^{-2}\)-\(0.5\), \(\epsilon_{\rm CR}\sim 0.05\)-\(0.2\), \(p\sim 2\)-\(3\), and \(\Gamma_{\rm j}\sim 2\)-\(100\) depending on sources (e.g. Panaitescu & Kumar, 2001; Sironi et al., 2013; Santana et al., 2014; Caprioli & Spitkovsky, 2014; Troja et al., 2019; Matsumoto et al., 2020; Caprioli et al., 2020). We also show the results with \(\Gamma_{\rm j}=4\), \(\epsilon_{\rm e}=0.05\), \(\epsilon_{\rm B}=0.005\), \(\epsilon_{\rm CR}=0.05\), and \(p=2.2\) (model M2), and those with \(\Gamma_{\rm j}=4\), \(\epsilon_{\rm e}=0.05\), \(\epsilon_{\rm B}=0.3\), \(\epsilon_{\rm CR}=0.15\), \(p=2.2\), and \(L_{\rm j}=10^{43}\) erg/s (model M3), as results are sensitive to these parameters (see Appendix A.5 for the parameter space where our non-thermal emission models are applicable. ) Note that \(L_{\rm j}=2\times 10^{42}\) erg/s and \(L_{\rm j}=10^{43}\) erg/s adopted in models M1-M2 and M3 are, respectively, predicted for the jets from sBHs at the distance from the SMBH being 1 and 0.01 pc (Paper I), where sBHs are typically accumulated due to a long migration timescale and gap formation (Tagawa et al., 2020; Perna et al., 2021). ## 3. Gamma rays During the propagation of the jet, its kinetic energy is considered to be dissipated. We assume that the fraction, \(\epsilon_{\rm CR}\) and \(\epsilon_{\rm e}\), of the postshock energy of the jet is used to accelerate protons and electrons, respectively, via the first-order Fermi process (Bell, 1978; Blandford Figure 1.— A schematic picture of emission from internal shocks in a jet launched from an sBH accreting gas in an AGN disk. & Eichler, 1987). Then, the non-thermal electrons emit broadband radiation via synchrotron and inverse Compton scattering. 
The spectral shapes of the non-thermal emission produced at internal shocks of the jet are presented in Fig. 2 (see Appendix A.3 for their derivation). In the fiducial model (M1), synchrotron emission and synchrotron-self Compton scattering, respectively, produce the bright emission in optical-MeV and in X-ray-GeV bands. In addition, synchrotron self-absorption and \(\gamma\gamma\) annihilation create upper and lower cutoffs, respectively. The emission by hadronic processes is also significantly absorbed by \(\gamma\gamma\) annihilation as discussed in Section SS 5.2. Below we discuss whether gamma-ray emission from the jets can be detected from nearby Seyfert galaxies. In addition, we discuss the contribution of the jets to the diffuse gamma-ray background. ### Emission from radio-quiet Seyfert galaxies We consider the possibility that the gamma rays from radio-quiet galaxies are originated from the BZ jets launched from sBHs in AGN disks. According to Fig. 2, the isotropic-equivalent gamma-ray luminosity from the jet at \(\sim 1\)-10 GeV (Wojaczynski et al., 2015) for model M1 is \(L_{\rm GeV,iso}\sim 3\times 10^{40}\) erg s\({}^{-1}\). Then, the intrinsic gamma-ray luminosity is \[L_{\rm GeV}=p_{\theta}L_{\rm GeV,iso}=\] \[\sim 5\times 10^{38}\ {\rm erg\ s^{-1}}\left(\frac{L_{\rm GeV,iso}}{3 \times 10^{40}\ {\rm erg\ s^{-1}}}\right)\left(\frac{\theta_{\rm j}}{0.2}\right)^{2}, \tag{1}\] where \(p_{\theta}=\theta_{\rm j}^{2}/2\) is the probability that the jet is directed towards an observer. By using _Fermi_-LAT with the sensitivity of \((E_{\gamma}F_{E_{\gamma}})_{\rm LAT}\sim 3\times 10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\) at \(\sim\) GeV (Atwood et al., 2009; Funk et al., 2013), the detectable distance for the gamma rays is \[d_{L,{\rm det}}=\left(\frac{L_{\rm GeV,iso}}{4\pi(E_{\gamma}F_{E_{\gamma}})_{ \rm sens}}\right)^{1/2}\] \[\sim 27\,{\rm Mpc}\left(\frac{L_{\rm GeV,iso}}{3\times 10^{40}\,{ \rm erg/s}}\right)^{1/2}\left(\frac{(E_{\gamma}F_{E_{\gamma}})_{\rm LAT}}{3 \times 10^{-13}\ {\rm erg\ cm^{-2}s^{-1}}}\right)^{-1/2} \tag{2}\] The detectable number of such gamma-ray sources within the distance \(d_{L}\) is roughly estimated as \[N_{\rm det}(d_{L})=p_{\theta}\frac{4\pi[\min(d_{L},d_{L,{\rm det}})]^{3}}{3}n _{\rm rotBH,acc}\] \[\sim 8\left(\frac{\theta_{\rm j}}{0.2}\right)^{2}\left(\frac{\min(d _{L},d_{L,{\rm det}})}{27\,{\rm Mpc}}\right)^{3}\left(\frac{n_{\rm rotBH,acc}} {5\times 10^{-3}\,{\rm Mpc^{-3}}}\right) \tag{3}\] where \(n_{\rm rotBH,acc}=n_{\rm AGN}N_{\rm rotBH,acc}\) is the number density of rapidly accreting and spinning sBHs in AGN disks, \(N_{\rm rotBH,acc}\) is their typical number in a single AGN disk and \(n_{\rm AGN}\) is the AGN space density. In Eq. (3), we assume that the jets (and the sBH spins) are randomly oriented (Tagawa et al., 2020,a), and we adopt \(n_{\rm AGN}=5\times 10^{-4}\,{\rm Mpc^{-3}}\) considering AGNs with X-ray luminosity of \(L_{X}\gtrsim 10^{42}\,{\rm erg/s}\)(Ueda et al., 2014). 
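The numbers entering Eqs. (1)-(2) are all quoted above, so the two estimates can be reproduced directly; a short numerical sketch follows, where the Mpc-to-cm conversion is the only input not stated in the text.

```python
import numpy as np

Mpc = 3.086e24                 # cm
L_GeV_iso = 3e40               # erg/s, isotropic-equivalent 1-10 GeV luminosity (model M1)
theta_j = 0.2                  # jet opening angle
F_LAT = 3e-13                  # erg cm^-2 s^-1, Fermi-LAT sensitivity at ~GeV

p_theta = theta_j**2 / 2                                  # beaming probability
L_GeV = p_theta * L_GeV_iso                               # Eq. (1)
d_det = np.sqrt(L_GeV_iso / (4 * np.pi * F_LAT)) / Mpc    # Eq. (2)
print(f"L_GeV ~ {L_GeV:.1e} erg/s,  d_L,det ~ {d_det:.0f} Mpc")
```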
Assuming that at any given time, the active fraction of sBHs is \(f_{\rm active}\sim 0.1\) as roughly estimated in Paper I, we adopt \[N_{\rm rotBH,acc}=N_{\rm rotBH,AGN}f_{\rm active}\] \[\sim 10\left(\frac{N_{\rm rotBH,AGN}}{100}\right)\left(\frac{f_{\rm active}}{0.2}\right), \tag{4}\] where \(N_{\rm rotBH,AGN}\sim 100\) is based on typical numbers found in models for the AGN-embedded sBH population (Tagawa et al., 2020, 2021). Although the distribution of the positions of sBHs in an AGN disk is uncertain and is influenced by phases and properties of AGNs, we assume that most sBHs are at 1 pc (models M1 and M2) and \(\sim 10\%\) of them are at 0.01 pc (model M3). Here, the former component corresponds to the sBHs captured by an AGN disk before considerable migration, while the latter corresponds to the sBHs accumulated at gaps, which are predicted to form in low opacity regions of an AGN disk (Thompson et al., 2005; Tagawa et al., 2020). The low number density of active sBHs at \(\sim 0.01\) pc is required so that the neutrino flux for model M3 does not exceed the background neutrino flux (SS 4.2), while if it is lowered further, the probability to detect neutrinos from NGC1068 by model M3 becomes low (SS 4.3). Fig. 3 compares the detectable number of gamma-ray sources predicted by the models to the number of detected gamma-ray sources from nearby radio-quiet Seyfert galaxies observed by _Fermi_-LAT. As fiducial candidates (dotted red line), we include NGC4151, NGC6814, and NGC4258, for which the likelihood ratios of the non-detection to the detection of gamma rays are \(\sim 2\times 10^{-4}\), \(\sim 10^{-7}\), and \(\sim 9\times 10^{-3}\), respectively. In the "optimistic" distribution (dotted brown), we additionally include three Seyfert galaxies with starbursts, Circinus, NGC1068, and NGC4945, as the gamma rays from these galaxies are not fully explained by starbursts (Hayashida et al., 2013; Eichmann & Becker Tjus, 2016; Wojaczynski & Niedzwiecki, 2017). From the figure, we find that the observed distribution of gamma-ray sources is consistent with our models with \(n_{\rm rotBH,acc}\sim 2\times 10^{-3}\) - \(5\times 10^{-3}\) Mpc\({}^{-3}\) and \(L_{\rm GeV,iso}\sim 10^{40}\) - \(3\times 10^{40}\) erg/s (solid black, dashed blue, and dashed orange lines).

Figure 2.— The spectral energy distribution for non-thermal emission from internal shocks of the jet for models M1 (solid), M2 (dashed), and M3 (dotted).

Figure 3.— The cumulative number of gamma-ray emitting sources predicted to be detectable by _Fermi_-LAT as a function of the luminosity distance for the fiducial model (solid black line), a model with a smaller number of accreting sBHs per AGN disk (\(N_{\rm BH,acc}=5\), dashed orange line), and that with a weaker gamma-ray luminosity (\(L_{\gamma,{\rm iso}}=10^{40}\,{\rm erg/s}\), dashed blue line). The number of gamma-ray sources, inferred from observations, in nearby radio-quiet Seyfert galaxies for the moderate and optimistic cases is shown by the dotted red and brown lines. 
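Likewise, the counting estimate in Eq. (3) follows from the quoted fiducial values; the sketch below reproduces \(N_{\rm det}\approx 8\) within \(\sim 27\) Mpc.

```python
import numpy as np

theta_j = 0.2            # jet opening angle
n_rotBH_acc = 5e-3       # Mpc^-3, density of rapidly accreting and spinning sBHs
d_det = 27.0             # Mpc, detectable distance from Eq. (2)

def N_det(d_L):
    """Eq. (3): expected number of detectable jets within luminosity distance d_L [Mpc]."""
    d = min(d_L, d_det)
    return (theta_j**2 / 2) * (4 * np.pi / 3) * d**3 * n_rotBH_acc

for d_L in (10.0, 20.0, 27.0):
    print(f"d_L = {d_L:4.0f} Mpc:  N_det ~ {N_det(d_L):.1f}")
```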
Conversely, if the gamma rays from the radio-quiet Seyfert galaxies are not originated from sBHs (e.g., \(N_{\rm det}\sim 0\)), \(n_{\rm rotBH,acc}\) can be roughly constrained to be \[n_{\rm rotBH,acc}\lesssim 5\times 10^{-4}\,{\rm Mpc}^{-3}\left( \frac{N_{\rm det}}{1}\right)\left(\frac{\theta_{\rm j}}{0.2}\right)^{-2}\] \[\left(\frac{L_{\rm GeV,iso}}{3\times 10^{40}\ {\rm erg/s}}\right)^{-3/2} \left(\frac{(E_{\gamma}E_{\gamma})_{\rm LAT}}{3\times 10^{-13}\ {\rm erg\ cm^{-2}s^{-1}}}\right)^{3/2} \tag{5}\] Note that the gamma-ray luminosity at \(\sim\) GeV bands is obscured by \(\gamma\gamma\) annihilation for low Lorentz factors (e.g. dashed and dotted lines with \(\Gamma_{\rm j}=4\) in Fig. 2). In this case, the jets from sBHs in AGN disks cannot explain the GeV gamma rays from Seyfert galaxies, and another explanation would be required. ### Cosmic gamma-ray background intensity We next estimate the contribution of gamma-ray emission from the global population of jets to the diffuse background intensity in MeV bands as \[E_{\gamma}^{2}\Phi_{\gamma} \sim f_{z}L_{\rm MeV,iso}p_{\theta}n_{\rm rotBH,acc}\frac{c}{4 \pi H_{0}}\] \[\sim 5\times 10^{-6}{\rm GeV\ cm^{-2}\ s^{-1}\ sr^{-1}}\] \[\times\left(\frac{L_{\rm MeV,iso}p_{\theta}}{2\times 10^{40}\,{ \rm erg/s}}\right)\left(\frac{n_{\rm rotBH,acc}}{5\times 10^{-3}\,{\rm Mpc}^{-3}} \right)\left(\frac{f_{z}}{2}\right) \tag{6}\] (Fig. 4a), where \(L_{\rm MeV,iso}\) is the isotropic-equivalent gamma-ray luminosity around MeV bands, \(H_{0}=67.8\) km s\({}^{-1}\) Mpc\({}^{-1}\) is the Hubble constant (Planck Collaboration et al., 2016), and \(f_{z}=2\) is a correction factor for redshift evolution (Appendix A.6). We find that the gamma-ray flux in our fiducial model is generally an order of magnitude or more below the observed background intensity. However, model M2 can explain the gamma-ray background intensity in the narrow \(\sim\) 1-10 MeV bands (dashed black line and brown points in Fig. 4a, Figure 4.— The contribution to the gamma-ray flux from NGC1068 (panel b) and the background gamma-ray intensity (panel a) by the internal shocks for models M1 (solid black), M2 (dashed black), and M3 (dotted black). In panel a, solid points represent the intensity observed by Fermi-LAT (red, Ackermann et al., 2015), COMPTEL (teal, Weidenspointner et al., 2000), SMM (brown, Watanabe et al., 1997), Swift BAT (cyan, Ajello et al., 2008), and RXTE (purple, Revnivtsev et al., 2003). Thin solid, dashed, and dotted brown lines are the intensities predicted by models for Seyfert galaxies (Gilli et al., 2007), blazars (Giommi & Padovani, 2015), and star-forming galaxies (Lacki et al., 2014), presented in De Angelis et al. (2021). In panel b, the red, blue, and purple points are the observed gamma-ray intensities adopted from Abdollahi et al. (2020), Ajello et al. (2017), and (Acciani et al., 2019), respectively. The sensitivities for future telescopes in \(\sim\) MeV bands are presented by colored dashed lines. Ackermann et al., 2015). The origin of the background in this energy range has not been understood, as other, previously proposed astrophysical contributions significantly underpredict the level of the background (see thin solid, dashed, and dotted brown lines in Fig. 4a). It is notable that our model can also explain the neutrino background intensities as shown in Section SS 4 below. ### Emission from NGC1068 We next consider whether the emission from internal shocks of the jets can explain the gamma-ray emission from NGC1068. 
Note that NGC1068 is a type II AGN, and its intrinsic X-ray luminosity is \(\sim 7\times 10^{43}\) erg/s (Marinucci et al., 2016), which is significantly brighter than the X-ray emission by the jets from sBHs in the AGN disk (Fig. 2). Fig. 4b compares the observed gamma-ray flux from NGC1068 and the predicted fluxes in models M1-M3. Model M1 (solid line) significantly contributes to the gamma-ray emission from NGC1068 between \(\sim\)100 MeV to \(\sim\)100 GeV because model M1 avoids \(\gamma\gamma\) annihilation owing to the high Lorentz factor. On the other hand, models M2 and M3 have not been constrained by the current gamma-ray observations. Also, the emission in infrared to X-ray bands is significantly absorbed by dust (although this effect is not incorporated in the predictions in Fig. 4 b) and the fluxes in the \(\sim\) keV - MeV bands receive significant contributions from coronae in the AGN. Thus, to test models M2 and M3, MeV gamma rays are useful. MeV gamma-rays can be detected with future gamma-ray telescopes, such as the Compton Spectrometer and Imager (COSI) (Tomsick et al., 2019), the All-sky Medium Energy Gamma-ray Observatory eX-plorer (AMEGO)(Caputo et al., 2022), Gamma-Ray and AntiMatter Survey (GRAMS) (Aramaki et al., 2020), eASTROGAM(De Angelis et al., 2021) and the Lunar Occultation eXplorer (LOX)(Miller et al., 2019). ## 4. UheCRs and Neutrinos In this section, we estimate whether high-energy protons and neutrinos produced from the jets can explain the observed background fluxes and the neutrino flux from NGC1068. ### UheCRs Since the adiabatic expansion is the most efficient cooling process for protons accelerated in the jets in the models, the maximum proton energy accelerated in internal shocks of the jets is given by the comparison between the acceleration timescale in the Bohm limit (in which the particle mean free path is assumed to be equal to the Larmor radius) and the expansion timescale as \[E_{p,\rm max}=E^{\prime}_{p,\rm max}\Gamma_{j}=eB^{\prime}_{j}Z_{\rm diss}\] \[\sim 10^{19}\rm eV\left(\frac{B^{\prime}_{j}}{4\times 10^{7}~{}G}\right) \left(\frac{Z_{\rm diss}}{1\times 10^{9}~{}cm}\right) \tag{7}\] where \(B^{\prime}_{j}\) is the magnetic field strength of the shocked jet, \(Z_{\rm diss}=2T_{\rm vari}\rm C^{2}_{j}\) is the typical dissipation radius for internal shocks, \(e\) is the electric charge, and the primes denote quantities in the fluid comoving frame. The maximum proton energies for models M1, M2, and M3 are, respectively, \(1\times 10^{17}\), \(7\times 10^{17}\) and \(1\times 10^{19}\) eV. It is notable that the CR production at a few \(10^{17}\) eV predicted in model M1 may be related to the observations of CRs around this energy band (Buitink et al., 2016). On the other hand, to produce high \(E_{p,\rm max}\), low \(\Gamma_{j}\) and high \(\epsilon_{\rm B}\) are required, and model M3 can explain the energy of the UHECRs. 
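As a consistency check of Eq. (7), the following sketch evaluates \(E_{p,\rm max}\) in cgs units, reading the dissipation radius as \(Z_{\rm diss}=2T_{\rm vari}c\Gamma_{\rm j}^{2}\) (our reading of the expression above) and using the model M3-like normalization of Eq. (7).

```python
e_esu = 4.803e-10        # statC (elementary charge, cgs)
c_cgs = 2.998e10         # cm/s
erg_to_eV = 1.0 / 1.602e-12

def E_p_max(B_prime, Gamma_j, T_vari=1e-3):
    """Eq. (7): E_p,max = e * B'_j * Z_diss, with Z_diss = 2 * T_vari * c * Gamma_j^2."""
    Z_diss = 2.0 * T_vari * c_cgs * Gamma_j**2   # cm
    return e_esu * B_prime * Z_diss * erg_to_eV  # eV

print(f"E_p,max ~ {E_p_max(4e7, 4):.1e} eV")     # ~1e19 eV for B'_j ~ 4e7 G, Gamma_j = 4
```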
Figure 5.— The contribution to the background intensity (panel a) and the flux from NGC1068 (panel b) for a single neutrino flavor by internal shocks for models M1 (solid black), M2 (dashed black), and M3 (dotted black). The observed neutrino intensity (Aartsen et al., 2020) is presented by red points (panel a), and the blue shaded region represents the 1, 2, and 3 \(\sigma\) uncertainty on the spectrum measured by Aartsen et al. (2020) (panel b).

The background intensity of UHECRs produced in the jets from sBHs in AGN disks is estimated as \[E_{p}^{2}\Phi_{p}\sim\frac{\epsilon_{\rm CR}L_{\rm j}}{R_{p}}n_{\rm rotBH,acc}f_{z}\frac{c}{4\pi H_{0}}\] \[\sim 1\times 10^{-7}\rm GeV~{}cm^{-2}~{}s^{-1}~{}sr^{-1}\] \[\times\left(\frac{\epsilon_{\rm CR}}{0.15}\right)\left(\frac{L_{\rm j}}{10^{43}\rm\,erg/s}\right)\] \[\times\left(\frac{n_{\rm rotBH,acc}}{5\times 10^{-4}\,\rm Mpc^{-3}}\right)\left(\frac{R_{p}}{300}\right)^{-1}\left(\frac{f_{z}}{2}\right), \tag{8}\]
Using the fraction of protons producing pions through \(pp\) and \(p\gamma\) reactions (\(f_{pp}\) and \(f_{p\gamma}\), Appendix A.4), the jet power, and the suppression factor due to pion cooling \(f_{\pi,\rm{sup}}\)(Kimura, 2022), the diffuse neutrino flux produced from internal shocks of jets can be roughly estimated as (e.g., Razzaque et al., 2004; Murase et al., 2016) \[\epsilon_{\nu}^{2}\Phi_{\nu}\sim\frac{3K}{4(1+K)}\frac{\rm{min}[1,max(f_{p\gamma},f_{pp})]f_{\pi,\rm{sup}}\epsilon_{\rm{CR}}L_{\rm{j}}}{R_{p}}\] \[n_{\rm{rotBH,acc}}f_{z}\frac{c}{4\pi H_{0}}\] \[\left(\frac{\epsilon_{\rm{CR}}}{0.05}\right)\left(\frac{f_{pp}}{3 \times 10^{-6}}\right)\left(\frac{f_{\pi,\rm{sup}}}{1}\right)\left(\frac{L_{ \rm{j}}}{2\times 10^{42}\,\rm{erg/s}}\right)\] \[\left(\frac{n_{\rm{rotBH,acc}}}{5\times 10^{-3}\,\rm{Mpc^{-3}}} \right)\left(\frac{R_{p}}{500}\right)^{-1}\left(\frac{f_{z}}{2}\right) \tag{9}\] (solid black lines in Fig. 5), where \(K=1\) and \(K=2\), respectively, denote the average ratio of charged to neutral pion for photohadronic (\(p\gamma\)) and inelastic hadronuclear (\(pp\)) reaction, and \(f_{\pi,\rm{sup}}\) is the fraction of the energy loss before pions decay. Since the neutrino background intensity at \(\sim 10^{5}\)\(\rm{GeV}\) is observed to be \(\sim 10^{-7}\)-\(\rm{10^{-8}~{}GeV~{}cm^{-2}~{}s^{-1}~{}sr^{-1}}\) (red points in Fig. 5a, Aartsen et al., 2015, 2020), neutrinos produced by the internal shocks make only a minor contribution to the background neutrino intensity in the fiducial model. On the other hand, for low \(\Gamma_{\rm{j}}\), \(f_{pp}\) and \(f_{p\gamma}\) are high, and then the internal shocks could significantly contribute to the background intensity (dashed and dotted black lines in Fig. 5a). As AGNs are expected to be major production sites for neutrinos (Barros et al., 2021) especially around \(E_{\nu}\sim 100\) TeV (Abbasi et al., 2022), the model with the jets from sBHs in AGN disks may be a promising scenario for producing the background neutrinos in the Universe. ### Neutrino emission from NGC1068 We here discuss the possibility that the neutrinos from NGC1068 are produced at internal shocks of jets launched from sBHs embedded in AGN disks. Fig. 5b shows the neutrino flux from one jet in our models (models M2 and M3) and the observed neutrino flux from NGC1068. The neutrino flux predicted by model M2 is not high enough to explain the observed flux (dashed black line). In the model with high \(\epsilon_{\rm{B}}\) (model M3), pion cooling via synchrotron radiation is efficient, and therefore the neutrino flux at high energies in \(E_{\nu}\gtrsim 10^{4}\) GeV are suppressed. On the other hand, the neutrino flux at lower energies in \(E_{\nu}\lesssim 10^{4}\) GeV, where pion cooling is inefficient, is high, which is consistent with the observed neutrino flux from NGC1068. Additionally, to reproduce the neutrino flux from NGC1068, high \(\epsilon_{\rm{CR}}\) and \(L_{\rm{j}}\) are required. In the high energy ranges around \(\sim 10^{4}\) GeV, where the atmospheric backgrounds are much smaller, the observed flux is roughly consistent with the prediction by model M3. Although different values of \(\epsilon_{\rm{B}}\) and \(\epsilon_{\rm{CR}}\) (in addition to \(L_{\rm{j}}\)) are required to explain the cosmic neutrino background and the neutrino emission from NGC 1068, this is not unreasonable, since we expect a broad distribution of \(\epsilon_{\rm{B}}\) and \(\epsilon_{\rm{CR}}\). 
Multi-wavelength fits of gamma-ray burst afterglows revealed that the distribution of \(\epsilon_{\rm B}\) is indeed very broad (Panaitescu and Kumar, 2001; Santana et al., 2014), and \(\epsilon_{\rm CR}\) is observationally less constrained.1 If the peaks of the \(\epsilon_{\rm B}\) and \(\epsilon_{\rm CR}\) distributions are as low as in model M2 and the dispersions are large enough to contain values as high as in model M3, the neutrino emission from NGC1068 and the background intensities of gamma rays in the \(\sim\) MeV bands, neutrinos in the \(\lesssim\) PeV bands, and UHECRs can all be explained by a single unified model. Such a unified explanation has not been proposed so far, although several parameters need to change between models M2 and M3. Also, the high value of \(\epsilon_{\rm B}\) required to explain the neutrino flux from NGC1068 may be difficult to achieve. A possible issue is that the jet produced by an sBH at \(\sim 0.01\) pc in NGC1068 needs to be directed to us. Since the SMBH mass and the accretion rate of NGC1068 are high (Pier et al., 1994; Greenhill and Gwinn, 1997) and the number of sBHs in an AGN disk is roughly proportional to the square root of the SMBH mass and the accretion rate (Tagawa et al., 2020, 2021), the number of sBHs in the AGN disk of NGC1068 is higher by a factor of \(\sim 14\) compared to that in the fiducial model. Then, the probability that jets produced from sBHs at \(\sim 0.01\) pc in NGC1068 are directed to us is \(\sim 0.3\), which is viable. On the other hand, this also means that jets from \(\sim 3\) sBHs at \(\sim 1\) pc are directed to us, which may be difficult to reconcile with the X-ray flux from NGC1068. This is because the X-ray emission from the corona in NGC1068 may be mostly absorbed at sub-parsec scales, rather than at several-parsec scales, given the amount of mass estimated on parsec scales (Garcia-Burillo et al., 2016; Imanishi et al., 2018, 2020). In that case, the X-ray emission from jets at \(\sim 1\) pc may not be fully absorbed and would significantly exceed the observed X-ray flux. Hence, dedicated estimates may be required to clarify whether or not this scenario is consistent with hard X-ray observations by NuSTAR (Marinucci et al., 2016; Zaino et al., 2020).

## 5 Discussions

### Fermi bubble

Two large gamma-ray bubbles, the so-called Fermi bubbles, have been discovered above and below the center of our Galaxy by gamma-ray (Su et al., 2010), X-ray (Bland-Hawthorn and Cohen, 2003), and radio telescopes (Finkbeiner, 2004). The size of the bubble is \(\sim 10\) kpc, its shape is almost mirror-symmetric with respect to the Galactic plane, and its origin is unknown (Yang et al., 2018). We discuss whether the Fermi bubble may be related to sBHs in an AGN disk. One of the promising models for the origin of the Fermi bubble is the leptonic jet model, in which a jet is estimated to be launched from the central SMBH 1-3 Myr ago with a duration of \(\sim 0.1\)-0.5 Myr (Yang et al., 2012; Guo and Mathews, 2012; Yang et al., 2022), and the power of the jet is roughly estimated to be \(\sim 10^{43}\)-\(10^{44}\) erg/s (Guo and Mathews, 2012; Yang et al., 2022). We here propose that it is possible to produce a similar bubble by jets launched from sBHs embedded in an AGN disk. In our fiducial model, the power of the jet launched from an sBH is \(L_{\rm j}\sim 10^{42}\)-\(10^{43}\) erg/s, and the active number of the jets is \(N_{\rm rotBH,AGN}\sim 10\) (Eq. 4).
Thus, the total power of the jets launched from the sBHs is consistent with the power required to explain the properties of the Fermi bubble. Here we note that the symmetry of the Fermi bubble is perhaps surprising, given that SMBH jets in general do not point perpendicular to the galaxy disk (Kinney et al., 2000; Hopkins et al., 2012). In the jets from the sBHs, the shocked gas is predicted to be symmetric with respect to the galactic plane, if a pc-scale disk is aligned with the galactic plane (Tagawa et al., 2020). Since the angular momentum direction of pc-scale disks tends to be aligned with that of the nuclear star clusters due to vector resonant relaxation (Kocsis and Tremaine, 2011; Impellizzeri et al., 2019; Levin, 2022), the alignment of the pc-scale disk presumably occurs if the nuclear star cluster rotates in the same direction as the host galaxy, as is often observed, including in our Galaxy (Levin and Beloborodov, 2003; Yelda et al., 2014; Do et al., 2020; Neumayer et al., 2020, but see also Kormendy and Ho 2013 for NGC4258 as a counter-example). Also, the duration of each jet from an sBH is \(\lesssim 10^{3}\) yr (Paper I), which may be consistent with the uniform haze emission, as Yang et al. (2022) suggested that multiple jets may be required to explain the spatial distribution of the microwave emission without being suppressed by magnetic pressure in the vicinity of the Galactic center. Thus, jets from sBHs embedded in an AGN disk may be responsible for producing such bubbles. Note that due to their intermittence and short duration, the jets likely pass through high-pressure regions produced by previous jets and could be significantly decelerated before interacting with the interstellar medium. In this case, non-thermal emission from external shocks of the jets might be inefficient. Such emission merits investigation in the future.

### Hadronic gamma-ray emission

When hadronuclear and photohadronic processes produce high-energy neutrinos in astrophysical environments, these processes inevitably produce gamma rays whose energy and luminosity are comparable to those of neutrinos. If these gamma rays escape from the source, they are not absorbed during their propagation to the Earth, and we should observe gamma rays of GeV-TeV energies. However, the cosmic neutrino background intensity at \(\sim 10\) TeV is higher than the cosmic gamma-ray background intensity at \(\sim 100\) GeV, which implies that the cosmic neutrino sources should be opaque to gamma rays of \(\gtrsim 100\) GeV (Murase et al., 2016; Kimura et al., 2021). The same arguments can be applied to the gamma-ray and neutrino fluxes from NGC1068, and the neutrino emission region needs to be opaque to gamma rays of \(\gtrsim 100\) MeV (Murase et al., 2020; Inoue et al., 2020). In our models M2 and M3, gamma rays at \(\gtrsim\) MeV energies are significantly suppressed by \(\gamma\gamma\) annihilation, and thus, these models are consistent with the gamma-ray and neutrino data for cosmic high-energy backgrounds and NGC 1068. The typical energy of escaping gamma rays should be around MeV energies, and future MeV telescopes will be useful to probe the hadron-induced electromagnetic cascade emission from these systems.

### Caveats

In this section, we discuss caveats in our model. First, we fixed the values of the parameters describing the AGN disks and the sBHs. However, in reality, these parameters should vary from source to source, affecting their contributions to the background.
Additionally, we have not taken into account the growth of sBHs due to gas accretion (Paper I, Section 5.4) and mergers (Tagawa et al., 2021), which is a promising pathway for sBH-sBH mergers reported by LIGO/Virgo/KAGRA (Abbott et al., 2021). The evolution of the sBH mass influences the jet power (\(L_{\rm j}\)) and \(T_{\rm vari}\) (the dependence of the spectral energy distribution on these quantities is presented in Appendix B). More precise estimates of the background intensities that account for these effects should be conducted in the future. Meanwhile, in our model, the role of the AGN disk is only to feed gas to the sBHs; emission from shocks emerging from the AGN disk or from the broad-line regions is not considered, unlike in paper II, Tagawa et al. (2023, paper III), or Wang et al. (2021). In this paper we considered persistent high-energy emission, while in paper II and paper III, respectively, we considered transient breakout emission from shocks emerging due to collisions between AGN disk gas and jets produced from merger remnants and solitary sBHs. Note that we estimated that the transient emission from shocks emerging from collisions between the jets from sBHs and the AGN disks cannot significantly contribute to the gamma-ray and neutrino background intensities. In model M1, a high Lorentz factor of 200 is assumed for fast shells (Appendix A.2). On the other hand, it is unclear whether the shells in the jets can be accelerated to such a high Lorentz factor via the Blandford-Znajek process (e.g. Xiong and Zhang, 2014). Additionally, we assume a high contrast in the Lorentz factors between fast and slow shells (e.g. 200 and 20 in model M1). However, it is unclear whether such a high contrast is commonly realized (e.g. Curd and Narayan, 2022). If either a high value or a high contrast in the Lorentz factor is not achieved, the dissipation rate of the kinetic energy is correspondingly overestimated in our model. Also, note that the equations in Appendix A.2 and A.3 are approximated assuming that the Lorentz factor of the shocked gas is much higher than 1. If the Lorentz factor of the shocked gas is close to 1, the properties of electromagnetic emission and the jet structure need to be appropriately modified. We note that the structures of the jets and their dissipation are simply prescribed in this study. To quantitatively discuss the dissipation of the jets and their emission, detailed numerical simulations would be required. The neutrino spectrum in our model might be affected by a few processes that are not included in our calculations. First, we have not considered suppression of neutrino production due to the Bethe-Heitler process, which is determined by photon spectra. The photon spectrum is also modified if we take into account the electromagnetic cascade emission initiated by two-photon interactions and hadronic processes. In models M2 and M3, such emission likely produces a flatter spectral energy distribution below the energy limited by \(\gamma\gamma\) annihilation. We confirmed by performing numerical computations2 that the suppression is not significant in our models because of the flat photon spectrum produced by the cascade emission. Muon cooling is also not taken into account, which may reduce the neutrino flux by a factor of \(\sim 2\). In model M3, this process is important at \(E_{\nu}\sim 1-10\) TeV. Footnote 2: We modify the codes used in Kimura et al. (2019); Kimura and Toma (2020) to match the physical conditions in the current model.
### Evolution of sBHs

Here, we discuss how much sBHs grow by accretion and their possible fate (see also Levin, 2007; McKernan et al., 2012), as their growth is almost inevitable if they are to explain the large power required to produce the various background intensities discussed in this paper. In the fiducial model, the jet power is set to \(2\times 10^{42}\) erg/s, corresponding to an accretion rate of \(3\times 10^{-4}\) M\({}_{\odot}\)/yr\((\eta_{\rm j}/0.1)^{-1}\). Assuming that each sBH accretes for \(\sim 4\) Myr per AGN phase and the accretion rate is constant, the mass of an sBH increases to \(m\sim 10^{3}\) M\({}_{\odot}\). The timescale for these (modestly) intermediate-mass BHs to migrate and inspiral to the central SMBH only by gaseous and GW torques is longer than the active AGN phase (Tagawa et al., 2022), but we consider migration by stellar torques during longer quiescent phases. Since the relaxation timescale of the nuclear star cluster is \(t_{\rm rel}\lesssim 10^{11}\) yr (Merritt, 2010; Kocsis and Tremaine, 2011), and the migration timescale of an sBH with mass \(m\) is roughly given by \(\sim t_{\rm rel}m_{\rm ave}/m\sim 10^{8}\) yr\((t_{\rm rel}/10^{11}\) yr\()[(m/m_{\rm ave})/10^{3}]^{-1}\), where \(m_{\rm ave}\) is the average mass of stars in a nuclear star cluster, the grown BHs migrate towards and accrete onto the central SMBH during quiescent phases, whose duration is roughly given as \(\sim t_{\rm H}/N_{\rm AGN,H}\sim 10^{9}\) yr\((t_{\rm H}/10^{10}\) yr\()(N_{\rm AGN,H}/10)^{-1}\), where \(N_{\rm AGN,H}\) is the number of AGN phases per galaxy during the Hubble timescale, \(t_{\rm H}\). During the Salpeter timescale of \(\sim 40\) Myr, inspirals of a hundred sBHs with mass \(10^{3}\) M\({}_{\odot}\) enhance the SMBH mass (\(10^{6}\) M\({}_{\odot}\) in the fiducial model) by \(\sim 10\%\). Thus, if the high-energy phenomena are caused by sBHs in AGN disks and the number of sBHs in an AGN disk is not significantly underestimated in our model, sBHs grow to intermediate-mass BHs, and then, they merge with SMBHs during quiescent phases, and can be observed as intermediate-mass ratio inspirals (IMRIs) by the Laser Interferometer Space Antenna (Amaro-Seoane et al., 2022).

## 6 Conclusions

In this paper, we considered high-energy EM, neutrino, and cosmic-ray emissions arising from BZ jets launched from rapidly accreting and spinning sBHs embedded in AGN disks. Our main results are summarized as follows: 1. GeV gamma-ray emission from nearby radio-quiet Seyfert galaxies can be explained by emission from the jets launched from sBHs if the Lorentz factor of the jets is high (\(\Gamma_{\rm j}\gtrsim 30\)). This model can contribute \(\gtrsim 50\%\) of the gamma-ray emission from NGC1068 and \(\sim 10-20\%\) of the background gamma-ray intensity in the \(\sim\) MeV-GeV bands. 2. For low \(\Gamma_{\rm j}\) (\(\lesssim 4\)), neutrino emission from the jets can explain the cosmic neutrino background intensity at neutrino energies \(\lesssim 10^{6}\) GeV. The background in the \(\sim\) MeV bands can also be reproduced in this model if the efficiency of magnetic field amplification (\(\epsilon_{\rm B}\)) is low. In this case, we predict that future gamma-ray telescopes in \(\sim\) MeV bands can detect MeV gamma-rays from nearby radio-quiet AGNs, providing a test of our model. 3. Neutrino emission from NGC1068 can be explained by the jet model with moderate \(\Gamma_{\rm j}\) and high \(\epsilon_{\rm B}\), \(\epsilon_{\rm CR}\), and \(L_{\rm j}\).
With this parameter set, jets by sBHs in AGNs can accelerate protons up to energies of \(\sim 10^{19}\) eV and account for the observed intensity of UHECRs at these energies. 4. If \(\epsilon_{\rm B}\) and \(\epsilon_{\rm CR}\) have a broad distribution from source to source, our model can simultaneously explain the neutrino flux from NGC1068 as well as the background intensities of gamma rays in \(\sim\) MeV bands, neutrinos in \(\lesssim\) PeV bands, and UHECRs, for the first time.

While AGNs are known to have relativistic jets driven by their central SMBHs, our results in this paper suggest that the population of stellar-mass BHs, embedded in the accretion disk fueling the central SMBH, can collectively produce similar phenomena, with energetics comparable to those of the SMBH jet, owing to the larger number of jets from sBHs. We found that the jets from sBHs in AGN disks can explain various phenomena, which are observed to be unrelated to jets from SMBHs. Our scenario is consistent with the observational indications that the background neutrino (Murase and Waxman, 2016) and UHECR (Takami et al., 2016) emission is produced by sources with a high local density.

We thank Kohta Murase for useful comments. This work was financially supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant Numbers JP21J00794 (HT) and 22K14028 (SSK). S.S.K. acknowledges support by the Tohoku Initiative for Fostering Global Researchers for Interdisciplinary Sciences (TIFRIS) of MEXT's Strategic Professional Development Program for Young Researchers. Z.H. was supported by NASA grant NNX15AB19G and NSF grants AST-2006176 and AST-1715661.

## Appendix A Emissions

We here describe the properties of the internal shocks in the jets and non-thermal emission produced from them.

### Accretion onto sBHs

We outline how the jet is launched from an accreting sBH embedded in an AGN disk (see Paper I for details). In the AGN disk, the gas accreting onto an sBH forms a circum-sBH disk (CsBD). When the CsBD is advection dominated, a magnetically dominated state can be realized owing to the accumulation of the magnetic flux in the vicinity of the sBH (e.g. Meier, 2001; Cao, 2011; Kimura et al., 2021). Even if the magnetic flux is initially weak, the outflow from the disk converts the toroidal magnetic field generated by the shear motion into a poloidal field (Liska et al., 2020). Such advection-dominated flows are expected for super-Eddington accretion rates (Abramowicz et al., 1988) or low accretion rates (e.g. Narayan and Yi, 1994; Blandford and Begelman, 1999). In these cases, the jets from spinning sBHs can be launched through the BZ process (Blandford and Znajek, 1977; Curd and Narayan, 2022). Because super-Eddington accretion is predicted in an inner CsBD (Paper I), the BZ jet is expected to be launched from rapidly accreting and spinning sBHs in AGN disks. In the process, the jet power (\(L_{\rm j}\)) is proportional to the mass accretion rate onto the sBH (\(\dot{M}_{\rm sBH}\)) as \[L_{\rm j}=\eta_{\rm j}\dot{M}_{\rm sBH}c^{2},\] (A1) where \(\eta_{\rm j}\) is the jet conversion efficiency to the kinetic energy, which is approximated as \(\eta_{\rm j}\sim a_{\rm sBH}^{2}\) for a magnetically dominated state (e.g. Tchekhovskoy et al., 2010; Narayan et al., 2022), \(a_{\rm sBH}\) is the dimensionless spin of the sBH, and \(c\) is the speed of light. We assume that the accretion rate onto sBHs in the AGN disk is given by the Bondi-Hoyle-Lyttleton (BHL) rate as used in Eq. (1) of Paper I.
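As a quick illustration of Eq. (A1), the following minimal sketch evaluates the jet power for an accretion rate of \(3\times 10^{-4}\) M\({}_{\odot}\)/yr and \(\eta_{\rm j}=0.1\), the fiducial combination quoted in the Evolution of sBHs discussion above; the specific numbers and the helper function name are assumptions made here for illustration only.

```python
# Minimal sketch of Eq. (A1): L_j = eta_j * Mdot * c^2,
# with eta_j ~ a_sBH^2 for a magnetically dominated state.
M_sun = 1.989e33          # g
yr = 3.156e7              # s
c = 2.998e10              # cm/s

def bz_jet_power(mdot_msun_per_yr, a_sbh=None, eta_j=None):
    """Jet power in erg/s; pass either the spin (eta_j ~ a^2) or eta_j directly."""
    if eta_j is None:
        eta_j = a_sbh**2
    mdot = mdot_msun_per_yr * M_sun / yr   # accretion rate in g/s
    return eta_j * mdot * c**2

# Fiducial combination quoted in the text:
print(f"L_j ~ {bz_jet_power(3e-4, eta_j=0.1):.1e} erg/s")   # ~2e42 erg/s
```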
In the BHL formula, the accretion rate is determined by the AGN disk density and temperature, and by the Hill radius of the sBH.

### Properties of jets

To dissipate the kinetic energy of jets, we assume multiple shells with different Lorentz factors collide with each other at some dissipation radius (\(Z_{\rm diss}\)), as is widely assumed to explain the prompt emission of gamma-ray bursts. The relative Lorentz factor between the slower (with Lorentz factor \(\Gamma_{\rm s}=(1-\beta_{\rm s}^{2})^{-1/2}\)) and faster shells (with Lorentz factor \(\Gamma_{\rm r}=(1-\beta_{\rm r}^{2})^{-1/2}\)) is \[\Gamma_{\rm rel}=\Gamma_{\rm s}\Gamma_{\rm r}(1-\beta_{\rm s}\beta_{\rm r}).\] (A2) The relative Lorentz factor \(\Gamma_{\rm rel}\) is also related to the relative Lorentz factors between the unshocked faster and slower shells in the rest frame of the shocked fluid (\(\Gamma_{12}\) and \(\Gamma_{34}\)) as \[\Gamma_{\rm rel}=\Gamma_{12}\Gamma_{34}+\sqrt{\Gamma_{12}^{2}-1}\sqrt{\Gamma_{34}^{2}-1},\] (A3) and the ratio of the number densities of the faster (\(n_{\rm r}^{\prime}\)) and slower (\(n_{\rm s}^{\prime}\)) shells in the fluid rest frame is related to them as \[\frac{n_{\rm r}^{\prime}}{n_{\rm s}^{\prime}}=\frac{(\Gamma_{34}-1)(4\Gamma_{34}+3)}{(\Gamma_{12}-1)(4\Gamma_{12}+3)}.\] (A4) The Lorentz factor of the shocked jet in the rest frame of the sBH is approximately given as \[\Gamma_{\rm j}\approx\Gamma_{\rm r}(\Gamma_{12}-\sqrt{\Gamma_{12}^{2}-1}).\] (A5) If we assume \(\Gamma_{\rm r}=200\), \(\Gamma_{\rm s}=20\), and \(\frac{n_{\rm r}^{\prime}}{n_{\rm s}^{\prime}}=\Gamma_{\rm s}^{2}/\Gamma_{\rm r}^{2}\), then \(\Gamma_{12}\sim 3.6\), and \(\Gamma_{\rm j}\sim 28\), which we assume as fiducial values. Since the results are sensitive to \(\Gamma_{\rm j}\), we also consider the models (M2, M3) with \(\Gamma_{\rm r}=30\) and \(\Gamma_{\rm s}=3\), in which \(\Gamma_{12}\sim 3.7\), and \(\Gamma_{\rm j}=4.2\).

### Non-thermal photons

We assume that the fraction \(\epsilon_{e}\) of the kinetic energy of the shock is used to accelerate electrons in a collisionless shock. Here, the plasma and/or MHD instabilities are considered to amplify the magnetic field to \(\epsilon_{\rm B}\lesssim 10^{-3}\)-\(10^{-1}\) and electrons are accelerated via the first-order Fermi process with the energy fraction of \(\epsilon_{e}\lesssim 10^{-2}\)-0.3 (from observations, e.g., Waxman and Loeb, 1999; Panaitescu and Kumar, 2001; Frail et al., 2005; Uchiyama et al., 2007; Santana et al., 2014, and theoretical studies, e.g., Medvedev and Loeb, 1999; Chang et al., 2008; Spitkovsky, 2008; Martins et al., 2009; Keshet et al., 2009; Sironi et al., 2013; Tomita et al., 2019). Assuming the fast cooling regime (Fan and Piran, 2008), which is adequate for the internal shocks, the synchrotron luminosity is calculated as \[L_{\rm syn}\sim\frac{L_{\rm kin}\epsilon_{e}f_{\gamma\gamma}}{1+Y_{\rm SSC}},\] (A6) (e.g. Fan and Piran, 2008), where \(f_{\gamma\gamma}\) is the attenuation fraction by the \(\gamma\gamma\) annihilation given below, and \(Y_{\rm SSC}\) is the ratio of the synchrotron self-Compton power to the synchrotron power, calculated as \(Y_{\rm SSC}=(-1+\sqrt{1+4\epsilon_{e}/\epsilon_{\rm B}})/2\) (Fan and Piran, 2008).
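The fiducial values quoted after Eq. (A5) (\(\Gamma_{12}\sim 3.6\) and \(\Gamma_{\rm j}\sim 28\) for \(\Gamma_{\rm r}=200\), \(\Gamma_{\rm s}=20\)) can be recovered by solving Eqs. (A2)-(A5) numerically. The sketch below is a minimal root-finding example under exactly those stated assumptions; the use of SciPy is a tooling choice made here and is not part of the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch: recover Gamma_12, Gamma_34 and Gamma_j of Eqs. (A2)-(A5)
# from the fiducial shell Lorentz factors (Gamma_r=200, Gamma_s=20,
# n_r'/n_s' = Gamma_s^2/Gamma_r^2, as stated in the text).
G_r, G_s = 200.0, 20.0
dens_ratio = G_s**2 / G_r**2

beta = lambda g: np.sqrt(1.0 - 1.0 / g**2)
G_rel = G_s * G_r * (1.0 - beta(G_s) * beta(G_r))           # Eq. (A2)

def gamma34_from_gamma12(g12):
    # Invert Eq. (A4): (G34-1)(4 G34+3) = dens_ratio*(G12-1)(4 G12+3)
    rhs = dens_ratio * (g12 - 1.0) * (4.0 * g12 + 3.0)
    x = (-7.0 + np.sqrt(49.0 + 16.0 * rhs)) / 8.0            # positive root, x = G34 - 1
    return 1.0 + x

def residual(g12):
    g34 = gamma34_from_gamma12(g12)
    # Eq. (A3) must reproduce Gamma_rel
    return g12 * g34 + np.sqrt((g12**2 - 1.0) * (g34**2 - 1.0)) - G_rel

G_12 = brentq(residual, 1.001, 20.0)
G_j = G_r * (G_12 - np.sqrt(G_12**2 - 1.0))                  # Eq. (A5)
print(f"Gamma_rel ~ {G_rel:.2f}, Gamma_12 ~ {G_12:.2f}, Gamma_j ~ {G_j:.0f}")
# -> Gamma_12 ~ 3.6, Gamma_j ~ 28, as quoted in the text
```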
We assume that electrons are accelerated in the shock to a power-law distribution of Lorentz factor \(\gamma_{e}^{\prime}\) as \(N(\gamma_{e}^{\prime})d\gamma_{e}^{\prime}\propto\gamma_{e}^{\prime-p}d\gamma_{e}^{\prime}\) with minimum (\(\gamma_{\rm m}^{\prime}\)) and maximum (\(\gamma_{\rm max}^{\prime}\)) Lorentz factors. The minimum Lorentz factor \(\gamma^{\prime}_{\rm m}\) is \[\gamma^{\prime}_{\rm m}\sim\epsilon_{e}\left(\frac{p-2}{p-1}\right)\frac{m_{p}}{m_{e}}(\Gamma_{12}-1)\sim 30\ \left(\frac{\epsilon_{e}}{0.02}\right)\left(\frac{\Gamma_{12}-1}{2.6}\right)\] (A7) for \(p=2.5\), where \(m_{e}\) is the electron mass. By comparing the cooling by synchrotron radiation with the acceleration by the first-order Fermi mechanism, the maximum Lorentz factor of electrons is \[\gamma^{\prime}_{\rm max}=\left(\frac{6\pi e}{\sigma_{\rm T}B^{\prime}_{\rm j}\xi}\right)^{1/2}\sim 1\times 10^{6}\ \xi^{-1/2}\left(\frac{\Gamma_{\rm j}-1}{27}\right)^{-1/4}\left(\frac{\epsilon_{\rm B}}{0.01}\right)^{-1/4}\left(\frac{n^{\prime}_{p}}{9\times 10^{10}\,{\rm cm}^{-3}}\right)^{-1/4}\] (A8) where \(\xi\) is the parameter representing the ratio of the mean free path to the Larmor radius of electrons, which is adopted to be \(\xi=1\) in this paper, \[B^{\prime}_{\rm j}=(8\pi\epsilon_{\rm B}e^{\prime}_{\rm j})^{1/2}\sim 1\times 10^{4}\ {\rm G}\ \left(\frac{\Gamma_{12}-1}{2.6}\right)^{1/2}\left(\frac{\epsilon_{\rm B}}{0.01}\right)^{1/2}\left(\frac{n^{\prime}_{p}}{9\times 10^{10}\,{\rm cm}^{-3}}\right)^{1/2}\] (A9) is the magnetic field, \[e^{\prime}_{\rm j}=(\Gamma_{12}-1)n^{\prime}_{p}m_{p}c^{2}\] (A10) is the internal energy density of the shocked jet, and \(n^{\prime}_{p}\) is the proton number density of the shocked jet. In the fiducial model, non-thermal emission is characterized by the fast-cooling regime (e.g. Sari et al., 1998; Fan & Piran, 2008). The cooling timescale for electrons with \(\gamma^{\prime}_{\rm m}\) is \[t_{\rm c}(\gamma^{\prime}_{\rm m})\sim 0.009\,{\rm s}\ \left(\frac{\epsilon_{\rm B}}{0.01}\right)^{-1}\left(\frac{\Gamma_{12}-1}{2.6}\right)^{-1}\left(\frac{n^{\prime}_{p}}{9\times 10^{10}\,{\rm cm}^{-3}}\right)^{-1}\left(\frac{\gamma^{\prime}_{\rm m}}{30}\right)^{-1}\left(\frac{\Gamma_{\rm j}}{30}\right)^{-1}.\] (A11) From \(t_{\rm c}(\gamma^{\prime}_{\rm m})\), the typical shell width of electrons with \(\gamma^{\prime}_{\rm m}\) emitting the synchrotron photons is approximated as \(\Delta_{\rm shell}(\gamma^{\prime}_{\rm m})\sim t_{\rm c}(\gamma^{\prime}_{\rm m})c\sim 3\times 10^{8}\,{\rm cm}\ [t_{\rm c}(\gamma^{\prime}_{\rm m})/0.009\ {\rm s}]\). The Lorentz factor at which self-absorption becomes effective is \[\gamma^{\prime}_{\rm a}=\gamma^{\prime}_{\rm m}\times\left(\tau_{q}C_{q+1}\right)^{1/(q+4)}\] (A12) (Rybicki & Lightman, 1979; Fouka & Ouichaoui, 2009, 2011), where \[\tau_{q}=\frac{\pi}{3\sqrt{2}}\frac{(q^{2}+q-2)\gamma^{\prime-5}_{1}}{1-(\gamma^{\prime}_{2}/\gamma^{\prime}_{1})^{-q+1}}\frac{en^{\prime}_{\rm p}\Delta_{\rm shell}(\gamma^{\prime}_{\rm a})}{\Gamma_{\rm j}B^{\prime}_{\rm j}},\] (A13) \(\gamma^{\prime}_{2}\) and \(\gamma^{\prime}_{1}\) are the maximum and minimum Lorentz factors of non-thermal electrons with power-law index \(q\), respectively, \[C_{q}=\frac{2^{(q+1)/2}}{q+1}\Gamma\left(\frac{q}{4}-\frac{1}{12}\right)\Gamma\left(\frac{q}{4}+\frac{19}{12}\right),\] (A14) \(\Gamma\) is the Gamma function, and we assume that electrons are randomly oriented in the frame of the shocked fluid.
We set \(\gamma^{\prime}_{2}=\gamma^{\prime}_{\rm max}\), \(\gamma^{\prime}_{1}=\gamma^{\prime}_{\rm m}\), and \(q=p_{\rm e}\) for \(\gamma^{\prime}_{\rm m}<\gamma^{\prime}_{\rm a}\), and \(\gamma^{\prime}_{2}=\gamma^{\prime}_{\rm m}\), \(\gamma^{\prime}_{1}=\gamma^{\prime}_{\rm a}\), and \(q=2\) for \(\gamma^{\prime}_{\rm a}<\gamma^{\prime}_{\rm m}\). Using the variables derived above, we assume the spectral shapes for synchrotron emission and synchrotron self-Compton scattering as prescribed in Paper II, while we additionally take into account the bolometric correction as prescribed by \(R_{p}\) in Eq. (8). The spectral shapes for models M1-M3 are presented in Fig. 2. In the fiducial model, we do not consider second-order inverse Compton scattering, as the Klein-Nishina effect is effective. When the optical depth to the \(\gamma\gamma\) annihilation (\(\tau_{\gamma\gamma}\)) exceeds unity, the high-energy photons with \(h\nu^{\prime}>m_{e}c^{2}=511\) keV can be absorbed, where \(h\) is the Planck constant, and \(\nu\) is the photon frequency. To take into account \(\gamma\gamma\) attenuation, we refer to Eq. (6) of Kimura (2022) for \(\tau_{\gamma\gamma}\). We assume that the attenuation factor by the \(\gamma\gamma\) annihilation is given by \(f_{\gamma\gamma}(\nu)=\left(1-\exp[-\tau_{\gamma\gamma}(h\nu)]\right)/\tau_{\gamma\gamma}(h\nu)\). The peak luminosity for attenuating photons is assumed to be Eq. (A6), and the peak frequency to be the maximum of the minimum and absorption frequencies of the synchrotron radiation.

### Pion production efficiency

We calculate the efficiency of the \(p\gamma\) reaction (\(f_{p\gamma}\)) and the pion cooling suppression factor (\(f_{\pi,\rm sup}\)) by referring to Eq. (28) and the two asymptotes stated below Eq. (33) of Kimura (2022), respectively. Comparing the dynamical timescale (\(t^{\prime}_{\rm dyn}\)) to the \(pp\) cooling timescale (\(t^{\prime}_{pp}\)), the efficiency of the \(pp\) interaction is calculated as \[f_{pp}\approx n^{\prime}_{p}\kappa_{p}\sigma_{pp}Z_{\rm diss}/\Gamma_{\rm j}\sim 3\times 10^{-6}\left(\frac{L_{\rm j}/p_{\theta}}{1\times 10^{44}\ {\rm erg\,s^{-1}}}\right)\left(\frac{\Gamma_{\rm j}}{30}\right)^{-3}\left(\frac{Z_{\rm diss}}{5\times 10^{10}\ {\rm cm}}\right)^{-1}\] (A15) (e.g. Murase et al., 2014), where \[n^{\prime}_{p}=L_{\rm j}/(4\pi\Gamma_{\rm j}^{2}p_{\theta}Z_{\rm diss}^{2}m_{p}c^{3})\] (A16) is the proton number density in the jet, \(\kappa_{p}\approx 0.5\) is the proton inelasticity, and \(\sigma_{pp}\approx 4\times 10^{-26}\ {\rm cm}^{2}\) is the cross section of the \(pp\) interactions at \(\sim 10\)-\(100\) TeV. To derive the efficiency of the \(p\gamma\) interaction, we adopt the delta function approximation. We assume that most interactions occur via the \(\Delta\)-resonance process at the photon energy around the resonance peak (\(\bar{\epsilon}_{\rm pk}\simeq 0.3\) GeV), and approximate the cross section and inelasticity to be \[\sigma_{p\gamma}\kappa_{p\gamma}\simeq\sigma_{\Delta}\kappa_{\Delta}\Delta\bar{\epsilon}_{\rm pk}\delta(\bar{\epsilon}_{\gamma}-\bar{\epsilon}_{\rm pk}),\] (A17) where \(\sigma_{\Delta}\sim 5\times 10^{-28}\ {\rm cm}^{2}\), \(\kappa_{\Delta}\simeq 0.2\), and \(\bar{\epsilon}_{\rm pk}\simeq 0.3\) GeV are the cross section, inelasticity, and the photon energy at the resonance peak, \(\Delta\bar{\epsilon}_{\rm pk}\sim 0.2\) GeV is the peak width, \(\delta(x)\) is the Dirac delta function, and \(\bar{\epsilon}_{\gamma}\) is the photon energy in the proton rest frame.
For \(E_{p}<E_{p,\rm br}\), where \[E_{p,\rm br}=\frac{\Gamma_{\rm j}^{2}\bar{\epsilon}_{\rm pk}m_{p}c^{2}}{2E_{\gamma,\rm br}},\] (A18) \(E_{\gamma,\rm br}\) is the peak photon energy in the BH rest frame, the fraction of cosmic-ray protons producing pions through photomeson production is approximated as \[f_{p\gamma}\approx\frac{\sigma_{\Delta}\kappa_{\Delta}\Delta\bar{\epsilon}_{\rm pk}L_{\gamma,\rm iso,br}}{2\pi c\bar{\epsilon}_{\rm pk}\Gamma_{\rm j}^{2}Z_{\rm diss}E_{\gamma,\rm br}(3-s)}(E_{p}/E_{p,\rm br})^{1-s}\] (A19) (e.g. Kimura, 2022), where \(L_{\gamma,\rm iso,br}\) is the isotropic synchrotron luminosity at \(E_{\gamma}=E_{\gamma,\rm br}\), and \(s\) is the power-law slope of the synchrotron spectrum for \(E_{\gamma}>E_{\gamma,\rm br}\). In model M2, \[f_{p\gamma}\sim 7\left(\frac{L_{\gamma,\rm iso,br}}{3\times 10^{41}\,\rm erg/s}\right)\left(\frac{Z_{\rm diss}}{10^{9}\,\rm cm}\right)^{-1}\left(\frac{\Gamma_{\rm j}}{4}\right)^{-3}\left(\frac{E_{\gamma,\rm br}}{200\,\rm eV}\right)^{-1}\] (A20) at \(E_{p}=E_{p,\rm br}\) and \(E_{p,\rm br}\sim 3\times 10^{6}\,\ {\rm GeV}(\Gamma_{\rm j}/4)^{2}(E_{\gamma,\rm br}/200\,{\rm eV})^{-1}\). Since the delta function approximation cannot predict \(f_{p\gamma}\) above \(E_{p,\rm br}\), we do not present the neutrino luminosities in these regions in Fig. 5. This approximation does not affect the detectability of neutrinos, as pion cooling suppresses the neutrino fluence in the high-energy ranges. In models M1-M3, the \(pp\) reaction dominates the neutrino production at lower energies, while the \(p\gamma\) reaction dominates over the \(pp\) reaction at intermediate energies of \(10^{4}\,\rm GeV\lesssim E_{\nu}\lesssim 10^{6}\,\rm GeV\) (Fig. 5).

### Parameter space for particle acceleration

Here we discuss the parameter space in which electrons and protons can be efficiently accelerated in internal shocks of the jets. If the upstream gas is optically thick to incident photons, the shocks are mediated by interactions with the photons (e.g. Ito et al., 2020). In this case, the velocity jump at the shock is gradual, and particle acceleration should be inefficient. The optical depth of the upstream shell is estimated as \[\tau_{\rm u}=n_{\rm r}^{\prime}\sigma_{\rm T}\frac{Z_{\rm diss}}{\Gamma_{\rm r}}=\frac{L_{\rm j,iso}\sigma_{\rm T}}{4\pi\Gamma_{\rm r}^{3}Z_{\rm diss}m_{p}c^{3}}\simeq 0.004\left(\frac{L_{\rm j,iso}}{10^{44}\,\rm erg/s}\right)\left(\frac{\Gamma_{12}}{4}\right)^{-3}\left(\frac{\Gamma_{\rm j}}{4}\right)^{-5}\left(\frac{T_{\rm vari}}{10^{-3}\,\rm s}\right)^{-1},\] (A21) where \(\sigma_{\rm T}\) is the Thomson scattering cross section, and we used the approximation \(\Gamma_{\rm r}\sim 2\Gamma_{12}\Gamma_{\rm j}\) in the last equality. Hence, in the models we investigated (M1-M3), internal shocks are collisionless and particles can be efficiently accelerated (see Fig. 6).

### Correction for redshift evolution

We here discuss the correction factor (\(f_{z}\)) due to the redshift evolution of the number density of sources (Waxman and Bahcall, 1998), which needs to be taken into account to estimate the diffuse background intensity. For instance, \(f_{z}\sim 3\) and \(f_{z}\sim 0.6\) when the number density of sources is proportional to \((z+1)^{3}\) and \((z+1)^{0}\), respectively. The value of \(f_{z}\) in our scenario should depend on two things: the number of sBHs captured by AGN disks and the typical accretion rate onto the sBHs.
The accretion rate onto sBHs at a fixed radius is higher for highly-accreting AGNs, but sBHs around heavier SMBHs tend to be captured in outer regions, where the accretion rate onto sBHs is lower. These two effects likely compensate each other, and thus, we assume that the accretion rate onto sBHs is independent of the AGN luminosity. On the other hand, the number of sBHs is higher for high-luminosity AGNs (\(L_{X}\sim 10^{43}\) erg/s) (Tagawa et al., 2021), whose number density strongly depends on redshift, \(\sim(1+z)^{3}-(1+z)^{4}\) (Ueda et al., 2014). If we instead assume the number of sBHs is independent of the AGN luminosity, sBHs in lower-luminosity AGNs, whose number density is almost independent of redshift, \(\sim(1+z)^{0}\) (Ueda et al., 2014), should also provide a significant contribution. Thus, the source evolution in our scenario should be between \((1+z)^{3}\) and \((1+z)^{0}\). In this study, we adopt \(f_{z}=2\).

## Appendix B Parameter dependence

In this section, we present the parameter dependence of the contribution of emission from internal shocks in jets to the gamma-ray (Fig. 7) and neutrino background intensities (Fig. 8). Fig. 7 shows the dependence of the gamma-ray flux from internal shocks on the parameters \(p\), \(\epsilon_{e}\), \(T_{\rm vari}\), and \(L_{\rm j}\). All of these parameters influence the minimum synchrotron frequency (\(\nu_{\rm m}=\gamma_{\rm m}^{2}\nu_{\rm syn}\), where \(\nu_{\rm syn}\) is the synchrotron frequency). This is because \(\gamma_{\rm m}\) depends on \(\epsilon_{e}\) and \(p\) (Eq. A7), while \(\nu_{\rm syn}\) depends on \(T_{\rm vari}\) and \(L_{\rm j}\) as \(\nu_{\rm syn}\propto B_{\rm j}^{\prime}\propto n_{\rm p}^{\prime 1/2}\propto L_{\rm j}^{1/2}T_{\rm vari}^{-1}\). In addition, \(p\) influences the power-law slope above \(\nu_{\rm m}\) (dashed black line in Fig. 7a), and \(\epsilon_{e}\) and \(L_{\rm j}\) influence the total radiation energy by non-thermal emission (dotted black lines in Fig. 7). Note that \(n_{\rm rotBH,acc}\) also linearly affects the intensity (\(E_{\gamma}^{2}\Phi_{\gamma}\propto n_{\rm rotBH,acc}\)). Thus, the gamma-ray intensity depends on various parameters. Nevertheless, gamma-ray emission is not eliminated by these parameters, unlike in the model with a different \(\Gamma_{\rm j}\) shown in the main manuscript (Fig. 4).

Figure 6.— The parameter range for efficient particle acceleration on the \(\Gamma_{\rm j}\)–\(L_{\rm j,iso}\) plane for \(\Gamma_{12}=4\) and \(T_{\rm vari}=10^{-3}\,\rm s\). The parameters adopted for models M1, M2, and M3 are presented by orange, cyan, and red circles, respectively.

Fig. 8 shows the dependence of the neutrino flux from the internal shocks on several parameters. The neutrino intensity at low energies is low for the long-\(T_{\rm vari}\) model and high for the high-\(L_{\rm j}\) model (dotted black and solid orange lines, respectively), since \(f_{pp}\) is proportional to the jet density (\(f_{pp}\propto n_{p}^{\prime}\propto L_{\rm j}T_{\rm vari}^{-2}\)). Also, for long \(T_{\rm vari}\) or low \(L_{\rm j}\), synchrotron cooling during the pion stage is inefficient, due to the weak magnetic fields in the shocks, and the neutrino intensities are then high at high energies. Similar to the gamma-ray intensity, \(n_{\rm rotBH,acc}\) linearly influences the neutrino intensity (\(E_{\nu}^{2}\Phi_{\nu}\propto n_{\rm rotBH,acc}\)).
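The collisionless-shock condition of Eq. (A21) and Fig. 6 can be checked with a few lines of arithmetic. The sketch below assumes \(\Gamma_{\rm r}\approx 2\Gamma_{12}\Gamma_{\rm j}\) (as stated in the text) and \(Z_{\rm diss}\approx 2\Gamma_{\rm j}^{2}cT_{\rm vari}\) (an assumption made here, consistent with the quoted scalings but not an exact transcription of the paper's convention); with parameters of the order used for models M1-M3 it reproduces \(\tau_{\rm u}\ll 1\).

```python
import math

# Minimal sketch of Eq. (A21): optical depth of the upstream shell.
# Assumed conventions: Gamma_r ~ 2*Gamma_12*Gamma_j and Z_diss ~ 2*Gamma_j^2*c*T_vari.
sigma_T = 6.652e-25   # cm^2
m_p = 1.673e-24       # g
c = 2.998e10          # cm/s

def tau_upstream(L_iso, Gamma_12, Gamma_j, T_vari):
    Gamma_r = 2.0 * Gamma_12 * Gamma_j
    Z_diss = 2.0 * Gamma_j**2 * c * T_vari
    return L_iso * sigma_T / (4.0 * math.pi * Gamma_r**3 * Z_diss * m_p * c**3)

# Example parameters (assumed, of the order quoted in Eq. (A21)):
tau = tau_upstream(L_iso=1e44, Gamma_12=4.0, Gamma_j=4.0, T_vari=1e-3)
print(f"tau_u ~ {tau:.1e}")
# -> ~4e-3 << 1: the shock is collisionless and particles can be accelerated
```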
2306.09349
UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video
We present UrbanIR (Urban Scene Inverse Rendering), a new inverse graphics model that enables realistic, free-viewpoint renderings of scenes under various lighting conditions with a single video. It accurately infers shape, albedo, visibility, and sun and sky illumination from wide-baseline videos, such as those from car-mounted cameras, differing from NeRF's dense view settings. In this context, standard methods often yield subpar geometry and material estimates, such as inaccurate roof representations and numerous 'floaters'. UrbanIR addresses these issues with novel losses that reduce errors in inverse graphics inference and rendering artifacts. Its techniques allow for precise shadow volume estimation in the original scene. The model's outputs support controllable editing, enabling photorealistic free-viewpoint renderings of night simulations, relit scenes, and inserted objects, marking a significant improvement over existing state-of-the-art methods.
Zhi-Hao Lin, Bohan Liu, Yi-Ting Chen, Kuan-Sheng Chen, David Forsyth, Jia-Bin Huang, Anand Bhattad, Shenlong Wang
2023-06-15T17:59:59Z
http://arxiv.org/abs/2306.09349v3
# UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video ###### Abstract We show how to build a model that allows realistic, free-viewpoint renderings of a scene under novel lighting conditions from video. Our method - UrbanIR (**Urban Scene I**mverse R**n**ndering) - computes an inverse graphics representation from the video. UrbanIR jointly infers shape, albedo, visibility, and sun and sky illumination from a single video of unbounded outdoor scenes with unknown lighting. UrbanIR uses videos from cameras mounted on cars (in contrast to many views of the same points in typical NeRF-style estimation). As a result, standard methods produce poor geometry estimates (for example, roofs), and there are numerous "floaters". Errors in inverse graphics inference can result in strong rendering artifacts. UrbanIR uses novel losses to control these and other sources of error. UrbanIR uses a novel loss to make very good estimates of shadow volumes in the original scene. The resulting representations facilitate controllable editing, delivering photorealistic free-viewpoint renderings of relit scenes and inserted objects. Qualitative evaluation demonstrates strong improvements over the state-of-the-art. ## 1 Introduction We show how to build a model that allows realistic, free-viewpoint renderings of a scene under novel lighting conditions from video. So, for example, a sunny afternoon video of a large urban scene can be shown at different times of day or night (as in Fig. 1), viewed from novel viewpoints, and shown with inserted objects. Our method - _UrbanIR_ (**Urban** Scene I**nverse R**n**ndering) - computes an inverse graphics representation from the video. UrbanIR jointly infers shape, albedo, specularity, visibility, and sun and sky illumination _from a single video of unbounded outdoor scenes_ with _unknown lighting_. Inferring these _inverse graphics_ maps is challenging because doing so is ill-posed - there isn't enough of the right kind of data to recover canonical inferences, so errors are guaranteed. Errors in inverse graphics inference can result in strong rendering artifacts, including NeRF "floaters" casting shadows and shadow boundaries preserved in albedo maps. Our goal is realistic rendering (rather than canonical inference), and UrbanIR uses novel losses to control these and other sources of error. Further, UrbanIR uses a novel loss to make very good estimates of shadow volumes in the original scene. The resulting representations facilitate controllable editing, delivering photorealistic free-viewpoint renderings of relit scenes and inserted objects, as demonstrated in Fig. 1. UrbanIR combines monocular intrinsic decomposition and inverse rendering with key innovations to control error in renderings. UrbanIR uses videos from cameras mounted on cars (in contrast to many views of the same points in typical NeRF-style estimation). As a result, standard methods produce poor geometry estimates (for example, roofs) and numerous "floaters". Our key contributions are: * We use novel losses to control errors in geometric estimation and show significant improvements in the rendered images over alternative methods. * We use a novel visibility rendering procedure to ensure consistency between detected shadows and scene geometry, significantly improving predicted shadows. * We use monocular estimates of surface normal and shadows to supervise neural fields, and show that these estimates improve inverse graphics estimates. 
## 2 Related Works Inverse Graphicsinvolves inferring illumination and intrinsic properties of a scene. It is difficult to achieve accurate and reliable solutions, and there is much reliance on priors [27, 20, 21, 2, 70, 1, 52, 43, 65] or on managed lighting conditions [19, 1, 15, 19, 1, 68], known geometry [51, 28, 11, 25], or material simplifications [75, 42, 70]. Recent methods use deep learning techniques to reason about material properties [37, 38, 39, 67, 73, 47]. Models trained on synthetic data [34] or pair-wise annotated data [3] have shown promising results. Some approaches focus on learning to predict monocular properties, such as albedo or shading, as demonstrated in several works [55, 14]. Others learn neural representations of materials and illumination [39, 32, 31, 29, 30]. In line with these methods, our proposed approach also leverages monocular cues, such as shadows and surface normals. In contrast, we combine learning-based monocular cues and model-based relightable NeRF optimization to infer the scene's intrinsic properties and illumination. Reliphable Neural Fieldsare now an active topic of research. Neural fields can capture complex and non-parametric scene structures by learning implicit representations, enabling more flexible and accurate modeling of geometry and producing visually realistic renderings. Relightable neural radiance field methods [69, 6, 72, 5, 46, 18, 61, 67] aim to factor the neural field into multiple intrinsic components and leverage neural shading equations for illumination and material modeling. These methods allow for the realistic and controllable rendering of scenes with varying lighting conditions and materials. However, most relightable NeRF methods focus on objects with surrounding views or small bounded indoor environments. There are two notable exceptions: NeRF-OSR [49], which assumes multiple lighting sources for decomposition, and Nerf meet explicit geometry [61], which either uses multiple lighting or exploits depth sensing, such as LiDAR. In contrast, our proposed approach only requires a single video captured under the same illumination, making it more applicable to a broader range of scenes. Differentiable rendering techniques enable gradient propagation throughout the entire forward rendering process, making inverse graphics tasks more flexible and convenient. There are fast but not physically based rasterization-based methods as they assume Lambertian or simple lighting models [36, 7, 47]. Furthermore, most of these methods are based on meshes and are suitable for object-level rendering, and are difficult to apply to large urban scenes. In contrast, we leverage neural radiance fields (NeRF) [44] in conjunction with physically based differentiable rendering techniques. Shadow modelingusing images poses a significant challenge. Methods that are trained to produce cast shadows from images [60, 35, 71] work for very specific objects (pedestrians, cars, etc) and do not generalize beyond training categories. Some works have also employed deep learning techniques to detect and remove shadows from 2D images [16, 17, 59]. However, relying on 2D shadows alone does not fulfill the requirements of inverse graphics, as it requires modeling the full 3D geometry, scene properties, and ensuring temporal consistency. Model-based optimization methods have been used to infer shadows, including expensive multiple-bounce ray tracing or explicit geometry-based shadow casting, which relies on accurate scene geometry [57, 26, 63]. 
Other approaches utilize visibility fields to model shadows, but they often struggle to provide consistent shadows in relation to the underlying geometry [56, 64, 49, 74]. In contrast, our method combines the strengths of both learning-based monocular shadow prediction and removal and model-based inverse graphics. By blending these approaches, we achieve realistic, controllable, and consistent visibility representations that align with the underlying scene properties and the image observation. This combination allows us to overcome the limitations of relying solely on 2D shadows or visibility fields alone, offering a more comprehensive and accurate solution for modeling shadows in inverse graphics. ## 3 Method UrbanIR takes as input multiple frames of video of a fixed scene under fixed illumination; the camera moves, and its motion is known. Write \(\{I_{i},E_{i},K_{i}\}\), where \(I_{i}\in\mathbb{R}^{H\times W\times 3}\) is the RGB image; \(E_{i}\in\text{SE}(3)\) is the camera pose; and is camera intrinsic matrix. We produce a neural field model that can be viewed from _novel camera viewpoints_ under _novel lighting conditions_. We do so by constructing a neural scene model that encodes albedo, normal, transmittance, and visibility in a unified manner (Sec. 3.1). This model is rendered from a given camera pose with given illumination using an end-to-end differentiable volume renderer (Sec. 3.2). The inference is by optimization of all properties jointly (Sec. 3.3). Applications include changing the sun angle, day-to-night transitions, and object insertion (Sec. 3.4). Fig. 3 provides an overview of our proposed inverse graphics and simulation framework. ### Relightable Neural Scene Model The scene representation is built on Instant-NGP [45, 48], a spatial hash-based voxel NeRF representation. Instant-NGP offers numerous advantages, including: low memory consumption; high efficiency in training and rendering; and compatibility with expansive outdoor scenes. Write \(\mathbf{x}\in\mathbb{R}^{3}\) for position in 3D, \(\mathbf{d}\) for query ray direction, \(\theta\) for learnable scene parameters; NeRF models, including Instant-NGP, learn a radiance field \(\mathbf{c},\sigma=F(\mathbf{x},\mathbf{d};\theta)\), where \(\mathbf{c}\in\mathbb{R}^{3}\) and \(\sigma\in\mathbb{R}\) represent color and opacity respectively. In contrast, UrbanIR learns a model of intrinsic scene attributes. Write albedo \(\mathbf{a}\), surface normal \(\mathbf{n}\), semantic vector \(\mathbf{s}\), and density \(\sigma\); then UrbanIR learns: \[(\text{appearance},\text{geometry})=(F_{a}(\mathbf{x};\theta_{a});F_{g}( \mathbf{x},\theta_{g}))=F(\mathbf{x};\theta) \tag{1}\] where \(\theta=\{\theta_{g},\theta_{a}\}\) are learnable parameters. The appearance module \((\mathbf{a},\mathbf{n},\mathbf{s})=F_{a}(\mathbf{x},\theta_{a})\) and the geometry module \(\sigma=F_{g}(\mathbf{x},\theta_{g})\) are each linked to a separate feature hash table, and an individual MLP header is used to encode each attribute within the semantic module. In contrast to current relightable outdoor scene models that demand coupled explicit geometry [49, 62], our scene model is purely neural field-based, providing both compactness and consistency. The lighting modelis a parametric sun-sky model, after [22]. This encodes outdoor illumination as: \[\mathbf{L}=\{(\mathbf{L}_{\text{sun}},\psi_{\text{sun}},\phi_{\text{sun}}), \mathbf{L}_{\text{sky}}\}. 
\tag{2}\] The sun model is a directional 5-DoF representation, encoding sun color \(\mathbf{L}_{\text{sun}}\) along with the azimuth and zenith \(\psi_{\text{sun}},\phi_{\text{sun}}\). The \(\mathbf{L}_{\text{sky}}\) model is represented as a 3-DoF ambient color. Although simple and low-dimensional, this minimalist representation has proven to be highly effective in modeling various outdoor illumination effects [22]. ### Rendering Rendering a query ray \(\mathbf{r}\) through a pixel takes two steps: first, we compute the intrinsics that project to that pixel; and second, we shade the pixel using intrinsics and illumination model, yielding: \[\mathbf{C},W=\texttt{Shade}(\texttt{Intrinsics}(\mathbf{r};\theta_{a}, \theta_{g}),\mathbf{x},\mathbf{L}) \tag{3}\] where \(\mathbf{L}\) is the illumination model, \(\mathbf{C}\) is the final RGB color and \(W\) is the alpha transparency. Intrinsare obtained by volume rendering. We accumulate predictions from \(F(\cdot;\theta)\) along the query ray. Multiple points are sampled along the ray, and intrinsics at the query Figure 2: **Rendering Pipeline.** UrbanIR retrieves scene intrinsics (normal \(N\), semantics \(S\), albedo \(A\)) from camera rays, and estimate visibility \(V\) from tracing rays to the light source. Shading model computes diffuse and specular reflection with function \(G(N,S,\mathbf{l}_{\text{sun}},\mathbf{L}_{\text{sun}})\), and add sky light \(\mathbf{L}_{\text{sky}}\) for final shading map. Final rendering is computed by multiplying shading and albedo \(A\). Please refer to Eq. 5 for more details. pixel are obtained by volume rendering [23, 44] along the ray. In particular, albedo \(\mathbf{A}\), normal \(\mathbf{N}\) and semantics \(\mathbf{S}\) are predicted as: \[\mathbf{A}=\sum_{i=1}^{N}w_{i}\mathbf{a}_{i},\quad\mathbf{N}=\sum_{i=1}^{N}w_{i} \mathbf{n}_{i},\quad\mathbf{S}=\sum_{i=1}^{N}w_{i}\mathbf{s}_{i}, \tag{4}\] where \(w_{i}=\text{exp}(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{j})\left(1-\text{exp}(- \sigma_{i}\delta_{i})\right)\) is alpha-composition weight, \(\delta_{i}=t_{i}-t_{i-1}\), and the intrinsic attributes follow from the neural scene model, so \((\mathbf{a}_{i},\mathbf{n}_{i},\mathbf{s}_{i},\sigma_{i})=F_{\theta}(\mathbf{ x}_{i})\). Shadingis by a local shading model (cf Blinn-Phong [4]) that incorporates sun and sky terms. We assume the sky is visible at every point, and compute \[\mathbf{C} =\mathbf{A}\odot(\text{sun}+\text{sky}) \tag{5}\] \[=\mathbf{A}\odot(\mathbf{L}_{\text{sun}}\left[\phi(\mathbf{N}, \mathbf{l}_{\text{sun}})\mathbb{V}(\mathbf{x},\text{sun})\right]+\mathbf{L}_{ \text{sky}})\] where \(\mathbf{x}\) is an estimate of the 3D position of the point being shaded (below), \(\phi(\mathbf{N},\mathbf{l}_{\text{sun}})=\text{max}(\mathbf{N}\cdot\mathbf{l}_ {\text{sun}},0)\) is the cosine foreshortening at the surface, \(\mathbf{l}_{\text{sun}}\) is the unit vector toward the sun (derived from \(\psi_{\text{sun}},\phi_{\text{sun}}\)). The visibility \(V(\mathbf{x},\text{sun})\) is \(1\) if \(\mathbf{x}\) can see the sun, and \(0\) otherwise. This shading model is capable of producing a realistic appearance with shadows in accordance with varying lighting conditions. The model can readily be extended. Accurate visibilityestimates are essential for obtaining realistic-looking shadows. Modeling the visibility of the sun with an MLP head (as in [74, 72]) is impractical because we need to change the sun's position but can learn from only one position. 
An alternative is to construct an explicit geometry model to cast shadows, but this model might not be consistent with the other neural fields, and imposing consistency is difficult. Instead, we first compute an estimate \(\mathbf{x}\) of the 3D point being shaded, then estimate a smoothed \(V(\mathbf{x},\text{sun})\) We obtain \(\mathbf{x}\) by volume rendering depth (so substitute \(\hat{t}=\sum w_{i}t_{i}\) into the equation for the ray being rendered). Now to check whether \(\mathbf{x}\) is visible to the light source, we evaluate the transmittance along the ray segment between \(\mathbf{x}\) and the light source using volume rendering, obtaining: \[V(\mathbf{x},\text{sun})=\text{exp}\left(-\sum_{i}\sigma_{i}(\mathbf{x}_{i}) \delta_{i}\right)\ \ \text{where}\ \ \ \mathbf{x}_{i}=\mathbf{x}+t_{i}\mathbf{l}_{\text{sun}} \tag{6}\] Lower transmittance along a ray from a surface point to a light source suggests fewer obstacles between the point and the light source. Eq. 6 establishes a strong link between transmittance, lighting, and visibility fields, which we use in training. In particular, a point in a training image that is known to be shadowed (resp. out of shadow) should have large (resp. small) accumulated transmittance. This constraint adjusts quite distant geometry during training. Figure 3: **Training Pipeline.** UrbanIR retrieves scene intrinsics with volume rendering from camera rays, which is guided by semantic and normal priors. Transmittance along tracing rays are supervised with shadow masks. Shading model (illustrated in Fig. 2) is performed **with** and **without** visibility term, and enforce reconstruction loss with original and deshadowed images, respectively. Please refer to Section 3.3 for more details. ### Inverse graphics We train the scene model \(F(\cdot;\theta)\) (Eq. 1) and the lighting model \(\mathbf{L}\) (Eq. 2) jointly using a loss: \[\mathtt{argmin}_{\theta,\mathbf{L}}\mathcal{L}_{\text{render}}+\mathcal{L}_{ \text{debahow}}+\mathcal{L}_{\text{visibility}}+\mathcal{L}_{\text{normal}}+ \mathcal{L}_{\text{semantics}}, \tag{7}\] where individual loss terms are described below. The rendering lossmeasures the agreement between observed images and images rendered from the model using the training view and lighting, yielding \(\mathcal{L}_{\text{render}}=\sum_{\mathbf{r}}\|\mathbf{C}_{\text{gt}}(\mathbf{ r})-\mathbf{C}(\mathbf{r})\|_{2}^{2}\), where \(\mathbf{C}\) is rendered color per ray, as defined in Eq. 3, and \(\mathbf{C}_{\text{gt}}\) is the observed "ground-truth" color. Minimizing the rendering loss ensures our scene model can reproduce observed images. _The deshadowed rendering loss_ forces shadow effects out of the estimated albedo. In particular, we compute a shadow-free version of an image using an off-the-shelf shadow detection and removal network [16, 8] to obtain \(\mathbf{C}_{\text{deshadow}}\). We then render that image from the model using the training view and lighting, but assuming that every point can see the sun (equivalently \(V(\mathbf{x},\text{sun})=1\) for every \(\mathbf{x}\)). This yields \(\mathbf{C}^{\prime}(\theta)\). We then measure the agreement between the two to obtain \(\mathcal{L}_{\text{debahow}}=\sum_{\mathbf{r}}|\mathbf{C}_{\text{debahow}}- \mathbf{C}^{\prime}(\theta)|^{2}\). The combination of this loss and the original rendering loss directly gauges how the visibility map influences rendering, and helps disentangling albedo and shadows. The visibility lossexploits shadow detection to improve geometry estimates. 
A pixel that is known to be in shadow must be at a point that cannot see the sun, so constraining geometry along a ray from that pixel to the sun. This loss could be computed by simply comparing visibility \(V(,\text{sun};\theta)\) with the shadow masks used for \(\mathcal{L}_{\text{deshadow}}\). However, there are challenges: first, computing visibility requires another volume rendering per sample point; second, back-propagation through volume rendering, shading, and visibility computation forms a long, non-linear gradient chain, and optimization becomes difficult. Instead, we construct an intermediate "guidance" visibility estimate \(V_{i}(\mathbf{x};\theta_{i})\) which is an MLP head trained to reproduce the shadow masks, and compute \[\mathcal{L}_{\text{visibility}}=\sum_{\mathbf{r}\in\mathcal{R}}\text{CE}\left(M (\mathbf{r}),V_{i}(\mathbf{r};\theta_{i})\right)+\text{CE}\left(V(\mathbf{r}; \theta),V_{i}(\mathbf{r};\theta_{i})\right), \tag{8}\] where \(M(\mathbf{r})\) is the shadow mask at pixel \(\mathbf{r}\),, and \(\text{CE}(.,.)\) is a cross-entropy loss. Here the first term forces the (relatively easily trained) \(V_{i}\) to agree with the shadow masks, and the second forces \(V\) to agree with \(V_{i}\). The normal lossis computed by comparing results \(N_{\text{gt}}\) obtained from an off-the-shelf normal estimator [12, 24] to the output of the normal MLP. Recall the camera is known for training scenes and write \(\mathbf{r}\) for the pixel corresponding to 3D point \(\mathbf{x}(\mathbf{r})\). An alternate estimate of the normal follows from the density field: \(\hat{\mathbf{n}}(\mathbf{r})=-\frac{\nabla\sigma(\mathbf{x})}{\|\nabla\sigma( \mathbf{x})\|}\). Then the normal loss is: \[\mathcal{L}_{\text{normal}}=\sum_{\mathbf{r}\in\mathcal{R}}\left(\|N_{\text{gt }}(\mathbf{r})-N(\mathbf{r})\|^{2}+\|\mathbf{n}(\mathbf{x}(\mathbf{r}))-\hat{ \mathbf{n}}(\mathbf{x}(\mathbf{r}))\|^{2}\right). \tag{9}\] We also adopt normal regularization from Ref-NeRF [58], producing a better density field. The semantic lossis computed by comparing predicted semantics \(\mathbf{s}\) with labels in the dataset [33]. We use an additional loss to encourage high depth values in the sky region, yielding: \(\mathcal{L}_{\text{semantics}}=\sum_{\mathbf{r}\in\mathcal{R}}\text{CE}\left(S _{\text{gt}}(\mathbf{r}),S(\mathbf{r})\right)-\sum_{\mathbf{r}\in\text{sky}}D (\mathbf{r})\). ### Applications Because we recover intrinsics, we can render the UrbanIR model with whatever source model appeals. Natural uses are showing scenes with different sun configurations and simulating nighttime. _Outdoor relighting_ proceeds by simply adjusting lighting parameters (position or color of the sun; sky color) then re-rendering the scene using Eq. 3. Additionally, we utilize semantics to interpret specular surfaces (cars) and emulate their reflectance during the simulation process. Simulating nighttimeproceeds by defining a spotlight model for headlights and street lights, then illuminating with that model. The spotlight we used is given by the center \(\mathbf{o}_{L}\in\mathbb{R}^{3}\) and direction \(\mathbf{d}_{L}\in\mathbb{R}^{3}\) of the light. 
This spotlight produces a radiance at \(\mathbf{x}\) given by \[\mathbf{L}_{\text{diffuse}}^{\text{spot}}(\mathbf{x})=\frac{1}{\|\mathbf{o}_{L }-\mathbf{x}\|^{2}}\left(l\cdot\mathbf{d}_{L}\right)^{k},l=\frac{\mathbf{o}_{L }-\mathbf{x}}{\|\mathbf{o}_{L}-\mathbf{x}\|}, \tag{10}\] Intensity of spot light is brightest on the central ray \(\mathbf{r}(t)=\mathbf{o}_{L}-t\mathbf{d}_{L}\), and decays with distance from the ray \(\mathbf{r}(t)\) and angle, which is modulated with constant \(k\). _Object insertion_ proceeds by a hybrid strategy. We first cast rays from the camera and estimate ray-mesh intersections [10]. If the ray hits the mesh and distance is shorter than volume rendering depth, the albedo \(A(\mathbf{r})\), normal \(N(\mathbf{r})\) and depth \(D(\mathbf{r})\) are replaced with the object attributes. In the shadow pass, we calculate visibility from surface points to the light source (Eq. 6), and also estimate the ray-mesh intersection for the tracing rays. If the rays hit the mesh (meaning occlusion by the object), the visibility is also updated : \(V(\mathbf{r})=0\). With updated \(A(\mathbf{r}),N(\mathbf{r}),V(\mathbf{r})\), shading (Eq. 5) is applied to render images with virtual objects. Our method not only casts object shadows in the scene but also casts _scene shadows_ on the object, enhancing realism significantly. Figure 4: **Rendering and relighting comparison**. We show a set of two scenes comparing different methods. Each column shows a different sun position, the first column showing original images. For each set and from top to bottom, we have (a) NeRF-OSR [50] (b) COLMAP [54] + Blender [9] (c) Luma AI Unreal Engine Plugin [13] (d) Mesh-based visibility (Mesh-vis) and (e) UrbanIR (Ours). NeRF-OSR achieves good scene reconstruction but lacks the ability to perform relighting with a single lighting condition as it assumes access to images captured under multiple lighting conditions. In COLMAP, Shadows are present in the scene; however, they are “baked-in” and cannot be manipulated or relit independently. Luma has the same difficulty. For Mesh-based visibility, weak observations of the scene geometry result in poor quality visibility estimation during direct observation-based reconstruction. In contrast, for our approach, UrbanIR, our visibility optimization enables realistic and controllable relighting effects. ## 4 Experiment Results ### Datasets We evaluate UrbanIR on the KITTI-360 dataset [33], which contains numerous video sequences of urban scenes. We pick 7 non-overlapping sequences for the experiments. These cover various light directions, vehicle trajectories, and layouts of buildings and vegetation. The data include RGB images from stereo cameras, semantic labels, camera poses, and RTK-GPS poses. We use Ominidata [12, 24] for monocular normal estimation from images, and use MTMT [8], ShadowFormer [16] to detect shadow masks and partially remove shadow patterns in the training images. ### Baselines We compare UrbanIR with the following methods: NeRF-OSR [49] is recent work for outdoor scene reconstruction and relighting. We use the open-source project provided by the author for running this baseline. This method represents lighting as spherical harmonics parameters. For a fair comparison, we rotate the spherical vectors to simulate different light conditions. Mesh + Blender [9]We compare our method with an explicit geometry-based baseline. 
For this, we utilize COLMAP for dense scene reconstruction [53, 54] and import the resulting scene into Blender [9] for relighting simulation. Luma AI Unreal Engine Plugin [41, 13]Luma AI is a mobile app that employs NeRF-based reconstruction. With its Unreal Engine plugin [41, 13], models constructed by Luma AI using NeRF can be imported into the Unreal Engine. This allows for relighting and object insertion within the engine. Luma AI performs its own custom scale-normalization, and the camera extrinsics are not readily accessible. As a result, we manually adjust its viewpoint transformation to align with other baselines. This adjustment could lead to minor misalignments in the qualitative results. Mesh-based visibilityThe recent work FEGR [62] explores the relighting of outdoor scenes under singular or multiple illumination sources. However, due to the absence of open-source access to their method, we implement our own baseline model that incorporates similar visibility modeling strategies. Specifically, we employ the marching cubes technique [40] to extract a mesh from our model, excluding our proposed visibility optimization (as per Eq.8). In alignment with the shadow mapping approach adopted by FEGR[62], we cast shadows by estimating two intersections: the first between the camera rays and the mesh, and the second by tracing rays from the surface to the light source. ### Relighting Quality Relighting under various sunlight conditions are evaluated and compared in Fig. 4. NeRF-OSR [49] cannot simulate shadows under novel light conditions. While Blender [9] and Luma Field [41, 13] can change the lighting parameters explicitly, they either cast bad shadows due to incomplete geometry or do not cast new shadows at all; further, the original shadow remains unchanged in the image. Mesh-based visibility generates different shadows according to light conditions, but the mesh on the edges of and outside the training views is poor because there are few observations. This leads to noisy and incomplete shadows on the ground. Figure 5: **Nighttime rendering. The scene is transformed from daytime (top) to night-time (bottom) by introducing new light sources: a headlight on a car and a street lamp. The dark shadows with sharp boundaries that are present during the daytime (top) are successfully removed, resulting in a more evenly illuminated scene during the nighttime rendering (bottom).** UrbanIR synthesizes sharp shadows and varying surface shading following sun direction; further, the original scene shadows are largely absent. This allows synthesizing images at night (Fig. 5) by inserting car headlights and streetlights, without distracting effects from the original shadows. ### Decomposition Quality We compare with NeRF-OSR [49] and RelightNet [66] in Fig. 6. NeRF-OSR reconstructs a noisy normal map, and cannot capture the scene shadows in their shadow generation model, leaving dark shadow patterns in the albedo field. RelightNet predicts better normals but still bakes shadows into the albedo. UrbanIR generates clean and sharp albedo and normal fields and also produces a geometry-aware shadow field from the input video sequence. In Fig. 7, we compare the learned albedo with the output of shadow removal network [16]. While ShadowFormer [16] recovers albedo well on the ground, but it cannot estimate the correct albedo for the building and vehicles. 
Our optimization process leverages the deshadowed images as guidance (\(\mathcal{L}_{\text{deshadow}}\)) and further recovers a clean albedo field on most scene surfaces. ### Object Insertion The object insertion pipeline is described in Sect. 3.4. In Fig. 8, we compare with baselines by inserting a yellow cube and moving it along the road. Mesh+Blender cannot synthesize complete geometry and shadow, and the LumaAI Unreal plugin cannot cast scene shadows on the object. Without visibility optimization in row (C), the scene shadow on objects is noisy. Our complete model has an accurate shadow volume so that shadows cast on the object by the environment are well represented. More complex objects appear in Fig. 9. ### Ablation Fig. 10 shows an ablation study. (A) **No deshadow loss \(\mathcal{L}_{\text{deshadow}}\)**: the model does not recover the albedo in the shadow region accurately, because reconstruction is ill-posed under single illumination. (B) **No intermediate visibility MLP**: if the visibility map \(V(\mathbf{r})\) is supervised directly with shadow masks, the optimization is unstable and the model cannot decompose albedo successfully. (C) **No visibility loss \(\mathcal{L}_{\text{visibility}}\)**: the density field outside the training views is not constrained and cannot produce accurate and sharp shadows. (D) **Mesh-based visibility**: here sun visibility is calculated from a mesh, leading to sharp but inaccurate shadows. Figure 6: **Intrinsic Decomposition Quality Comparison. (A) NeRF-OSR [50] (B) RelightNet [66] (2D-based). Note weak normals and shadow maps from NeRF-OSR; a tendency for RelightNet to make albedo _and_ shadow dark in shadow regions; and dark shadows in NeRF-OSR albedo.** Figure 7: **Shadow Removal in Albedo. Our albedo representation is guided by the shadow removal model [16], which performs well on the ground but cannot remove shadows on buildings and vehicles. Our method correctly recovers albedo under shadow thanks to multi-view supervision and joint optimization.** ### Limitations While UrbanIR provides high-quality intrinsic decomposition, relighting, and insertion, it relies on multiple 2D priors during optimization. On occasion, shadow patterns cannot be removed completely in the albedo field, and appear in images. Visibility optimization refines only the geometry along the light direction in training, so that large changes in sun direction can lead to poor shadows when poor geometry estimates cast shadows. Currently, we assume that the light direction is known; we leave joint optimization of more complex light models and more general material parameters for future work. ## 5 Conclusion We have introduced UrbanIR (Urban Scene Inverse Rendering), a novel scene model that enables realistic renderings of a scene from various viewpoints under new lighting conditions, using video as a basis. This model jointly determines shape, albedo, visibility, as well as sun and sky illumination from a single video of unbounded outdoor scenes with unknown lighting. Key innovations include a unique visibility loss function, facilitating highly accurate shadow volume estimates within the original scene. Consequently, this allows for precise editing control, ultimately providing photorealistic renderings of relit scenes and seamlessly inserted objects from any viewpoint. Figure 8: **Dynamic Object Insertion with Shadow Volume. Our method produces accurate estimates of shadow volumes where others cannot.
This is exposed by inserting a simple object into the scene, then looking at shadows cast onto that object. Visibility optimization makes an important contribution to accuracy. (A) COLMAP dense reconstruction [53, 54] + Blender [9] (B) Luma Unreal Engine Plugin [13]. (C) Ours without visibility optimization.** Figure 9: **Diverse Virtual Object Insertion. Accurate shadow volume produces good looking object insertions for complex CGI objects.**
2305.10348
Data-Driven Modeling of Directly-Modulated Lasers
The end-to-end optimization of links based on directly-modulated lasers may require an analytically differentiable channel. We overcome this problem by developing and comparing differentiable laser models based on machine learning techniques.
Sergio Hernandez Fernandez, Christophe Peucheret, Ognjen Jovanovic, Francesco Da Ros, Darko Zibar
2023-05-15T12:38:24Z
http://arxiv.org/abs/2305.10348v1
# Data-Driven Modeling of Directly-Modulated Lasers ###### Abstract The end-to-end optimization of links based on directly-modulated lasers may require an analytically differentiable channel. We overcome this problem by developing and comparing differentiable laser models based on machine learning techniques. ## Introduction Directly-modulated lasers (DMLs) are at the core of short-reach communication links thanks to their efficiency in terms of power and cost[1, 2]. Their potential in terms of transmission distance and line rate is however hindered by their characteristics, such as limited modulation bandwidth, frequency chirping and low extinction ratio. Equalization is an effective method to compensate the DML-introduced distortion, but previous solutions have relied on experimental data to drive their models[3, 4]. Further throughput improvements could be achieved by jointly optimizing the transmitter and receiver using end-to-end (E2E) learning, a method that has gained traction as an optimization approach for optical communication systems[5, 6]. This approach usually relies on gradient-based optimization algorithms, that require a differentiable channel model[7]. However, the large-signal DML dynamics are governed by nonlinear differential equations for which analytical differentiation cannot be performed[8] making it challenging to have a differentiable channel. Alternative optimization methods based on reinforcement learning[9] and gradient-free optimization[10] have been proposed, but they could be often impractical due to their computational overhead[11]. A locally-accurate DML surrogate channel enables E2E learning and allows simultaneous optimization of several functions within the communication system[12]. Previous work using Transformer-based modeling of communication channels has proven the potential of such approaches in the inference of complex dynamical systems, yielding performance gains compared to feed-forward networks and Long-Short Term Memory (LSTMs)[13, 14]. In this paper, we propose the use of machine learning approaches to learn an accurate differentiable data-driven laser model. The proposed Transformer method is compared to three other common function estimators in dynamical system analysis (Volterra series, time-delay neural networks (TDNNs) and LSTMs). The Transformer model is able to outperform its counterparts while maintaining comparable training and testing time. ## Data-driven DML modeling The overall goal is to emulate the response of any DML laser as closely as possible based only on I/O sequences, as shown in Fig. 1. Transformers are machine learning structures designed for the parallel processing of numerical sequences, avoiding the use of recurrent elements. In this work, we propose the use of Convolutional-Attention Transformers (CATs)[15]. CATs make use of convolutions to model the dependencies between temporal sequences. The advantages of this approach are threefold: (i) it limits the amount of past sequence samples used in the prediction, (ii) it is able to capture waveform patterns rather than individual relations between samples; (iii) it takes into account the order of the samples. The training data acquisition setup is based on numerical simulations obtained from the general laser rate equations[16] but varying the symbol rate of the driving signal. The solution to the rate equations is obtained using a 5th-order Runge-Kutta (RIK4,5) solver. 
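The rate-equation solve that produces the training targets can be reproduced in spirit with an off-the-shelf adaptive Runge-Kutta integrator. The sketch below uses SciPy's RK45 on generic single-mode laser rate equations; the exact form of the equations and every parameter value are order-of-magnitude placeholders of our own, not numbers taken from this work or from Ref. [16].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic single-mode rate equations for carrier density N and photon density S.
# All constants below are placeholders chosen only for plausible orders of magnitude.
q, vol = 1.602e-19, 1e-16          # electron charge [C], active volume [m^3]
tau_n, tau_p = 1e-9, 2e-12         # carrier / photon lifetimes [s]
g0, N_tr = 3e-12, 1e24             # gain coefficient [m^3/s], transparency density [m^-3]
eps, Gamma, beta = 1e-23, 0.3, 1e-4

def rate_equations(t, y, current):
    N, S = y
    G = g0 * (N - N_tr) / (1.0 + eps * S)            # gain with compression
    dN = current(t) / (q * vol) - N / tau_n - G * S
    dS = Gamma * G * S - S / tau_p + Gamma * beta * N / tau_n
    return [dN, dS]

drive = lambda t: 30e-3 + 20e-3 * (t > 1e-9)         # bias current plus a modulation step [A]
sol = solve_ivp(rate_equations, (0.0, 5e-9), y0=[2e24, 1e20],
                args=(drive,), method="RK45", max_step=1e-12)
output_power = sol.y[1]                              # optical output is proportional to S
```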
The solution from the solver is then used as ground truth for the CAT, establishing the relation between the input modulation current and the optical output (power) of the laser. For the data-driven model to be accurate throughout a wide variety of scenarios, the input data must contain a wide range of waveforms and amplitudes, thus providing an exhaustive picture of the behaviour of the laser. This was addressed by switching between two kinds of pulse shapes: super-Gaussian pulses and random pulses, where the latter are sampled from a folded normal distribution \(\mathcal{N}(0.5,1)\). The \(e^{-2}\) temporal full width \(T_{0}\) and the order \(n\) of the super-Gaussian pulses are stochastic too, following the folded \(\mathcal{N}(0.25T_{sym},T_{sym})\) and uniform \(\mathcal{U}(1,6)\) distributions, respectively. The amplitude of the pulses is modulated according to equiprobable 4PAM symbols. The pulses are then min-max normalized and low-pass filtered to avoid out-of-band leakage. The pulse shaping is re-randomized every 8 symbols (with 32 samples per symbol) until completing a 1024-sample sequence of mixed pulse shapes. The training dataset includes \(2^{13}\) sequences for a total of \(2^{23}\) samples, while the validation set is composed of \(2^{17}\) samples. The proposed CAT model is based on a decoder-only structure. The network is built around 3 blocks: learned positional embeddings (LPEs), convolutional attention sublayers and 2-layer multi-layer perceptrons (MLPs) with ReLU hidden activation, as shown in Fig. 2. The implemented residual connections are based on the RK2 ordinary differential equation (ODE) Transformer structure [17], and every sublayer output is then layer-normalized. The reduction of the hidden dimensionality is handled by a linear layer. For the sake of comparison, three additional models have been studied, namely a 2nd-order Volterra filter with 16-sample memory, a TDNN and an LSTM [18]. The corresponding values for each of the network hyperparameters are gathered in Table 1. ### Numerical results Due to the nature of the laser, the distortion on the optical waveform increases with the symbol rate \(R_{s}\). This effect becomes especially prominent at \(R_{s}\) higher than the relaxation frequency of the laser, \(f_{R}\). It is interesting to focus on these frequencies, where link optimization can have the highest impact. The models were therefore sequentially trained and tested under several symbol rates expressed as fractions of \(f_{R}\), corresponding approximately to \(\{0.1,0.25,0.5,0.75,1,1.25\}\cdot f_{R}\). In every case, the training is based on an Adam optimizer with default decay rates \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and the Normalized Mean Squared Error (NMSE) as a loss function, expressed as NRMSE (taking its square root) for easier interpretation. The laser phase and intensity noise are neglected to avoid setting a lower bound on the MSE performance. All models have been trained for 400 epochs, but only the best test loss is further considered to avoid overfitted results. The main attribute of a time-series prediction model is its ability to learn I/O representations. Fig. 3 compares the output of the LSTM and CAT models to the RK4,5 solution at \(R_{s}\approx f_{R}\). Although both figures show high model accuracy, the LSTM struggles to capture the first few samples of the sequence. This is probably due to the high reliance of LSTMs on their memory mechanism, which limits their performance when little temporal context is provided.
A similar trend appears in Fig. 4, where the NRMSE is shown as a function of the symbol rate. Throughout the analyzed bandwidth, the CAT outperforms its counterparts and falls under the \(10^{-2}\) mark that sets the 1% error threshold. The trend of the 4 curves hints at the correlation between \(R_{s}\) and the waveform distortion introduced, i.e., as the symbol duration becomes shorter, it becomes increasingly difficult for all models to match the input and output sequences. It must be noted that, even though the CAT has more training parameters, its parallelization potential makes its training and inference time per sequence comparable to the LSTM. This can be seen in Fig. 5, where the time elapsed to process both the training and testing sequences on an Nvidia A100 GPU is compared. It is also evident how the training and testing time per epoch of all the proposed approaches is at least an order of magnitude faster than the ODE solver generating the training and testing data. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & **CAT** & **TDNN** & **LSTM** \\ \hline \# hidden nodes & 256 & 2048 & 64 \\ \hline \# hidden layers & 2 & 1 & 2 \\ \hline Activ. fun. & ReLU & ReLU & ReLU \\ \hline \# MLP sublay. & 2 & 2 & - \\ \hline Conv. win. length & 19 & 25 & - \\ \hline Embedd. size & 128 & - & - \\ \hline \# attention heads & 8 & - & - \\ \hline \end{tabular} \end{table} Table 1: Model hyperparameters used Figure 1: Block diagram of the system under investigation. Figure 2: Block diagram of the CAT model setup. Looking at the eye diagrams at \(R_{s}\approx f_{R}\) for Gaussian input pulses in Fig. 6, the comparison reveals a more nuanced picture than the NRMSE alone. Even if all 4 models show reasonable convergence compared to the ODE case (with the exception of the TDNN, which was omitted due to its poor performance), the Volterra filter and the LSTM show a consistent performance through the \(2^{10}\) symbols shown, while the CAT seems more sensitive to small variations of position and amplitude in the samples. This could be due to the positional encoding in the model, which alters the input to the network based on the position of the sample, even if its value remains constant. This drawback may however be less relevant in real scenarios, where noisy input data would affect the resulting output waveform to some degree. In terms of capturing the true width of the output pulse, the CAT shows a slightly better tracking than its counterparts, which tend to shorten the modulated pulse duration. ## Conclusions A data-driven differentiable surrogate for directly-modulated lasers was proposed. We show that the Transformer model is able to accurately predict the laser response while maintaining similar inference time compared to other time-series approaches. Our results can enable the joint optimization of directly-modulated systems without relying on experimental data or online gradient approximations. ## Acknowledgements This work was financially supported by the ERC-CoG FRECOM project (no. 771878) and the Villum YIP OPTIC-AI project (no. 29334). Figure 4: Test NRMSE performance of the proposed models Figure 5: Time elapsed (per epoch) by the presented models Figure 3: Comparison between the RK4,5 ground truth and test output sequences of a) LSTM and b) CAT
2309.01190
Exciton migration in two-dimensional materials
Excitons play an essential role in the optical response of two-dimensional materials. These are bound states showing up in the band gaps of many-body systems and are conceived as quasiparticles formed by an electron and a hole. By performing real-time simulations in hBN, we show that an ultrashort (few-fs) UV pulse can produce a coherent superposition of excitonic states that induces an oscillatory motion of electrons and holes between different valleys in reciprocal space, leading to a sizeable exciton migration in real space. We also show that an ultrafast spectroscopy scheme based on the absorption of an attosecond pulse in combination with the UV pulse can be used to read out the laser-induced coherences, hence to extract the characteristic time for exciton migration. This work opens the door towards ultrafast electronics and valleytronics adding time as a control knob and exploiting electron coherence at the early times of excitation.
Mikhail Malakhov, Giovanni Cistaro, Fernando Martín, Antonio Picón
2023-09-03T14:27:53Z
http://arxiv.org/abs/2309.01190v1
# Exciton migration in two-dimensional materials ###### Abstract Excitons play an essential role in the optical response of two-dimensional materials. These are bound states showing up in the band gaps of many-body systems and are conceived as quasiparticles formed by an electron and a hole. By performing real-time simulations in hBN, we show that an ultrashort (few-fs) UV pulse can produce a coherent superposition of excitonic states that induces an oscillatory motion of electrons and holes between different valleys in reciprocal space, leading to a sizeable exciton migration in real space. We also show that an ultrafast spectroscopy scheme based on the absorption of an attosecond pulse in combination with the UV pulse can be used to read out the laser-induced coherences, hence to extract the characteristic time for exciton migration. This work opens the door towards ultrafast electronics and valleytronics adding time as a control knob and exploiting electron coherence at the early times of excitation. Electrons usually move much faster than nuclei and, for this reason, they play a dominant role in the optical response of two- and three-dimensional materials. Thus, manipulating and controlling electronic motion in its natural timescale, well before the lattice has time to respond, may open an unprecedented platform for charge transport and valleytronics based on electron coherence. Nowadays, we have the technology to produce laser pulses as short as several attoseconds (\(10^{-18}\) s), which enables to track and investigate electron dynamics [1, 2]. By using such technology, techniques such as ultrafast absorption spectroscopy have been carried out to observe electron motion in insulators, semiconductors, and semimetals [1, 2, 3, 4, 5, 6, 7], and even in few-layers materials [8]. Attosecond and few-femtosecond pulses not only enable to track electron dynamics, but also to trigger unusual charge dynamics. In particular, real-time investigations of charge migration in molecular systems, induced by such ultrashort pulses, have been systematically reported in the literature for almost a decade, providing an unprecedented understanding of the process and opening the way for new control schemes of chemical reactions (the so-called attochemistry) [9, 10, 11, 12]. Charge migration can be induced by creating a coherent superposition of _bound_ molecular states with a broadband, attosecond, or few-fs pulse. During the free propagation of the system, this coherent superposition induces fast oscillations between the involved states, which, in the case of covering different spatial regions and/or exhibiting quite different electronic properties, may translate into electron transfer from one side of the molecule to another. To our knowledge, similar charge migration processes have not yet been observed in condensed-matter systems, probably because the electrons organize in a quasi continuum of delocalized states (the electronic bands), so that any coherent superposition induced by an ultrashort pulse involves a huge number of states with continuously and smoothly varying electronic properties. In this manuscript, we demonstrate that laser induced charge migration is possible in materials whose optical response is dominated by excitonic interactions. Excitons can be considered as quasi-particles composed of an electron-hole pair bound via Coulomb interaction. 
This interaction can only manifest in systems where screening effects are not dominant, as, e.g., non-metallic two-dimensional (2D) materials, where mobility of the remaining electrons is hampered due to the reduced dimensionality. As a consequence, the optical response of non-metallic 2D materials is almost entirely dominated by excitons [14]. Excitons are usually associated to bound states located within bandgaps. The exciton migration can thus be induced when an ultrashort pulse excites a superposition of those quasi-particle states. In this work, we have performed real-time simulations for monolayer boron nitride (hBN) interacting with an ultrashort UV pulse using our recently developed EDUS approach [13]. hBN displays two valley pseudospins [15, 16] related to its inversion symmetry and electronic structure [17]. We show that the ultrashort pulse enables us to excite a superposition of s- and p- excitons that are localized in different valleys of the reciprocal space (and also different regions in real space). The exciton migration produces the oscillation of excitons from one valley, around the K' point, to the other valley, around the K' point, in about 10 fs (\(10^{-15}\) s). This oscillation is translated into fast beatings in the laser-induced current. Finally, we show that such fast oscillations can be read out by using X-ray attosecond transient absorption spectroscopy (ATAS)[1], which has been successfully used to study other electron dynamics processes in insulators [18; 19; 20]. Numerical simulations of ATAS including excitonic effects are challenging. First, because ultrafast schemes require a theory beyond linear response, as one has to describe the absorption of at least two photons (one from the pump pulse, another one from the probe pulse). Second, because valence excitons as those considered in this work can only be described by correctly accounting for the electron-electron interactions. In this respect, some significant progress has been performed at the level of real-time TDDFT [21] and real-time Green's function based methods [22], and in our numerical implementation of the semiconductor Bloch equations; EDUS [13]. In addition, EDUS reduces the computational cost of including core orbitals and enables to describe x-ray interactions. In previous work [13], we have shown, by explicit comparison with elaborate calculations for single-photon absorption [23; 24; 25], that a reasonable description of the 2p and 2s excitons of hBN can be achieved even at the tight-binding level using a two-band model. Thus, to face the more challenging ATAS scenario, here we have followed a similar approach and used a three-band (two valence + one K-shell) tight-binding model of hBN, see the illustration in Fig. 1a. The core band is flat and belongs to the 1s orbital of N, at an energy of \(E_{ch}=410\) eV with respect to the Fermi level. In brief, we solve the real-time electron dynamics with the EDUS code [26; 13; 27], which consists in evolving the one-electron reduced density matrix in the reciprocal space, see more details in SI. The laser-matter and electron-electron interactions that give rise to excitonic effects are accounted for on an equal footing in the time domain. Electron-electron interactions are taken into account in the dynamical mean-field approximation, which is a reasonable approximation to describe excitons [28]. Via the calculated one-electron density matrix in time, we are able then to obtain the polarization of the system and thence the absorption spectrum. 
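As an aside on that last step, a standard way to obtain an absorption spectrum from a computed time-dependent polarization is through the Fourier transforms of the polarization and of the driving field. The short sketch below is our own generic linear-response post-processing routine (with \(\hbar\) in eV fs), not code from EDUS, and it is only meaningful inside the bandwidth of the exciting pulse.

```python
import numpy as np

def absorption_spectrum(time_fs, polarization, field):
    """Generic linear-response post-processing: A(w) ~ Im[P(w)/E(w)]."""
    dt = time_fs[1] - time_fs[0]
    freq = np.fft.rfftfreq(len(time_fs), d=dt)   # frequencies in 1/fs
    energy_eV = 2.0 * np.pi * freq * 0.6582      # hbar ~ 0.6582 eV*fs
    P_w = np.fft.rfft(polarization)
    E_w = np.fft.rfft(field)
    return energy_eV, np.imag(P_w / E_w)         # valid only where |E_w| is not negligible
```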
In our simulations, an 11.3-fs FWHM pulse centered at a photon energy of 6.14 eV and circularly polarized, depicted in Fig. 1a, interacts with hBN. Several exciton peaks are present in the UV spectrum within the bandgap of \(E_{g}=7.25\) eV, see Fig. 1b. When electron-electron (excitonic) interactions are switched off, the absorption only takes place for photon energies above the energy bandgap \(E_{g}\). When excitonic interactions are included, strong absorption peaks appear within the bandgap. The prominent peaks have s- and p- characters, which present a distinctive distribution in **k** space, see Figs. 1c-e in which the off-diagonal part of the density matrix is represented. The 1s exciton is well-localized around the K points. Note that an opposite handness of the polarization would excite the degenerate 1s exciton that is located at the valley around the K' point [17]. The next peak corresponds to the 2p exciton, which is also degenerate. If we use the same handness for the polarization, then we excite the 2p exciton, which is mainly localized and shows a clear singularity around the K' points. This exciton is hybridized with the 2s exciton, which is key to induce the exciton migration, and shows a small population around the K points. The third peak corresponds to the 2s exciton, which is also degenerate, and our chosen circular polarization excites the K valleys. Because of the hybridization, this exciton also has a 2p component in the K' valleys. We represent the three excitons in real space in Figs. 1f-h. Note that the 2s exciton is well-localized in **k** space and quite spread in real space. The ultrashort pulse is broad enough in frequency to excite both the 2p and 2s excitons. The laser-induced Figure 1: **Two-color excitations in hBN**. a) Illustration of the ultrafast scheme and the possible transitions in hBN. b) The UV absorption spectrum resulting from our real-time simulations using the EDUS code [13]. Dashed blue line represents the independent particle approximation (IPA) calculations of the absorption when no electron-electron interactions are included. The peaks correspond to the 1s, 2p, and 2s excitons at 5.32, 6.14, and 6.35 eV, respectively. c)-e) Distribution in **k** space for the three excitons, obtained by using a long 120-fs pulse resonant to the corresponding exciton peak and circularly polarized light. f)-h) The real-space distribution of the three excitons. current in time, see Fig. 2a, presents some quantum beats due to the coherent excitation of the two excitonic states, see the Fourier transform of the current in Fig. 2b. Those quantum beats will last until the coherence is lost. Electron collisions or electron-phonon couplings may contribute to the dephasing, but here we neglect these effects, which will have a minor impact in such short time scale. If we plot the time evolution of the density matrix in \(\mathbf{k}\) space in a maximum of the beating, see Fig. 2c, and in a minimum of the beating, see Fig. 2g, we observe that it mainly moves from the K to the K' valley in approximately 10 fs. Then it goes back to its initial state in the next 10 fs. Hence, at some points in time (e.g., 38.5 fs) the exciton is well-localized at the K' valley with s-character, while at other times (e.g., 50 fs) is mainly localized at the K valley with an p-character. The period of the oscillation \(\tau\sim 20\) fs is linked to the energy difference \(\Delta E\) of the exciton states through the formula \(\Delta E=2\pi/\tau\). 
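As a quick consistency check of the quoted numbers (our own back-of-the-envelope computation, restoring physical units so that \(\Delta E=2\pi\hbar/\tau\), i.e., \(\tau=h/\Delta E\)): with the 2p and 2s exciton energies of 6.14 eV and 6.35 eV from Fig. 1,

```python
h = 4.135667696   # Planck constant in eV*fs
dE = 6.35 - 6.14  # 2s - 2p exciton splitting [eV]
tau = h / dE      # ~19.7 fs, matching the ~20 fs valley oscillation described above
print(f"beating period = {tau:.1f} fs")
```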
Therefore, the exciton excitation and energies may be a control knob to tailor the migration oscillations in time. In real space, see Figs. 2d,f,h, we observe how the spatial distribution significantly changes when the exciton is localized around the K valleys and moves to the K' valleys. The real-space distribution is mainly a linear superposition of the 2s and 2p distribution given in Figs. 1d-e. For example, one observes that at \(t=38.5\) fs the distribution clearly shows the structure of the 2p, with strong population in the three first neighbours as in the 2s structure. This real-space motion is connected to the calculated beats in the current. Figure 2: **Exciton migration.** a) Time evolution of the current induced by the circularly polarized UV pulse. b) Fourier transform of the laser-induced current. Figs c)-h) show snapshots, at the times indicated in a), of the excitonic distributions in reciprocal and real space. Figure 3: **Laser-induced core excitons.** a) N K-edge absorption spectrum of the attosecond x-ray pulse in the absence of the pump pulse. Note we take as a reference the energy excitation from the 1s N orbital to the Fermi level. b) Distribution of the population in \(\mathbf{k}\) space with no pump excitation. c) N K-edge absorption spectrum of the attosecond x-ray pulse after pump excitation, the time delay corresponding to the times given in Fig. 2. d)-e) Distribution in \(\mathbf{k}\) space for the square modulus of the off-diagonal density matrix of the core-valence part. These results show that coherent population of excitonic states with different degrees of delocalization in the crystal may induce a fast electron/hole motion that controls the current and the valley population in a short time scale. This phenomenon relies on the coherence of the exciton states and therefore it should be observed before the laser-induced coherent dynamics couples to other degrees of freedom. To experimentally observe exciton migration, here we propose to use ATAS. ATAS consists in sending a second laser pulse, the so-called probe pulse, to the system with a certain time delay with respect to the pump pulse, see Fig. 1a, and then in measuring the absorption of the probe pulse as a function of the time delay. In this particular case, we consider an attosecond pulse that excites the K-edge transitions at the nitrogen site, i.e. transitions that promote electrons from the 1s orbitals of nitrogen to the valence/conduction band. In our tight-binging model we include this additional core band, see more details in the SI. The attosecond pulse has a 133-as FWHM duration, see SI, and a photon energy centered around 410 eV. The bandwidth of the pulse is large enough to cover all transitions to the valence and conduction bands. With no pump pulse, the calculated absorption shows a prominent core-exciton peak above the Fermi level, see Fig. 3a. This peak arises from the conduction band, as core electrons cannot be promoted into the fully occupied valence band. The transfer of core electrons to the conduction band is clearly illustrated by the relative population of core and conduction bands shown in Fig. 3b. As can be seen, besides the areas around the K,K' points, the whole reciprocal space is partially populated. This is because the conduction band at those areas is dominated by the 2p orbitals of boron and we are exciting from the 1s orbital of nitrogen, which is very well-localized in space. 
When the UV pulse is also present, holes created in the valence band, either through exciton formation or promotion of electrons to the conduction band, can be refilled by N K-shell electrons excited by the attosecond X-ray pulse, leading to distinct peaks appearing below the Fermi level -see scheme in the inset of Fig. 3c. Our real-time simulations show that the shape and magnitude of these peaks are indeed sensitive to exciton migration, see Fig. 3c. The valence band around the K, K' points is dominated by the 2p orbitals of nitrogen. This enhances the transitions from the 1s to the 2p orbitals of nitrogen. Hence, X-ray excitations are very sensitive to any changes occurring around the reciprocal regions in which the main exciton migration takes place. The exciton migration changes the hole distribution in the valence band with time, so that the refilling in the valence band with core electrons will also depend on time, leading to different structures of the core-exciton peaks that are formed. When the 2p character is dominant, the valence hole distribution is more extended in the reciprocal space and this enables access to more core exciton states. This is clearly shown in Fig. 3d-e, where we represent the off-diagonal part of the density matrix between the core and valence band in reciprocal space at two different pump-probe delays. As can be seen, the distributions giving rise to core excitons look substantially different at different times. Finally, we show in Fig. 4 the calculated ATAS during and after the pump excitation. Interestingly, we observe how the population of core excitons significantly increases after the maximum intensity of the pump pulse, following the population of the produced holes in the valence band. Those peaks are around 1% of the intensity of the main core excitons peaks, see Fig. 3a, and appear in an energy window in which there was no absorption of the probe pulse, enabling a high contrast in an experimental measurement. The peaks at -4.0 and -3.4 eV oscillate Figure 4: **Attosecond transient absorption spectroscopy at the valence band for tracking exciton migration.** a) Pump pulse in time, b) Laser-induced current, and c) ATAS features at the valence band. Dashed vertical lines indicate maxima and minima, and intermediate points, of quantum beats in current. with the exciton migration, but the most striking feature is found in the energy windows between those peaks, in which we observe the emergence and disappearance of core-exciton peaks. This energy window is ideal for transient absorption measurements in order to read out the fast exciton dynamics and link the absorption peaks to the valence-hole distribution as discussed above. In conclusion, we demonstrate the possibility of inducing charge migration in materials whose optical response is dominated by excitons. Using real-time simulations that account both for light-matter and electron-electron interactions, we show how an ultrashort UV pulse induces a superposition of exciton states in hBN that triggers exciton migration between the K and K' valleys. The migration depends on the exciton energies and laser excitation. Furthermore, we show that this dynamics can be read out by performing X-ray ATAS at the nitrogen K edge, which enables to probe the electron/hole density around individual atomic sites. 2D materials offer an ideal platform for controlling the exciton properties via control of layers, substrates, Van der Waals heterostructures, or strain engineering [29]. 
Also, exciton energies can be modified by another pulse via laser-induced Stark shift [30]. Thus, our study not only advances our understanding of exciton dynamics in attosecond science, but it also opens a promising perspective of exploiting exciton migration for developing transport and valleytronics schemes beyond the strong-field regime [31; 32; 30; 33], harnessing ultrafast electronics in two-dimensional materials at the ultimate time scale. ###### Acknowledgements. This publication is based upon work from COST Action AttoChem, CA18222 supported by COST (European Cooperation in Science and Technology). M. Malakhov, G. Cistaro, and A. Picon acknowledge grant ref. PID2021-126560NB-I00 (MCIU/AEI/FEDER, UE), and grants refs. 2017-T1/IND-5432 and 2021-5A/IND-20959 (Comunidad de Madrid through TALENTO program). F. Martin acknowledges the projects PID2019-105458RB-I00 funded by MCIN/AEI/10.13039/501100011033 and by the European Union "NextGenerationEU"/PRTRMICNN programs, and the "Severo Ochoa" Programme for Centres of Excellence in R&D (CEX2020-001039-S). Calculations were performed at the Centro de Computacion Cientifica de la Universidad Autonoma de Madrid (FI-2021-1-0032), Instituto de Biocomputacion y Fisica de Sistemas Complejos de la Universidad de Zaragoza (FI-2020-3-0008), and Barcelona Supercomputing Center (FI-2020-1-0005, FI-2021-2-0023, FI-2021-3-0019) and Picasso (FI-2022-1-0031,FI-2022-2-0031,FI-2022-3-0022).
2306.11506
Max-convolution through numerics and tropical geometry
The maximum function, on vectors of real numbers, is not differentiable. Consequently, several differentiable approximations of this function are popular substitutes. We survey three smooth functions which approximate the maximum function and analyze their convergence rates. We interpret these functions through the lens of tropical geometry, where their performance differences are geometrically salient. As an application, we provide an algorithm which computes the max-convolution of two integer vectors in quasi-linear time. We show this algorithm's power in computing adjacent sums within a vector as well as computing service curves in a network analysis application.
Taylor Brysiewicz, Jonathan D. Hauenstein, Caroline Hills
2023-06-20T12:52:22Z
http://arxiv.org/abs/2306.11506v1
# Max-Convolution through numerics and tropical geometry ###### Abstract. The maximum function, on vectors of real numbers, is not differentiable. Consequently, several differentiable approximations of this function are popular substitutes. We survey three smooth functions which approximate the maximum function and analyze their convergence rates. We interpret these functions through the lens of tropical geometry, where their performance differences are geometrically salient. As an application, we provide an algorithm which computes the max-convolution of two integer vectors in quasi-linear time. We show this algorithm's power in computing adjacent sums within a vector as well as computing service curves in a network analysis application. ## 1. Introduction Given \(v=(v_{1},\ldots,v_{n})\in\mathbb{R}^{n}\), although computing the maximum \(M=\max_{1\leq i\leq n}v_{i}\) is an elementary task, the function \(v\mapsto\max(v)\) is not differentiable. A common technique used in optimization [15, 16] and machine learning [1, 7] is to replace the precise computation of \(M\) with an approximate computation. This article investigates three standard ways to smoothly approximate the maximum function. Equipped with \[F_{v}(t)=\sum_{j=1}^{n}t^{v_{j}},\ \ L_{v}(t)=\log_{t}(F_{v}(t)),\ \ R_{v}(t)=\frac{tF_{v}^{\prime}(t)}{F_{v}(t)},\ \ \text{and}\ \ ||v||_{p}=\left(\sum_{j=1}^{n}|v_{j}|^{p}\right)^{\frac{1}{p}},\] we consider the approximations: \[(\text{LogSumExp}):\qquad M=\lim_{t\to\infty}L_{v}(t)\qquad\qquad(\text{Ratio}):\qquad M=\lim_{t\to\infty}R_{v}(t)\] and, if each entry of \(v\) is non-negative, \[(\text{$p$-norm}):\qquad M=\lim_{p\to\infty}||v||_{p}=||v||_{\infty}.\] We drop the subscript \(v\) when the vector of interest is clear from context. Figure 1. Plots of the values of the smooth approximations \(L(t),||v||_{p}\), and \(R(t)\) for the vectors \(v_{1}=(1,2,3,4,5,6,7)\) and \(v_{2}=(1,2,3,4,5,6,7,7,7,7,7)\). These values are plotted against the natural logarithm of the largest absolute value \(T\) of a floating point number involved in the numerical evaluation of each function. When \(t>0\), the functions \(L_{v}(t)\) and \(R_{v}(t)\) are smooth as a function of \(v\) and approximate \(M\) as shown by the limits above. For \(p\in\mathbb{R}_{>0}\), the function \(||v||_{p}\) smoothly approximates \(M\) provided that each element of \(v\) is non-negative. Each of these functions can be expressed in terms of \[\mathcal{L}_{v}(t)=\log(F_{v}(t)). \tag{1}\] **Proposition 1.1**.: _For \(v=(v_{1},\ldots,v_{n})\in\mathbb{R}^{n}\) and \(t\in\mathbb{R}\) with \(t>1\), we have \(u=\log(t)>0\), and_ \[L_{v}(t)=L_{v}(e^{u})=\frac{1}{u}\mathcal{L}_{v}(e^{u}), \tag{2}\] \[R_{v}(t)=R_{v}(e^{u})=\frac{d}{du}\mathcal{L}_{v}(e^{u}), \tag{3}\] \[||v||_{\log(t)}=||v||_{u}=e^{L_{\log|v|}(t)}=e^{\frac{1}{u}\mathcal{L}_{\log|v|}(e^{u})}, \tag{4}\] _where \(\log|\cdot|:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is the componentwise log-absolute value map._ After providing some basic notation in Section 2, we derive convergence rates for each of these functions in Section 3 by analyzing \(\mathcal{L}_{v}(t)\). Namely, for \(\delta>0\), we give bounds on \(t\) for which the absolute error of these approximations is smaller than \(\delta\). As a consequence, when the vector \(v\) is integral, one may use a rounding procedure, such as the floor or ceiling function, to provably compute \(M\) via a single evaluation of an approximating function.
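To make the comparison in Figure 1 easy to reproduce, a short NumPy sketch evaluating the three approximations on the two vectors used there might look as follows; the particular values of \(t\) and \(p\) are arbitrary illustrative choices.

```python
import numpy as np

def L(v, t):        # LogSumExp-type approximation  log_t(F_v(t))
    return np.log(np.sum(t ** v)) / np.log(t)

def R(v, t):        # Ratio approximation  t F_v'(t) / F_v(t)
    w = t ** v
    return np.sum(v * w) / np.sum(w)

def p_norm(v, p):   # valid for non-negative entries
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

v1 = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
v2 = np.array([1, 2, 3, 4, 5, 6, 7, 7, 7, 7, 7], dtype=float)
for v in (v1, v2):
    # R converges from below, L and the p-norm from above; the repeated
    # maximum in v2 visibly slows L and the p-norm but barely affects R.
    print(L(v, 10.0), R(v, 10.0), p_norm(v, 10.0))
```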
In practice, when the entries of \(v\) are selected from a discrete set, such as the integers \(\mathbb{Z}\), the maximum \(M\) is often obtained more than once. The number of times \(M\) is attained is the _multiplicity_ of \(M\) in \(v\), namely \[\mu_{M}=\#\{i\mid v_{i}=M\}.\] When the multiplicity of \(M\) is large, the ratio approximation significantly outperforms the other approximations (see Section 5). In Section 3, we express \(\mu_{M}\) as a limit of the aforementioned functions and derive analogous convergence rate results for computing \(\mu_{M}\). In light of part (3) of Proposition 1.1, we generalize the approximation \(R_{v}(t)\) using higher-order derivatives. In particular, for \(k\geq 1\), we define \[R_{v}^{(k)}(t)=-\frac{(-t)^{k}}{(k-1)!}\frac{d^{k}}{dt^{k}}\mathcal{L}_{v}(t).\] Observe that \(R_{v}^{(1)}(t)=R_{v}^{(1)}(e^{u})=\frac{d}{du}\mathcal{L}_{v}(e^{u})=R_{v}(e^{u})=R_{v}(t)\). We show, in Section 3, that every \(R_{v}^{(k)}(t)\) for \(k\geq 1\) converges to \(M\) at the same rate. We discuss how to use these higher-order derivatives to numerically approximate other information about \(v\) (see Theorem 3.11). In Section 4, we explore the geometry of \(L_{v}(t)\) and \(R_{v}^{(k)}(t)\) in terms of objects called amoebas from the world of tropical geometry. We realize the graph of the function \(u\mapsto\mathcal{L}_{v}(e^{u})\) as the upper boundary of a certain amoeba and provide a geometric interpretation of the performance differences of \(R_{v}(t)\) and \(L_{v}(t)\) when \(\mu_{M}\) is large. In Section 5, we conduct a series of experiments showcasing our theoretical results and the performance differences of the approximation techniques discussed. In particular, we provide empirical evidence showing the extent to which the bounds derived in Section 3 are tight. We illustrate how the ratio approximations perform significantly better than the others when the maximum appears with non-trivial multiplicity and that this feature persists in the presence of a noisy model. In Section 6, we propose an algorithm for the max-convolution problem: \[\begin{split}\text{{MAXCON:}}&\text{Given }a=(a_{0},\ldots,a_{n})\in\mathbb{Z}^{n+1}\text{ and }b=(b_{0},\ldots,b_{n})\in\mathbb{Z}^{n+1},\\ &\text{compute }c=(c_{0},\ldots,c_{2n})\in\mathbb{Z}^{2n+1}\text{ where }\\ & c_{k}=\max_{\max\{0,k-n\}\leq i\leq\min\{k,n\}}(a_{i}+b_{k-i}).\end{split} \tag{5}\] Since each \(c_{k}\) is a maximum of the integer vector \(v^{(k)}=(a_{i}+b_{k-i})_{i=\max\{0,k-n\}}^{\min\{k,n\}}\), its value may be determined by an (appropriately large) evaluation of an approximation of that maximum. In particular, any algorithm which computes classical convolution coefficients may be used as an oracle for evaluating \(L_{v^{(k)}}(t)\). The fast Fourier transform, for example, performs such a computation using \(O(n\log(n))\) operations. By combining this fact with our bounds from Section 3, we obtain a quasi-linear time algorithm for the max-convolution problem. We end by applying our numerical approach to the maximum consecutive subsums problem and the computation of service curve constraints.
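Anticipating Section 6, the reduction of MAXCON to a single classical convolution already fits in a few lines. In the sketch below, the shift by the maxima and the small epsilon added before flooring are our own numerical precautions, and replacing np.convolve by an FFT-based convolution (e.g., scipy.signal.fftconvolve) gives the quasi-linear running time.

```python
import numpy as np

def max_convolution(a, b):
    """Sketch of MAXCON (5) for integer vectors a, b via one classical convolution.

    F_{v^{(k)}}(t) = sum_i t^{a_i} * t^{b_{k-i}} is the k-th coefficient of the ordinary
    convolution of (t^{a_i}) and (t^{b_j}), so flooring log_t of that coefficient recovers
    c_k = max_i (a_i + b_{k-i}).  Shifting a and b by their maxima (and shifting back at
    the end) keeps the powers of t small; the sketch still fails if the spread of the
    entries is so large that the smallest shifted power of t underflows.
    """
    a = np.asarray(a, dtype=np.int64)
    b = np.asarray(b, dtype=np.int64)
    sa, sb = a.max(), b.max()
    t = float(max(len(a), len(b)) + 2)              # comfortably above the bound of Section 3
    conv = np.convolve(t ** (a - sa), t ** (b - sb))
    c = np.floor(np.log(conv) / np.log(t) + 1e-9)   # epsilon guards against rounding just below an integer
    return c.astype(np.int64) + sa + sb             # undo the shift

a, b = [0, 3, 1, 2], [1, 0, 4, 1]
print(max_convolution(a, b))                        # [1 4 4 7 5 6 3]
print([max(a[i] + b[k - i] for i in range(max(0, k - 3), min(k, 3) + 1)) for k in range(7)])
```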
## 2. Notation and Fundamental Results We begin by fixing the following notation \[v=(v_{1},\ldots,v_{n}) :\text{ an $n$-tuple of real numbers}\] \[M :\text{ max}(v)\] \[\mu_{c} :\text{ multiplicity of a real number $c$ in $v$, i.e., $\#\{i\ |\ v_{i}=c\}$}\] \[\ell :\text{ number of distinct elements in $v$}\] \[w=(w_{1},\ldots,w_{\ell}) :\text{ decreasing list of unique elements in $v$, i.e., $M=w_{1}>\cdots>w_{\ell}$}\] \[g=(g_{1},\ldots,g_{\ell}) :g_{i}=M-w_{i}\text{ with $0=g_{1}<g_{2}<\cdots<g_{\ell}$}.\] Additionally, \(t\) will denote a variable which takes on positive real values, whereas \(u=\log(t)\) is its image under the natural logarithm. **Example 2.1**.: To illustrate the notation, consider \(v=(7,7,-1,0,1,1,2.5,2.5,7,7)\in\mathbb{R}^{10}\). Then \[M=7,\quad\ell=5,\] \[\mu_{7}=4,\quad\mu_{2.5}=2,\quad\mu_{1}=2,\quad\mu_{0}=1,\quad\mu_{-1}=1,\] \[w=(7,2.5,1,0,-1),\quad g=(0,4.5,6,7,8).\] In principle, the elements of \(v\) can be any real numbers. However, in practice, they are usually some rational floating point approximations, that is, \(v\in\mathbb{Q}^{n}\). Moreover, we may assume for our analyses that \(v\in\mathbb{Z}^{n}\) since evaluating \(\mathcal{L}_{v}(t)\) at a power \(t^{k}\) of \(t\) corresponds to scaling \(v\) by \(k\): \[\mathcal{L}_{v}(t^{k})=\mathcal{L}_{v}(e^{ku})=\mathcal{L}_{kv}(e^{u})=\mathcal{L}_{kv}(t). \tag{6}\] **Remark 2.2**.: We note that \(\mathcal{L}_{v}(t)=\mathcal{L}_{uv}(e)\). **Proposition 2.4**.: _For \(v\in\mathbb{Z}^{n}\), let \(\{\alpha_{j}\}_{j=1}^{\infty}\) be as in Proposition 2.3. Then, near \(t=\infty\):_ \[L_{v}(t)=M+\log_{t}(\mu_{M})+\log_{t}\left(\sum_{i=1}^{\ell}\frac{\mu_{w_{i}}}{\mu_{M}}t^{-g_{i}}\right)=M+\frac{1}{\log(t)}\left(\log(\mu_{M})+\sum_{j=1}^{\infty}\alpha_{j}t^{-j}\right)=M+\frac{1}{u}\left(\log(\mu_{M})+\sum_{j=1}^{\infty}\alpha_{j}e^{-ju}\right), \tag{10}\] \[R_{v}(t)=M-\sum_{j=1}^{\infty}j\alpha_{j}t^{-j}=M-\sum_{j=1}^{\infty}j\alpha_{j}e^{-ju}, \tag{11}\] \[R_{v}^{(k)}(t)=M-\sum_{j=1}^{\infty}j\binom{j+k-1}{k-1}\alpha_{j}t^{-j}=M-\sum_{j=1}^{\infty}j\binom{j+k-1}{k-1}\alpha_{j}e^{-ju}. \tag{12}\] _In particular, \(R_{v}(t)\) and, more generally, \(R_{v}^{(k)}(t)\) are analytic at \(t=\infty\). Additionally, if each \(v_{i}\geq 0\),_ \[||v||_{\log(t)}=e^{L_{\log|v|}(t)}=M\cdot e^{L_{\log|v|}(t)-\log(M)}. \tag{13}\] Clearly, \(L_{v}(t)\geq M\) and the \(\log_{t}(\mu_{M})\) term from (10) is responsible for a slow convergence rate of \(L_{v}(t)\) to \(M\). This logarithmic term is eliminated in \(R_{v}(t)\) and, more generally, in \(R_{v}^{(k)}(t)\). See Section 4 for a geometric explanation of this fact. **Remark 2.5**.: Since \(R_{v}(t)\) and, more generally, \(R_{v}^{(k)}(t)\) are analytic at \(t=\infty\) when \(v\in\mathbb{Z}^{n}\), Cauchy's integral formula yields that for each \(k\geq 1\) there exists \(r>0\) such that \[\frac{1}{2\pi\sqrt{-1}}\oint_{|t|=r}t^{-1}\cdot R_{v}^{(k)}(t^{-1})\cdot dt=M. \tag{14}\] Numerically, one can use the trapezoid rule [13] to approximate \(M\) from this integral. Since \(R_{v}(t)\) depends on \(F_{v}^{\prime}(t)\), which may be difficult to evaluate in practice (see Section 6), we show below how to approximate \(R_{v}(t)\) from evaluations of \(L_{v}(t)\).
**Proposition 2.6**.: _For \(v\in\mathbb{R}^{n}\), \(t>1\), and \(\alpha>0\) with \(\alpha\neq 1\), define_ \[D_{v}(t,\alpha)=\log_{\alpha}\left(\frac{F_{v}(\alpha\cdot t)}{F_{v}(t)} \right)=\frac{\mathcal{L}_{v}(\alpha\cdot t)-\mathcal{L}_{v}(t)}{\log(\alpha)}. \tag{15}\] _Then,_ \[\lim_{\alpha\to 1}D_{v}(t,\alpha)=R_{v}(t).\] Proof.: Applying l'Hopital's rule yields \(\lim_{\alpha\to 1}D_{v}(t,\alpha)=\lim_{\alpha\to 1}R_{v}(\alpha\cdot t)=R_{v}(t)\) ## 3. Approximating quantities associated to \(v\) Equipped with the expansions (10)-(13) and (15), the functions \(L_{v}(t)\), \(R_{v}^{(k)}(t)\), \(D_{v}(t,\alpha)\), and \(||v||_{u}\) may each be used to approximate certain information about \(v\), such as \(M\), \(\mu_{M}\), and \(g_{2}\). For each such approximation of \(M\), we derive a lower bound on \(t\) so that the absolute error is less than a given value \(\delta>0\). For integer vectors, we pay particular attention to the case where \(\delta=1\), since one can use the floor \(\lfloor\cdot\rfloor\) and ceiling \(\lceil\cdot\rceil\) functions to _provably_ compute these values from their approximations. ### Computing the maximum We derive bounds on the absolute errors of \(L_{v}(t)\), \(R_{v}(t)\), \(D_{v}(t,\alpha)\), and \(||v||_{u}\) in Theorems 3.1, 3.3, 3.6, and 3.7 respectively. **Theorem 3.1**.: _Fix \(v\in\mathbb{Q}^{n}\) and \(\delta>0\). Then \(0\leq L_{v}(t)-M<\delta\) whenever \(t>1\) and_ \[t^{\delta+g_{2}}-t^{g_{2}}\mu_{M}-(n-\mu_{M})>0.\] _If \(v\in\mathbb{Z}^{n}\) and \(\delta=1\), this bound is obtained when_ \[t>\frac{\mu_{M}+\sqrt{\mu_{M}^{2}+4(n-\mu_{M})}}{2}.\] _If additionally \(\mu_{M}=1\) then this bound simplifies to \(t>\frac{1}{2}+\sqrt{n}\)._ Proof.: Assume, after reindexing, that \(v_{1}=\cdots=v_{\mu_{M}}=M\). Thus, \[L_{v}(t)-M=\log_{t}\left(\mu_{M}+\sum_{j=\mu_{M}+1}^{n}t^{v_{j}-M}\right).\] Hence, \(L_{v}(t)-M<\delta\) provided that the expression within the logarithm is smaller than \(t^{\delta}\). Since the function \(t^{x}\) is monotonic for \(t>1\), \[\mu_{M}+\sum_{j=\mu_{M}+1}^{n}t^{v_{j}-M}\leq\mu_{M}+(n-\mu_{M})t^{-g_{2}},\] completing the proof of the first statement since this value is less than \(t^{\delta}\) when \[t^{\delta}>\mu_{M}+(n-\mu_{M})t^{-g_{2}}.\] For \(v\in\mathbb{Z}^{n}\), we have that \(g_{2}\geq 1\) so that a sufficient condition when \(\delta=1\) is \[t^{2}-\mu_{M}t-(n-\mu_{M})>0\] yielding the second statement. The third statement follows immediately. When \(v\) consists of integers and \(\mu_{M}\) is known, Theorem 3.1 suggests an algorithm which provably computes \(M\) using one evaluation of \(L_{v}(t)\): _Return \(\lfloor L_{v}(t)\rfloor\) for \(t\) satisfying the inequality \(2t>\mu_{M}+\sqrt{\mu_{M}^{2}+4(n-\mu_{M})}\)._ The largest \(t\) value required is when \(\mu_{M}=n\) for which one can take \(t=n+1\). In particular, for any \(v\in\mathbb{Z}^{n}\), one always has \(\lfloor L_{v}(n+1)\rfloor=M\). The following example illustrates Theorem 3.1 on qualitatively different input. **Example 3.2**.: Consider the following integer vectors: \[v_{1}=(1,2,3,4,5,6,7)\qquad\text{and}\qquad v_{2}=(1,2,3,4,5,6,7,7,7,7,7).\] The maximum of both vectors is \(7\) which has multiplicity \(1\) and \(5\) in \(v_{1}\) and \(v_{2}\), respectively. By Theorem 3.1, \(L_{v_{1}}(t)\in[7,8)\) when \(t>3\) and \(L_{v_{2}}(t)\in[7,8)\) when \(t>6\). Figure 2 displays a verification of these bounds and illustrates the reduced convergence rate for \(v_{2}\) due to the increased multiplicity of the maximum. 
The worst-case scenario analysis for \(R_{v}(t)\) is qualitatively distinct from that of \(L_{v}(t)\). The fact which distinguishes these cases is that for a fixed \(t>1\), the function \(x\mapsto xt^{-x}\) is decreasing only after reaching its maximum on \(\mathbb{R}_{>0}\) at \(x=\log(t)^{-1}\). **Theorem 3.3**.: _Fix \(v\in\mathbb{Q}^{n}\) and \(\delta>0\). Then \(0\leq M-R_{v}(t)<\delta\) when \(t>e^{1/g_{2}}\) and_ \[t>\left(\frac{(n-\mu_{M})g_{2}}{\delta\cdot\mu_{M}}\right)^{\frac{1}{g_{2}}}.\] _If \(v\in\mathbb{Z}^{n}\) and \(\delta=1\), this bound is obtained when_ \[t>\max\left(e,\frac{n-\mu_{M}}{\mu_{M}}\right).\] Proof.: From (11), a worst-case analysis with \(t>e^{\frac{1}{g_{2}}}\) shows that \[M-R_{v}(t)=\sum_{j=1}^{\infty}j\alpha_{j}t^{-j}<\frac{n-\mu_{M}}{\mu_{M}}g_{2} t^{-g_{2}}.\] Therefore, the main result follows from \[M-R_{v}(t)<\delta\qquad\text{whenever}\qquad t^{g_{2}}>\frac{(n-\mu_{M})g_{2} }{\delta\cdot\mu_{M}}.\] When \(v\in\mathbb{Z}^{n}\) and \(\delta=1\), this simplifies to \(t^{g_{2}}>\frac{(n-\mu_{M})g_{2}}{\mu_{M}}\). Since \(t>e\) and \(g_{2}\geq 1\), this holds if additionally \[t>\frac{n-\mu_{M}}{\mu_{M}}.\] **Example 3.4**.: For \(v_{1}\) and \(v_{2}\) as in Example 3.2, Figure 3 compares the graphs of \(L(t)\) and \(R(t)\). Note that Theorems 3.1 and 3.3 guarantee that \(L_{v_{2}}(t)\in[7,8)\) when \(t>6\) and \(R_{v_{2}}(t)\in(6,7]\) when \(t>e\approx 2.718\). **Remark 3.5**.: For \(v\in\mathbb{Z}^{n}\), one has \[L_{v}(t)-M=\left\{\begin{array}{ll}O(1/\log(t))&\quad\text{if }\mu_{M}>1,\\ O(t^{-g_{2}}/\log(t))&\quad\text{if }\mu_{M}=1,\end{array}\right.\qquad \text{and}\qquad M-R_{v}(t)=O(t^{-g_{2}}).\] Figure 2. The graphs of \(L_{v_{1}}(t)\) and \(L_{v_{2}}(t)\) as in Example 3.2. When \(\mu_{M}=1\), Thereom 3.1 requires \(t=O(\sqrt{n})\) while Theorem 3.3 requires \(t=O(n)\). However, the bound \(\frac{n-\mu_{M}}{\mu_{M}}\) from Theorem 3.3 is smaller than the bound \(\frac{\mu_{M}+\sqrt{\mu_{M}^{2}+4(n-\mu_{M})}}{2}\) from Theorem 3.1 whenever \(\mu_{M}\geq\frac{1}{4}(\sqrt{8n+1}-1)\). For reference, this means that \[(n,\mu_{M})\in\{(10,2),(105,7),(1081,23),(10153,71),(100576,224),\ldots\}\] are afforded equal \(t\)-bounds for \(L_{v}(t)\) or \(R_{v}(t)\) via Theorems 3.1 and 3.3, respectively. These bounds are derived from the worst-case scenarios where \(M-1\) appears with multiplicity \(n-\mu_{M}\). However, on input vectors \(v\) sampled from the uniform distribution on \(\{0,\ldots,M\}\) with varying multiplicities \(\mu_{M}\), \(L_{v}(t)\) consistently performs worse than \(R_{v}(t)\). For more details, see the experiments in Section 5. Based on the relationship between \(D_{v}(t,\alpha)\) and \(R_{v}(t)\) summarized in Proposition 2.6, the error is similar to Theorem 3.3. Here, the worst case analysis yields the function \(x\mapsto\frac{t^{-x}(1-\alpha^{-x})}{\log(\alpha)}\) which is decreasing after reaching its maximum on \(\mathbb{R}_{>0}\) at \(x=\frac{\log(\log(\alpha t))-\log(\log(t))}{\log(\alpha)}\) which limits to \(\log(t)^{-1}\) as \(\alpha\to 1\). **Theorem 3.6**.: _Fix \(v\in\mathbb{Q}^{n}\), \(\delta>0\), and \(\alpha>1\). 
Then, \(0\leq M-D_{v}(t,\alpha)<\delta\) when \(t>e^{1/g_{2}}\),_ \[t>\alpha\frac{\alpha^{-g_{2}}}{1-\alpha^{-g_{2}}}\xrightarrow{ \alpha\to 1}e^{1/g_{2}},\ \ \text{and}\] \[t>\left(\frac{n-\mu_{M}}{\delta\cdot\mu_{M}}\cdot\frac{1-\alpha ^{-g_{2}}}{\log(\alpha)}\right)^{\frac{1}{g_{2}}}\xrightarrow{\alpha\to 1} \left(\frac{(n-\mu_{M})g_{2}}{\delta\cdot\mu_{M}}\right)^{\frac{1}{g_{2}}}.\] _If \(v\in\mathbb{Z}^{n}\) and \(\delta=1\), this bound is obtained when \(\alpha>1\) and_ \[t>\max\left(e,\frac{n-\mu_{M}}{\mu_{M}}\right).\] Proof.: The worst case analysis using the three assumptions on \(\alpha\) and \(t\) that are independent of \(\delta\) show that \[M-D_{v}(t,\alpha)<\delta\qquad\text{whenever}\qquad t^{g_{2}}>\frac{n-\mu_{M }}{\delta\cdot\mu_{M}}\frac{1-\alpha^{-g_{2}}}{\log(\alpha)}.\] When \(v\in\mathbb{Z}^{n}\) and \(\delta=1\), this simplifies to \(t^{g_{2}}>\frac{n-\mu_{M}}{\mu_{M}}\frac{1-\alpha^{-g_{2}}}{\log(\alpha)}\). Since \(t>e\), \(\alpha>1\), and \(g_{2}\geq 1\), this holds if additionally \(t>\frac{n-\mu_{M}}{\mu_{M}}\). To analyze the \(p\)-norm case, following [11, 12], we assume the vector \(v\) has undergone a linear transformation so that each \(v_{j}\in[0,1]\). Figure 3. The graphs of \(L(t)\) and \(R(t)\) applied to \(v_{1}\) and \(v_{2}\) from Example 3.2. **Theorem 3.7**.: _If \(v\in[0,1]^{n}\) and \(\delta>0\), then \(0\leq||v||_{u}-M<\delta\) when \(u>1\) and_ \[e^{u(\Delta+g_{2})}-e^{ug_{2}}\mu_{M}-(n-\mu_{M})>0\qquad\text{where}\quad\Delta =\log\left(1+\frac{\delta}{M}\right).\] Proof.: By (13), \(||v||_{u}=M\cdot e^{\epsilon(u)}\) where \(\epsilon(u)=L_{\log|v|}(e^{u})-\log(M)\) which is the error when using \(L_{\log|v|}(e^{u})\) to approximate \(\log(M)\). Hence, \(||v||_{u}-M<\delta\) if and only if \(\epsilon(u)<\log\left(1+\frac{\delta}{M}\right)=:\Delta\). By Theorem 3.1, this occurs whenever \(t>1\) and \[t^{\Delta+g_{2}}-t^{g_{2}}\mu_{M}-(n-\mu_{M})>0.\] Since \(u=\log(t)\), changing coordinates gives the result. **Example 3.8**.: The following illustrates the differences between the approximations \(L(t),R(t),D(t,\alpha)\), and \(||\cdot||_{p}\) of \(M\) on our running examples of \(v_{1}\) and \(v_{2}\) from Example 3.2. First, similar to previous plots, Figure 4 shows the difference of convergence rate for the \(p\)-norm approximation due to higher multiplicity. Next, Figure 5 compares \(R_{v}(t)\) with \(D_{v}(t,2)\) and \(D_{v}(t,1.5)\) for \(v=v_{1}\) and \(v=v_{2}\) showing comparable convergence rates. Finally, we compare the values of \(||v||_{p},L_{v}(t)\), \(R_{v}(t)\), and \(D_{v}(t,\alpha)\) when they require comparably large (in absolute value) floating point number for evaluation. Setting \(T\) to be the largest floating point number required, we plot these functions against \(\log(T)\) in Figures 6 and 7 for \(v=v_{1}\) and \(v=v_{2}\), respectively. Figure 4. The graphs of the \(p\)-norms of the vectors \(v_{1}/7\) and \(v_{2}/7\) with entries in \([0,1]\), where \(v_{1}\) and \(v_{2}\) are from Example 3.2. Figure 5. Comparison of \(D_{v}(t,\alpha)\) for \(v_{1}\) and \(v_{2}\) from Example 3.2 with various values of \(\alpha\). ### Computing the multiplicity Due to the simplistic nature of the expansion in (10) for \(L_{v}(t)\), we consider computing the multiplicity \(\mu_{M}\) for the maximum \(M\). 
In particular, it is easy to see from (10) that \[\mu_{M}=\lim_{t\to\infty}t^{L_{v}(t)-M} \tag{16}\] Of course, using this expression requires _a priori_ knowledge of \(M\) which can be attained, for example, in the integer case by applying Theorem 3.1. **Theorem 3.9**.: _Given \(v\in\mathbb{Z}^{n}\), \(\lfloor t^{L_{v}(t)-\lfloor L_{v}(t)\rfloor}\rfloor=\mu_{M}\) whenever_ \[t>\max\left\{n-\mu_{M},\frac{\mu_{M}+\sqrt{\mu_{M}^{2}+4(n-\mu_{M})}}{2}\right\}\] Proof.: When \(t>\frac{\mu_{M}+\sqrt{\mu_{M}^{2}+4(n-\mu_{M})}}{2}\), Theorem 3.1 provides that \(\lfloor L(t)\rfloor=M\). Using a worst-case analysis, one has \[0\leq t^{L_{v}(t)-M}-\mu_{M}\leq(n-\mu_{M})t^{-1}\] with the worst-case upper bound below \(1\) when \(t>n-\mu_{M}\) Figure 6. A comparison of the smooth approximations \(L_{v_{1}}(t),R_{v_{1}}(t),||v_{1}||_{p},D_{v_{1}}(t,2)\), and \(D_{v_{1}}(t,1.5)\) of the maximum of \(v_{1}\), plotted against the natural logarithm of \(T\), the required absolute value of floating point numbers for evaluation. Figure 7. A comparison of the smooth approximations \(L_{v_{1}}(t),R_{v_{1}}(t),||v_{1}||_{p},D_{v_{1}}(t,2)\), and \(D_{v_{1}}(t,1.5)\) of the maximum of \(v_{2}\), plotted against the natural logarithm of \(T\), the required absolute value of floating point numbers for evaluation. **Example 3.10**.: Continuing with \(v_{1}\) and \(v_{2}\) from Example 3.2, Theorem 3.9 provides \(t^{L_{v_{1}}(t)-\lfloor L_{v_{1}}(t)\rfloor}\in[1,2)\) for \(t>6\) and \(t^{L_{v_{2}}(t)-\lfloor L_{v_{2}}(t)\rfloor}\in[5,6)\) for \(t>6\) with Figure 8 showing convergence in advance of such worst-case bounds. ### Combining \(R_{v}^{(k)}(t)\) to improve convergence and compute \(g_{2}\) Since all of the higher-order derivatives \(R_{v}^{(k)}(t)\) have the same convergence rate, one can combine them in various ways to increase the convergence rate as well as extract other information about \(v\). The following demonstrates a higher-order approximation of \(M\) along with approximating \(g_{2}\). The computation of \(g_{2}\) and \(M\) produces, as a byproduct, the second largest element of \(v\), namely \(w_{2}=M-g_{2}\). **Theorem 3.11**.: _For \(v\in\mathbb{Z}^{n}\), we have_ \[\frac{2R_{v}^{(1)}(t)R_{v}^{(3)}(t)-R_{v}^{(2)}(t)\left(R_{v}^{(1)}(t)+R_{v}^ {(2)}(t)\right)}{R_{v}^{(1)}(t)-3R_{v}^{(2)}(t)+2R_{v}^{(3)}(t)}=M+O(t^{-g_{2} -1}) \tag{17}\] _and_ \[\frac{R_{v}^{(1)}(t)-3R_{v}^{(2)}(t)+2R_{v}^{(3)}(t)}{R_{v}^{(2)}(t)-R_{v}^{(1 )}(t)}=g_{2}+O(t^{-1}). \tag{18}\] Proof.: From Proposition 2.3 and (12), \[R_{v}^{(1)}(t) = M-g_{2}\frac{\mu_{w_{2}}}{\mu_{M}}t^{-g_{2}}+O(t^{-g_{2}-1}),\] \[R_{v}^{(2)}(t) = M-g_{2}(g_{2}+1)\frac{\mu_{w_{2}}}{\mu_{M}}t^{-g_{2}}+O(t^{-g_{2} -1}),\] \[R_{v}^{(3)}(t) = M-\frac{g_{2}(g_{2}+1)(g_{2}+2)}{2}\frac{\mu_{w_{2}}}{\mu_{M}}t^ {-g_{2}}+O(t^{-g_{2}-1})\] and so the result follows by direct symbolic elimination. **Example 3.12**.: We illustrate Theorem 3.11 using \(v_{1}\) and \(v_{2}\) from Example 3.2. Figure 9 compares the convergence of \(R_{v}^{(1)}(t)\), \(R_{v}^{(2)}(t)\), \(R_{v}^{(3)}(t)\), and the combined formula in (17) for \(v=v_{1}\) and \(v=v_{2}\) to \(M=7\) for both. For both cases, one sees faster convergence as expected from (17). Additionally, Figure 10 shows the convergence of the combined formula in (18) for \(v_{1}\) and \(v_{2}\) to \(g_{2}=1\) for both. Figure 8. The graphs of \(t^{L(t)-M}\) applied to \(v_{1}\) and \(v_{2}\) from Example 3.2. 
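Before turning to the linear-algebraic structure behind the \(R_{v}^{(k)}(t)\) (Remark 3.13 below), the following short Python sketch illustrates the two basic approximations and the multiplicity extraction of Theorems 3.1 and 3.9 numerically. It is our own illustration, not code from the paper: we take \(F_{v}(t)=\sum_{i}t^{v_{i}}\) (cf. Example 4.2), \(L_{v}(t)=\log_{t}F_{v}(t)\), and the ratio function \(R_{v}(t)=tF_{v}^{\prime}(t)/F_{v}(t)\) as working definitions, and the sample vector and evaluation point below are placeholders rather than the \(v_{1},v_{2}\) of Example 3.2.

```python
# Our own numerical sketch; assumes F_v(t) = sum_i t^{v_i},
# L_v(t) = log_t F_v(t), and R_v(t) = t F_v'(t) / F_v(t).
import numpy as np

def L(v, t):
    """Upper approximation of max(v): L_v(t) = log_t(sum_i t^{v_i})."""
    v = np.asarray(v, dtype=float)
    return np.log(np.sum(t ** v)) / np.log(t)

def R(v, t):
    """Lower approximation of max(v): the weighted average sum_i v_i t^{v_i} / sum_i t^{v_i}."""
    v = np.asarray(v, dtype=float)
    w = t ** v
    return np.sum(v * w) / np.sum(w)

def max_and_multiplicity(v, t):
    """Recover M = floor(L_v(t)) and mu_M = floor(t^(L_v(t) - M)),
    as in Theorems 3.1 and 3.9, for integer v and t above their bounds."""
    Lv = L(v, t)
    M = int(np.floor(Lv))
    return M, int(np.floor(t ** (Lv - M)))

# Placeholder integer vector: maximum 7 with multiplicity 3, n = 8, g_2 = 1.
v = [7, 7, 7, 6, 6, 2, 1, 0]
t = 10.0   # exceeds max(n - mu_M, (mu_M + sqrt(mu_M^2 + 4(n - mu_M))) / 2) = 5
print(L(v, t), R(v, t))            # L approaches M = 7 from above, R from below
print(max_and_multiplicity(v, t))  # -> (7, 3)
```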
**Remark 3.13**.: The functions \(R_{v}^{(k)}(t)\) are linear combinations of the derivatives \(\mathcal{D}^{(k)}(t)=\frac{d^{k}}{du^{k}}\mathcal{L}_{v}(e^{u})\), e.g., \[R^{(1)}(t) =\mathcal{D}^{(1)}(t),\] \[R^{(2)}(t) =-\mathcal{D}^{(1)}(t)+\mathcal{D}^{(2)}(t),\] \[R^{(3)}(t) =\frac{2\mathcal{D}^{(1)}(t)-3\mathcal{D}^{(2)}(t)+\mathcal{D}^ {(3)}(t)}{2},\] \[R^{(4)}(t) =\frac{-6\mathcal{D}^{(1)}(t)+11\mathcal{D}^{(2)}(t)-6\mathcal{D} ^{(3)}(t)+\mathcal{D}^{(4)}(t)}{6}.\] In particular, the linear transformation that maps the first \(r\) values of \(\mathcal{D}^{(k)}(t)\) to the first \(r\) values of \(R^{(k)}(t)\) is represented by an \(r\times r\) lower triangular matrix \(A^{(r)}\). For \(j\leq i\), the \((i,j)\)-entry of \(A^{(r)}\) is \[\frac{(-1)^{i+1}}{(i-1)!}\mathcal{S}_{ij}\] where \(\mathcal{S}_{ij}\) is a Stirling number of the first kind. For example, \[A^{(4)}=\begin{bmatrix}(-1)^{2}1&0&0&0\\ (-1)^{3}1&(-1)^{3}(-1)&0&0\\ \left(\frac{(-1)^{4}}{2}\right)2&\left(\frac{(-1)^{4}}{2}\right)(-3)&\left( \frac{(-1)^{4}}{2}\right)1&0\\ \left(\frac{(-1)^{5}}{6}\right)6&\left(\frac{(-1)^{5}}{6}\right)(-11)&\left( \frac{(-1)^{5}}{6}\right)6&\left(\frac{(-1)^{5}}{6}\right)(-1)\end{bmatrix}.\] Figure 10. The graphs of (18) for \(v_{1}\) and \(v_{2}\) which converge to \(g_{2}=1\) for both cases. Figure 9. Comparison of various methods to approximate \(M\) for \(v_{1}\) and \(v_{2}\). ## 4. The tropical viewpoint We interpret our previous results geometrically using tools from _tropical geometry_. We stress that throughout this section, we consider specific families of tropical constructions which exist more generally. Namely, the varieties we consider are graphs of univariate Laurent polynomials with positive coefficients. Such varieties are quite special and so the results of this section may not hold in the more general setting. For an introduction to tropical geometry, we invite the interested reader to consult the standard reference [8]. To utilize the tropical geometry framework, we assume throughout this section that \(v\in\mathbb{Z}^{n}\), define \(\mathbb{C}^{\times}=\mathbb{C}\backslash\{0\}\), and consider the function \[\varphi:\mathbb{C}^{\times} \to\mathbb{C}^{2}\] \[t \mapsto F_{v}(t).\] Let \(X_{v}\) be the graph of \(\varphi\) intersected with \((\mathbb{C}^{\times})^{2}\). We note that \(X_{v}\) is the set of zeros of the polynomial \(\mathcal{F}_{v}(t,y)=y-F_{v}(t)\): \[X_{v}=\{(t,y)\in(\mathbb{C}^{\times})^{2}\mid\mathcal{F}_{v}(t,y)=0\}\subset( \mathbb{C}^{\times})^{2}.\] The _Newton polygon_ of \(\mathcal{F}_{v}(t,y)\) is the convex hull \(\mathcal{N}(\mathcal{F}_{v}(t,y))\) of the exponent vectors of \(\mathcal{F}_{v}(t,y)\). In this case, \(\mathcal{N}(\mathcal{F}_{v}(t,y))\) is simply the triangle \(\Delta_{v}\) with vertices \((M,0)\), \((\min(v),0)\), and \((0,1)\) as illustrated in Figure 11(a). The union of the outer normal rays of \(\Delta_{v}\) along with the origin form a _polyhedral fan_ called the tropicalization of \(X_{v}\), denoted \(\operatorname{trop}(X_{v})\) and illustrated in Figure 11(c). The fan \(\operatorname{trop}(X_{v})\) is a _tropical curve_ which encodes the asymptotic behavior of \(X_{v}\) near the coordinate axes \(\mathbb{C}^{2}\backslash(\mathbb{C}^{\times})^{2}\). Note that in our specific situation, the Newton polytope \(\mathcal{N}(\mathcal{F}_{v}(t,y))\), and hence the tropical curve \(\operatorname{trop}(X_{v})\), depends only on \(\min(v)\) and \(\max(v)\). 
An alternative construction of \(\operatorname{trop}(X_{v})\), due to Bergman [2], involves the image \(\mathcal{A}_{\tau}(X_{v})\) of \(X_{v}\) under the log-absolute value map: \[\operatorname{Log}_{\tau}|\cdot|:(\mathbb{C}^{\times})^{2} \to\mathbb{R}^{2}_{u,s}\] \[(t,y) \mapsto(\log_{\tau}(|t|),\log_{\tau}(|y|)).\] The set \(\mathcal{A}_{\tau}(X_{v})\) is called the \(\tau\)-amoeba of \(X_{v}\). We remark that we use \(u\) and \(s\) for coordinates of the codomain and that the overlap of the symbol \(u\) with previous sections is intentional. Undecorated, the notation \(\mathcal{A}(X_{v})\subseteq\mathbb{R}^{2}_{u,s}\) refers to the \(e\)-amoeba of \(X_{v}\) as illustrated in Figure 11(b). Since \(\mathcal{A}_{\tau}(X_{v})=\frac{1}{\log(\tau)}\mathcal{A}(X_{v})\), the set \(\mathcal{A}(X_{v})\) contains all of the information about all of the amoebas of \(X_{v}\). Since the absolute values of coordinates of points in \(X_{v}\) may be arbitrarily large or small, the set \(\mathcal{A}(X_{v})\) is unbounded. The portions Figure 11. (a) The Newton polygon of \(\mathcal{F}_{v_{1}}(t,y)\) where \(v_{1}\) is as in Example 3.2 (b) The amoeba \(\mathcal{A}(X_{v_{1}})\) (c) The tropical variety \(\operatorname{trop}(X_{v_{1}})\). which approach infinity are loosely referred to as the _tentacles_ of the amoeba. As \(\tau\to\infty\), these tentacles limit to the rays of \(\operatorname{trop}(X_{v})\). In this sense, \(\operatorname{trop}(X_{v})\) contains asymptotic information about \(X_{v}\) and is sometimes referred to as the _logarithmic limit set_ of the amoeba of \(X_{v}\). We call lines which intersect an amoeba \(\mathcal{A}(X_{v})\) in a ray tentacle lines. Up to translation, these rays are exactly those in \(\operatorname{trop}(X_{v})\). Note that many tentacle lines may be associated to the same tropical ray (see the vertical rays of Figure 12). The following elementary facts relate the Newton polygon \(\mathcal{N}(\mathcal{F}_{v}(t,y))\), amoeba \(\mathcal{A}(X_{v})\), and tropical curve \(\operatorname{trop}(X_{v})\). We encourage the reader to refer to Figure 12. These facts are specializations of a more general relationship between these three objects (see [8] for more details). 1. The tentacle lines of \(\mathcal{A}(X_{v})\) corresponding to the ray in \(\operatorname{trop}(X_{v})\) spanned by \((0,-1)\) correspond to distinct moduli of complex roots of \(F_{v}(t)\). 2. When the lowest order term of the Laurent polynomial \(F_{v}(t)\) is a constant \(c\), \(\operatorname{trop}(X_{v})\) contains the ray spanned by \((-1,0)\). There is one tentacle line of \(\mathcal{A}(X_{v})\) associated to that ray, which occurs at height \(\log(c)\). Additionally, the following facts relate \(\mathcal{A}(X_{v})\) and the function \(u\mapsto\mathcal{L}_{v}(e^{u})\). 1. The upper boundary \(\mathcal{U}\) of \(\mathcal{A}(X_{v})\) is the graph of \(\mathcal{L}_{v}(e^{u})\). 2. \(L_{v}(t)=L_{v}(e^{u})=\frac{\mathcal{L}_{v}(e^{u})}{u}\) is the slope of the ray from the origin to the point \((u,\mathcal{L}_{v}(e^{u}))\in\mathcal{A}(X_{v})\). 3. \(\mathcal{D}_{v}^{(k)}(e^{u})=\frac{d^{k}}{du^{k}}\mathcal{L}_{v}(e^{u})\) is the \(k^{\text{th}}\) derivative of the function \(u\mapsto\mathcal{L}_{v}(e^{u})\). 4. \(\mathcal{D}_{v}^{(1)}(e^{u})=R_{v}^{(1)}(e^{u})\) is the slope of the tangent line to the boundary of the amoeba at \((u,\mathcal{L}_{v}(e^{u}))\). We point out that (3) follows from the fact that all of the coefficients of \(F_{v}(t)\) are non-negative. 
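The last group of facts is easy to check numerically. The sketch below is our own illustration (with an arbitrary placeholder vector, again taking \(F_{v}(t)=\sum_{i}t^{v_{i}}\)): it evaluates the upper boundary \(u\mapsto\mathcal{L}_{v}(e^{u})=\log F_{v}(e^{u})\), the chord slope \(\mathcal{L}_{v}(e^{u})/u=L_{v}(e^{u})\) of fact (2), and the tangent slope \(\mathcal{D}_{v}^{(1)}(e^{u})=R_{v}^{(1)}(e^{u})\) of fact (4).

```python
# Our own check of facts (2) and (4); the vector v is a placeholder.
import numpy as np

v = np.array([0.0, 0.0, 2.0, 3.0, 5.0, 5.0])   # max M = 5 with multiplicity 2

def curly_L(u):
    """Upper boundary of the amoeba: log F_v(e^u) = log sum_i e^(u v_i)."""
    return np.log(np.sum(np.exp(u * v)))

def chord_slope(u):
    """Fact (2): L_v(e^u) = curly_L(u) / u, slope of the ray from the origin."""
    return curly_L(u) / u

def tangent_slope(u, h=1e-6):
    """Fact (4): D_v^(1)(e^u) = R_v^(1)(e^u), here via a central difference."""
    return (curly_L(u + h) - curly_L(u - h)) / (2 * h)

u = 3.0
w = np.exp(u * v)
exact_tangent = np.sum(v * w) / np.sum(w)   # t F_v'(t)/F_v(t) at t = e^u
print(chord_slope(u), tangent_slope(u), exact_tangent)
# Both slopes approach max(v) = 5 as u grows; with mu_M = 2 the chord slope
# converges only like log(2)/u while the tangent slope converges like e^(-u g_2)
# (cf. Remark 3.5).
```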
In general, describing the boundaries of amoebas is challenging [5]. **Proposition 4.1**.: _Let \(v\in\mathbb{Z}^{n}\). The tentacle lines of \(\mathcal{A}(X_{v})\) are given by_ 1. \(s=\log(\mu_{M})+M\cdot u\)__ 2. \(s=\log(\mu_{m})+m\cdot u\) _where_ \(m=\min(v)\)_._ 3. \(u=\log(|\xi_{i}|)\) _where_ \(\{\xi_{i}\}_{i=1}^{d}\subset\mathbb{C}\) _are the roots of_ \(F(t)\)_._ Proof.: As observed already, the tentacles in the \((0,-1)\) direction correspond to roots \(\xi_{i}\) of \(F_{v}(t)\) and they occur at \(u=\log(|\xi_{i}|)\) which establishes (c). To see (a) and (b), suppose \(m=\min(v)=0\). Then, \(F(t)\) has a constant term of \(\mu_{m}\) and hence \(\lim_{t\to 0}\log(F(t))=\log(\mu_{m})\). This limit indicates that \(\mathcal{A}(X_{v})\) has a horizontal tentacle occurring at \(s=\log(\mu_{m})\). However, translating \(v\) by \(a\in\mathbb{Z}\) amounts to sheering the Newton polygon space by \((\alpha,\beta)\mapsto(\alpha,a\alpha+\beta)\) and the amoeba space by \((u,s)\mapsto(u-as,s)\). In particular, this transformation does not change the \(s\)-intercepts of the tentacle lines. Hence, the tentacle line associated to the minimum of \(v\) has the equation \(s=\log(\mu_{m})+m\cdot u\) whereas the line associated to the maximum \(M\) has the equation \(s=\log(\mu_{M})+M\cdot u\). Proposition 4.1 gives a geometric interpretation of how the multiplicity of \(M\) in \(v\) contributes to a slower convergence rate of \(L_{v}(t)\) but not for \(R_{v}(t)\): multiplicity corresponds to a translation of the tentacle line of \(\mathcal{A}(X_{v})\) associated to the tropical ray spanned by \((1,M)\). This is geometrically displayed in Figure 13. **Example 4.2**.: Let \(v=[0_{8},1_{5},2_{40},3_{5},4_{40}]\), where a subscript indicates multiplicity. The amoeba \(\mathcal{A}(X_{v})\) is shown in Figure 12. The polynomial \[F_{v}(t)=8+5t+40t^{2}+5t^{3}+40t^{4}\] has two pairs \(\xi_{1},\bar{\xi_{1}}\) and \(\xi_{2},\bar{\xi_{2}}\) of conjugate roots. Hence, there are two vertical tentacle lines of \(\mathcal{A}(X_{v})\). The tentacle line corresponding to the minimum multiplicity \(\mu_{m}=\mu_{0}=8\) is horizontal at height \(\log(8)\) and the equation of the remaining tentacle line is \(s=\log(40)+4u\). The upper boundary of the amoeba is the image of the positive part of \(X_{v}\) under the log-absolute value map. Figure 12 illustrates a geometric interpretation of the values of \(L_{v}(e^{u})\) and \(R_{v}^{(1)}(e^{u})\) as slopes of rays. One may also interpret the Cauchy integral in Remark 2.5: \[\frac{1}{2\pi\sqrt{-1}}\oint_{|t|=r}t^{-1}\cdot R_{v}^{(k)}(t^{-1})\cdot dt=M,\] Figure 12. The amoeba \(\mathcal{A}(X_{v})\) of the graph \(X_{v}\) where \(v\) is as in Example 4.2 along with its tentacle lines. Figure 13. The amoeba \(\mathcal{A}(X_{v})\) of the graph \(X_{v}\) for \(v\) as in Example 4.2 along with the ray from the \((0,0)\) to \((u,L_{v}(e^{u}))\) and the ray from \((u,L_{v}(e^{u}))\) with slope \(R_{v}^{(1)}(e^{u})=\mathcal{D}_{v}^{(1)}(e^{u})\). in terms of tropical geometry when \(k=1\). 
This is done via the order map: \[\operatorname{ord}:\mathbb{R}^{2} \to\mathbb{R}^{2}\] \[(u,s)\mapsto\left(\frac{1}{(2\pi\sqrt{-1})^{2}}\int_{\begin{subarray} {c}\operatorname{Log}|t|=u\\ \operatorname{Log}|y|=s\end{subarray}}\frac{tF_{v}^{\prime}(t)}{y-F_{v}(t)} \cdot\frac{dtdy}{ty},\frac{1}{(2\pi\sqrt{-1})^{2}}\int_{\begin{subarray}{c }\operatorname{Log}|t|=u\\ \operatorname{Log}|y|=s\end{subarray}}\frac{y}{y-F_{v}(t)}\cdot\frac{dtdy}{ty} \right).\] The function \(\operatorname{ord}\) is constant and \(\mathbb{Z}^{2}\)-valued on connected components of the complement \(\mathbb{R}^{2}\backslash\mathcal{A}(X_{v})\) of the amoeba [6]. In fact, \(\operatorname{ord}\) maps these components to distinct integer points in the Newton polygon \(\Delta_{v}\). In particular, for any point \((u,s)\) in the bottom right complement component, the first integral \[\frac{1}{(2\pi\sqrt{-1})^{2}}\int_{\begin{subarray}{c}\operatorname{Log}|t|= u\\ \operatorname{Log}|y|=s\end{subarray}}\frac{tF_{v}^{\prime}(t)}{y-F_{v}(t)} \cdot\frac{dtdy}{ty}\] degenerates as \(|y|\to 0\) (or \(s\to-\infty\)) to the integral \[\frac{1}{2\pi\sqrt{-1}}\oint_{\operatorname{Log}|t|=u}\frac{tF_{v}^{\prime}(t) }{F_{v}(t)}\cdot\frac{dt}{t}=\frac{1}{2\pi\sqrt{-1}}\oint_{\operatorname{Log}|t |=u}t^{-1}R_{v}^{(1)}(t)\cdot dt=M\] for sufficiently large \(u\) as in (14). The second integral, on the other hand, evaluates to zero. Since this value of \(\operatorname{ord}\) on \((u,s)\) is constant on connected components of the complement of the amoeba, this shows that \(\operatorname{ord}(u,s)\) evaluates to the vertex \((M,0)\) of \(\Delta_{v}\) when \((u,s)\) is in the bottom-right component of \(\mathbb{R}^{2}\backslash\mathcal{A}(X_{v})\), in agreement with Remark 2.5. ## 5. Experiments We compare each of the approximations discussed on a gallery of qualitatively different inputs \(v\). In each of the following sections, we sample vectors \(v\) from some prescribed distribution. We then compare the approximations of \(M\) on these samples on average. Figure 14. Average results of the integer-valued vector \(v\) experiments. The \(x\)-axis and \(y\)-axis correspond to \(n=1,\ldots,100\) and \(\mu=1,\ldots,n\) respectively. Each pixel is the average of 100 subexperiments measuring \(\log(1+t_{L_{v}}^{*}-t_{R_{v}}^{*})\) where \(t_{L_{v}}^{*}\) and \(t_{R_{v}}^{*}\) are the \(t\)-values such that the absolute error of the corresponding function is less than 1. The maximum, \(M\), are: (a) 10 (b) 50 (c) 100 (d) 500. Black pixels above the diagonal are when \(\mu>n\). ### Integer numbers We compared \(L_{v}(t)\) and \(R_{v}(t)\) in the integer case by defining a maximum \(M\) with multiplicity \(\mu\) for an integer-valued vector \(v\in[1,M]^{n}\). That is, we defined \(v\) of varying length \(n=1,\ldots,100\) with \(M\) appearing \(\mu\) times, where \(\mu\leq n\) and the remaining \(n-\mu\) values of \(v\) were random integers sampled from \([1,M-1]\). Figure 14 displays the four experiments comparing the performance of \(L_{v}(t)\) and \(R_{v}(t)\) for approximating a given maximum \(M\), namely \(M=10,50,100\), and \(500\), respectively. Performance is measured by \(t^{*}\), the first value of \(t\) to approximate \(M\) up to an absolute error of \(1\), so that the maximum is obtained from \(L_{v}(t^{*})\) or \(R_{v}(t^{*})\) through use of the floor or ceiling function, respectively. Each experiment consists of \(100\) subexperiments averaged over \(\log(1+t_{L_{v}}^{*}-t_{R_{v}}^{*})\). 
The plotted values are logarithmic and offset by \(1\) since, if \(t_{L_{v}}^{*}=t_{R_{v}}^{*}\), then \(\log(1+t_{L_{v}}^{*}-t_{R_{v}}^{*})=\log(1)=0\). ### Uniformly distributed floating point numbers We repeat the experiments of the above section with floating point vectors \(v\in[0,1]^{n}\) with \(M=1\). Figure 15 displays the results of comparing \(L_{v}(t)\) and \(R_{v}(t)\) on vectors whose elements are sampled from the uniform distribution on \([0,1]\). As with the previous experiments, each pixel at coordinates \((n,\mu)\) represents the value of \(\log(\alpha+t_{L_{v}}^{*}-t_{R_{v}}^{*})-\log(\alpha)\), where \(t^{*}\) is the first value of \(t\) to approximate \(M\) up to a given absolute error. To model a significant \(g_{2}\) gap, we constructed these vectors by choosing \(n-\mu\) vectors uniformly from \([0,1]\), multiplying them by \((n-1)/n\) and appending them to a vector of length \(\mu\) with coordinates all equal to \(1\). The figures differ only in the absolute tolerance used to define \(t^{*}\). The value \(\alpha=\min(t_{L_{v}}^{*}-t_{R_{v}}^{*})+1\) offsets the results so that when the results are averaged and plotted logarithmically, the minimum difference remains \(\log(1)=0\). The subtraction of the \(\log(\alpha)\) term then better illustrates the subexperiments where \(L_{v}(t)\) outperforms \(R_{v}(t)\). This adjustment is accounted for in the uniform distribution examples as there are select instances in which \(L_{v}(t)\) performs better than \(R_{v}(t)\) by converging at a lesser \(t\) value, thus the difference is non-positive and less than \(-1\). This occurs most notably in the experiments with tolerance \(1/n\) and \(1/100\) of Figure 15. Figure 15. Average results of the floating point-valued vector \(v\) experiments. Each pixel at position \((n,\mu)\) represents the value of \(\log(\alpha+t_{L_{v}}^{*}-t_{R_{v}}^{*})-\log(\alpha)\) averaged over \(100\) subexperiments. The \(x\)-axis and \(y\)-axis correspond to \(n=1,\ldots,100\) and \(\mu=1,\ldots,n\) respectively. The tolerance used to define \(t^{*}\) are absolute error less than: (a) \(\exp(1)\) (b) \(1\) (c) \(1/n\) (d) \(1/100\). Black pixels above the diagonal are when \(\mu>n\). ### Clustering floating point numbers We repeat a similar experiment with floating point numbers in the presence of noise. Our goal is to identify the scenarios where it is appropriate to apply Theorem 3.3 heuristically. Our setup is as follows. Suppose that 5 measurements, with values in \([0,1]\), are to be taken, but the measuring device incurs some error \(\pm\epsilon\). To rectify this, each measurement is performed 20 times. Heuristically, one may choose to apply Theorem 3.3 with the interpretation that \(v\) consists of 5 numbers, each occurring with multiplicity 20, with the goal of obtaining \(\max(v)\) up to error \(\epsilon\). In this case, Theorem 3.3 specializes to \(t>\left(\frac{4\cdot g}{\epsilon}\right)^{\frac{1}{g}}\) where \(g\) is the gap between the top two _true_ measurements. After fixing \(\epsilon\) and \(g\), we model such a situation by the following procedure. 1. Pick 5 true measurements \(w_{1},\ldots,w_{5}\in[0,1]\) by setting \(w_{5}=1\), \(w_{4}=1-g\) and \(w_{1},w_{2}\) and \(w_{3}\), sampled uniformly at random from \([0,1-g]\). 2. For each \(w_{i}\), sample 20 numbers uniformly from \([w_{i}-\epsilon,w_{i}+\epsilon]\). Collect all 100 numbers in \(v\). 3. 
Evaluate \(R_{v}(t^{*})\) for \(t^{*}=\left(\frac{4g}{\epsilon}\right)^{\frac{1}{g}}\) to obtain the absolute error \(\operatorname{err}_{v}=|1-R(t^{*})|\). For each pair \((g,\epsilon)\), where \(g=0.01,\ldots,1\) and \(\epsilon=0,\ldots,1\), we repeat the above procedure 500 times and average the error obtained in step 3. Additionally, we deem an approximation a _success_ if the error is smaller than \(\epsilon\). The two figures in Figure 16 display, for each pair \((g,\epsilon)\), the average error and number of successes. As indicated by the experiments summarized in Figure 16, a small gap \(g\) and a large \(\epsilon\) produce the largest errors with the fewest successes, as expected. Interestingly, a large gap and a small \(\epsilon\), corresponding to the upper left corner of the figures, also impact the effectiveness of the heuristic. The large gap size means that \(w_{2},\ldots,w_{5}\) are all chosen within a small interval \([0,1-g]\), and we suspect that this cluster behaves like the value \(\frac{1-g}{2}\) appearing with high multiplicity. Additionally, the small \(\epsilon\) value means it is difficult to achieve a "success" by having absolute error less than \(\epsilon\). For these noisy experiments, there are two natural interpretations of what should be considered \(N\) and \(\mu\). The first is that \(N\) should be 5, the number of true measurements, whereas \(\mu\) should be 1. The second interpretation is that \(N\) should be \(20\cdot 5=100\), the true length of \(v\), while \(\mu\) should be 20, the size of the top cluster. In the \(R_{v}(t)\) case, these distinctions cancel out in the bound provided by Theorem 3.3. In the \(L_{v}(t)\) case, however, these interpretations give drastically different bounds when applying Theorem 3.1. The latter interpretation often yields such enormous \(t\) bounds that an application of that result is not useful. In Figure 17, we display the results of an experiment using the former interpretation. The \(L_{v}(t)\) approximation with the interpreted bound from Theorem 3.1 did not achieve the expected accuracy. This suggests that the latter interpretation, despite its lack of utility, is likely more appropriate. Our experiments also showcase the advantage of using the \(R_{v}(t)\) approximation over \(L_{v}(t)\), especially in noisy situations with high (approximate) multiplicity. Figure 16. For each \((g,\epsilon)\) and for 500 tries, (left) average error of \(R_{v}(t^{*})\) for \(t^{*}\) from Theorem 3.3 (right) number of approximations \(R_{v}(t^{*})\) within \(\epsilon\) for \(t^{*}\) from Theorem 3.3. ## 6. Max-convolution and applications One way to use smooth approximations of the maximum function is to approximate the max-convolution of two vectors [12]. To that end, consider two integer vectors \(a=(a_{0},\ldots,a_{n})\) and \(b=(b_{0},\ldots,b_{n})\). The classical convolution problem asks to determine the vector of convolution coefficients \(a\star b\) where \[(a\star b)_{k}=\sum_{i=0}^{k}a_{i}\cdot b_{k-i}=\sum_{i=\max(0,k-n)}^{\min(k,n)}a_{i}\cdot b_{k-i}. \tag{19}\] We remark that the middle description is sufficient if one takes \(a_{i}=0\) and \(b_{i}=0\) whenever they are undefined. With the same input, the problem of max-convolution, **MAXCON**, asks for the vector \(c\) of max-convolution coefficients, where \[c_{k}=\max_{\max(0,k-n)\leq i\leq\min(k,n)}(a_{i}+b_{k-i}). \tag{20}\] These coefficients can be obtained via (19) by replacing the operations \((\cdot,+)\) with \((+,\max)\), respectively. 
The form \(c_{k}=\max_{0\leq i\leq n}(a_{i}+b_{k-i})\) may be used if one replaces undefined \(a_{i}\) and \(b_{i}\) with \(-\infty\). Equivalently, through constructing \[A_{t}(x)=\sum_{i=0}^{n}t^{a_{i}}x^{i},\qquad B_{t}(x)=\sum_{i=0}^{n}t^{b_{i}}x^{i}\in\mathbb{Q}(t)[x],\] the problem of **MAXCON** asks for the largest exponents in \(t\) appearing in the coefficients of \[A_{t}(x)\cdot B_{t}(x)=\sum_{k=0}^{2n}\sum_{i=0}^{k}t^{a_{i}+b_{k-i}}x^{k}.\] Setting \(v^{(k)}=((a_{i}+b_{k-i})\ |\ \max(0,k-n)\leq i\leq\min(k,n))\), we rewrite this as \[A_{t}(x)\cdot B_{t}(x)=\sum_{k=0}^{2n}F_{v^{(k)}}(t)x^{k}.\] For fixed \(t\), the values of \(F_{v^{(k)}}(t)\) are _classical_ convolution coefficients \[((t^{a_{0}},\ldots,t^{a_{n}})\star(t^{b_{0}},\ldots,t^{b_{n}}))_{k}\] which can be computed with \(O(n\log(n))\) operations, e.g., using the fast Fourier transform (FFT) [9]. Applying \(\log_{t}\) provides an \(O(n\log(n))\) routine for evaluating \(L_{v^{(k)}}(t)\), whereby with Theorem 3.1, this process computes \(\max(v^{(k)})=c_{k}\) when evaluated at a sufficiently large value of \(t\). We summarize this discussion in the following quasi-linear time algorithm. Figure 17. For each \((g,\epsilon)\), over 500 tries, the average error of \(L_{v}(t^{*})\) from Theorem 3.1. **Example 6.1**.: Consider applying Algorithm 1 to compute the max-convolution coefficients of the vectors \[a=(3,1,2,4,1,2),\qquad b=(5,3,0,4).\] Taking \(t^{*}=6\), we obtain \[\alpha=(216,6,36,1296,6,36),\qquad\beta=(7776,216,1,1296).\] The classical convolution of \(\alpha\) and \(\beta\) is \[\ell(6)=\alpha\star\beta=(1679620,93312,281448,10365400,334404,329184,1687400,7812,46656)\] which under \(\log_{6}\) evaluates to \[\log_{6}(\ell(6))=(8,6.38685,7.00301,9.01571,7.09923,7.09045,8.00258,5.00258,6).\] Finally, by applying \(\lfloor\cdot\rfloor\), we obtain the max-convolution coefficients \[c=(8,6,7,9,7,7,8,5,6).\] For completeness and interpretation of \(c\), we provide \(A_{t}(x),B_{t}(x)\in\mathbb{Z}[t][x]\) below along with their product: \[A_{t}(x)=t^{3}x^{0}+t^{1}x^{1}+t^{2}x^{2}+t^{4}x^{3}+t^{1}x^{4}+t^{2}x^{5},\qquad B_{t}(x)=t^{5}x^{0}+t^{3}x^{1}+t^{0}x^{2}+t^{4}x^{3},\] \[A_{t}(x)\cdot B_{t}(x)=t^{8}x^{0}+2t^{6}x^{1}+(t^{7}+t^{4}+t^{3})x^{2}+(t^{9}+t^{7}+t^{5}+t)x^{3}+(t^{7}+t^{6}+t^{5}+t^{2})x^{4}\] \[+(t^{7}+t^{6}+2t^{4})x^{5}+(t^{8}+t^{5}+t)x^{6}+(t^{5}+t^{2})x^{7}+t^{6}x^{8}.\] The key to Algorithm 1 lies in the ability to evaluate \(F_{v^{(k)}}(t)\) via the fast Fourier transform in \(O(n\log(n))\) by interpreting these values as classical convolution coefficients. A subsequent application of \(\log_{t}\) turns this into an evaluation of \(L_{v^{(k)}}(t)=\log_{t}(F_{v^{(k)}}(t))\) whereby one may apply Theorem 3.1. Similarly, with the aim to apply Theorem 3.6, one may use the approximation of Proposition 2.6 to wrap the ability to evaluate \(F_{v^{(k)}}(t)\) into an algorithm which may require a smaller \(t\) evaluation. ```
Input: Two integer vectors \(a=(a_{0},\dots,a_{n})\) and \(b=(b_{0},\dots,b_{m})\). 
Output: The max-convolution coefficients \(c=(c_{0},\dots,c_{n+m})\)
1 Choose \(t^{*}\) and \(\alpha^{*}\) satisfying the bounds of Theorem 3.6 (e.g., \(\alpha^{*}>1\) and \(t^{*}>\max(e,n-1,m-1)\))
2 Compute \(\alpha=((t^{*})^{a_{0}},\dots,(t^{*})^{a_{n}})\) and \(\beta=((t^{*})^{b_{0}},\dots,(t^{*})^{b_{m}})\)
3 Compute \(\alpha^{\prime}=((\alpha^{*}t^{*})^{a_{0}},\dots,(\alpha^{*}t^{*})^{a_{n}})\) and \(\beta^{\prime}=((\alpha^{*}t^{*})^{b_{0}},\dots,(\alpha^{*}t^{*})^{b_{m}})\)
4 Compute \(\ell(t^{*})=\alpha\star\beta\)
5 Compute \(\ell^{\prime}(t^{*})=\alpha^{\prime}\star\beta^{\prime}\) using FFT
6 Apply \(\log_{\alpha^{*}}\left(\ell^{\prime}(t^{*})/\ell(t^{*})\right)\) component-wise to obtain \(c=\lceil\log_{\alpha^{*}}\left(\ell^{\prime}(t^{*})/\ell(t^{*})\right)\rceil\)
return \(c\)
```
**Algorithm 2** MaxCon - using \(D_{v^{(k)}}(t^{*},\alpha^{*})\)
**Remark 6.2**.: Algorithms 1 and 2, paired with their corresponding bounds from Section 3, give algorithms whose outputs constitute mathematical proofs provided that the convolution coefficients computed via FFT are exact. Otherwise, the error introduced by the \(\star\) operation must be bounded by \(1/2\), \(\delta\) should be taken to be at most \(1/2\) in the relevant theorems, and a two-sided rounding procedure should be applied rather than the ceiling or floor function. **Example 6.3**.: We apply Algorithm 2 to the vectors in Example 6.1 using \(t^{*}=6\) and \(\alpha^{*}=e\). The values of \(\ell(6)\) and \(\ell^{\prime}(6)=\ell(e\cdot 6)\) are \[\ell(6)=(1679620,93312,281448,10365400,334404,329184,1687400,7812,46656),\] \[\ell^{\prime}(6)=(5006864730.36308,3764747.57839,307062197.51625,81968557661.46344,\] \[326963800.35823,325950992.03191,5008018807.39795,1154326.73120,18822373.78920)\] so that \(c=\lceil\log(\ell^{\prime}(6)/\ell(6))\rceil\) gives \[c=\lceil(8.0,6.0,6.99486,8.97562,6.88525,6.89789,7.99561,4.99561,6.0)\rceil=(8,6,7,9,7,7,8,5,6).\] We remark that if we use \(t^{*}=2\) and \(\alpha^{*}=1.05\), which do not necessarily meet the bounds of Theorem 3.6, we obtain the following results: \[\ell(2)=(256,128,152,674,228,224,290,36,64),\] \[\ell^{\prime}(2)=\ell(1.05\cdot 2)=(378.22859,171.53224,208.81795,1017.32991,311.12599,\] \[304.77118,421.16960,45.25101,85.76612),\] \[\log_{1.05}(\ell^{\prime}(2)/\ell(2))=(8.0,6.0,6.50915,8.43831,6.37121,6.31101,7.64815,4.68754,6.0).\] A final application of \(\lceil\cdot\rceil\) obtains the correct integers \(c\) for the max-convolution coefficients of \(a\) and \(b\). We conclude with two applications of max-convolution. ### Maximum Consecutive Subsums Problem Given a single vector \(v=(v_{1},\ldots,v_{n})\in\mathbb{R}^{n}\), the problem of determining the largest consecutive sum \(\sum_{i=1}^{k}v_{j_{k}+i}\) for each \(k=1,\ldots,n\) is known as the _Maximum Consecutive Subsums Problem_ (**MCSP**). As outlined in [4, § 7.1], **MCSP** directly reduces to an instance of **MAXCON** as follows. Taking \(a,b\in\mathbb{R}^{n}\) to be \(a_{k}=-\sum_{i=1}^{k}v_{i}\) and \(b_{n-k}=\sum_{i=1}^{k}v_{i}\), the max-convolution coefficient \(c_{n-k}\) describes the largest sum of \(k\) consecutive entries of \(v\). 
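To make this reduction concrete, the sketch below is our own minimal Python implementation: a max-convolution routine in the spirit of Algorithm 1 (one classical convolution of \((t^{a_{i}})\) and \((t^{b_{j}})\) followed by \(\lfloor\log_{t}\cdot\rfloor\), valid for integer-valued inputs with \(t\) above the Section 3 bounds), applied to the **MCSP** reduction just described. The function names, the default choice of \(t\), and the small rounding guard are ours; `numpy.convolve` is a direct convolution, so an FFT-based routine (e.g. `scipy.signal.fftconvolve`) would be used to obtain the quasi-linear running time. We also include the empty prefix sum \(0\) in both vectors so that windows anchored at the first entry of \(v\) are covered.

```python
# Our own sketch of the log_t convolution trick and the MCSP reduction above.
import numpy as np

def maxconv(a, b, t=None):
    """(max,+) convolution of integer-valued vectors a, b via one classical
    convolution of (t^{a_i}) and (t^{b_j}) followed by floor(log_t(.))."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if t is None:
        t = float(max(np.e, len(a), len(b))) + 1.0   # conservative, cf. the Section 3 bounds
    ell = np.convolve(t ** a, t ** b)                # F_{v^(k)}(t) for every k
    # floor of L_{v^(k)}(t); the tiny guard protects against floating-point
    # round-off in the convolution (cf. Remark 6.2).
    return np.floor(np.log(ell) / np.log(t) + 1e-9).astype(int)

def mcsp(v):
    """Largest sum of k consecutive entries of an integer vector v, k = 1..len(v)."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    P = np.concatenate(([0.0], np.cumsum(v)))   # P[k] = v_1 + ... + v_k, P[0] = 0
    a = -P                                      # a_k = -(v_1 + ... + v_k)
    b = P[::-1]                                 # b_{n-k} = v_1 + ... + v_k
    c = maxconv(a, b)
    return {k: int(c[n - k]) for k in range(1, n + 1)}

print(mcsp([2, -1, 3, -4, 5]))   # -> {1: 5, 2: 2, 3: 4, 4: 3, 5: 5}
```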
**Example 6.4**.: For \(v=(1,4,2,3,8,1,1,5,6,7,5)\in\mathbb{Z}^{11}\), we have \[a =(-1,-5,-7,-10,-18,-19,-20,-25,-31,-38,-43),\] \[b =(43,38,31,25,20,19,18,10,7,5,1).\] The max-convolution of \(a\) and \(b\) is \[c=(42,38,36,33,28,24,23,18,13,8,0,-1,-2,\ldots).\] For example, this shows that the largest sum of \(2\) and \(5\) consecutive entries of \(v\) is \(13\) and \(24\) obtained by \(6+7\) and \(1+5+6+7+5\), respectively. By convention, one may choose to prepend \(c_{0}=\sum_{i=1}^{n}v_{i}\) to \(c\) so as to include the subsum of \(n\) consecutive integers in the output as well. We remark that even though Algorithms 1 and 2 are written for integer input, the algorithms work for floating point input as well, subject to different bounds (see Theorems 3.1 and 3.6). When using Algorithm 1, the output is an upper bound for the true max-convolution coefficients, subject to any error introduced by the FFT subroutine. Figure 18 shows the magnitude of error on the \(100\) outputs of a random MCSP problem on a vector \(v\in[0,1]^{n}\) for \(n=100\) and \(n=1000\) with each coordinate selected uniformly at random. ### Service Curve Constraints Convolution algorithms are integral in network calculus where systems model the data flow between networks [14]. The incoming data is described by a monotonic input function \(R(T)\), given in bits per second. The outgoing data (after a time delay) is described by the output function, \(R^{*}(T)\), also in bits per second. The function \(R^{*}(T)\) is constrained by _service_ constraints that state for any window of time, additional data outputted is bounded. The curves formed by these constraints are the result of a min-convolution between the service curve and input function \(R(T)\)[14]. That is, a system with an input function \(R(T)\) has an output function \(R^{*}(T)\) that will lie in the area bounded below by a service curve \(\beta(T)\) and above by a maximum service curve \(\gamma(T)\) such that \[\inf_{s\leq T}\{R(T)+\beta(T-s)\}\leq R^{*}(T)\leq R(s)+\gamma(T-s),\ \ \ \ s\leq T \tag{21}\] Note that the input and output functions admit no subscript to avoid confusion with the ratio function \(R_{v}(t)\). Additionally, we define the time variable for the service curve to be \(T\) rather than \(t\) which is the variable base used for the **MAXCON** algorithms. Although the prior focus was on the maximum, it is very simple to reformulate everything to instead compute the minimum. That is, we take \(t\to\infty\) when converging to the maximum while one can take \(t\to 0^{+}\) to converge to the minimum. As an example, we sought to recreate [14, Fig. 5.1] using Algorithm 2 to compute the discrete min-convolution of the input function and service curves. To complete this recreation, we first fit a polynomial curve to \(R(T)\) from that plot. In particular, we used the fitted sextic polynomial \[R(T)=1.6738T-0.7492T^{2}-0.08694T^{3}+0.1085T^{4}-0.01101T^{5}-0.001579T^{6}+0.0 002085T^{7}.\] The service curves are defined as \(\beta(T)=T\) and \(\gamma(T)=\begin{cases}0&\text{if }T\leq 3\\ T-3&\text{if }T>3,\end{cases}\) which corresponds with a 3 second time delay. To create a discrete problem, we evaluated these functions at equally spaced points. We apply Algorithm 2 in the floating-point case, i.e., without rounding, for the computations with \(\alpha=1.01\) and \(t=\left(\frac{1}{(n-1)}\right)^{\frac{1}{0.04}}\). 
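For reference, the exact values of the discretized bounds in (21) can also be obtained by a brute-force min-plus convolution, which is how the exact minima used for comparison below are computed. The sketch that follows is our own illustration: the coefficient list reproduces the fitted polynomial \(R(T)\) above, \(\beta\) and \(\gamma\) encode the 3 second delay, the time window and variable names are placeholders, and the standard network-calculus form \(\inf_{0\leq s\leq T}\{R(s)+\beta(T-s)\}\) is used for the min-convolution.

```python
# Our own brute-force evaluation of the discretized bounds in (21).
import numpy as np

def R(T):
    """Fitted input function R(T) from the text (constant term zero)."""
    c = [0.0, 1.6738, -0.7492, -0.08694, 0.1085, -0.01101, -0.001579, 0.0002085]
    return sum(ci * T ** i for i, ci in enumerate(c))

beta = lambda T: T                                  # service curve
gamma = lambda T: np.where(T <= 3, 0.0, T - 3)      # maximum service curve, 3 s delay

def min_plus(f_vals, g_vals):
    """Discrete min-plus convolution on a uniform grid starting at 0:
    out[k] = min_{0 <= i <= k} (f_vals[i] + g_vals[k - i])."""
    return np.array([min(f_vals[i] + g_vals[k - i] for i in range(k + 1))
                     for k in range(len(f_vals))])

N = 100                              # 10 or 100 discretization points, as in the text
T = np.linspace(0.0, 10.0, N)        # placeholder time window
lower = min_plus(R(T), beta(T))      # lower bound on R*(T) in (21)
upper = min_plus(R(T), gamma(T))     # upper bound on R*(T) in (21)
```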
Figure 19 shows the results of our min-convolution using 10 and 100 discretized points to compute the corresponding bounds on \(R^{*}(T)\) given in (21). Figure 18. Heatmap of the absolute errors of the numerical computation of **MCSP**. The numerical computations were performed on a vector of length 100 (top) and 1000 (bottom) with floating-point entries uniformly chosen in \([0,1]\). The vertical axis indicates the output error on subsums of length \(k\) after evaluating at \(\exp(t)\) (horizontal axis). One can see that even with numerical discretion, the resulting curves in Figure 19 exhibit satisfactory behavior in recreating the bounds of [14, Fig. 5.1]. Even for 10 discretization points, the computation captures the essential behavior of the convolution. Furthermore, using 100 discretization points better captures sharp transitions as well as flatter regions of the service curve bounds. Note that this experiment employs two separate convolutions. We compared our estimated values to the actual minimums computed via brute force, i.e., we computed the exact minima by computing the bounds given in Equation 21 for each \(T_{i}\), \(i=1,\ldots,N\) where N is the number of discretized points. Figures 20 and 21 display the error between the computed points via Algorithm 2 and the actual points for the min-convolution between \(R(T)\) and \(\beta(T)\), and \(R(T)\) and \(\gamma(T)\), respectively. When using a time delay of 3 seconds, non-zero values first occur when \(T>3\) resulting in nearly zero error before then. Both discretizations have errors on the order of \(10^{-3}\) or smaller. Additionally, the errors are nonnegative which highlights that our method slightly overestimates the values. Tables 1 and 2 compare the time in seconds for one application of Algorithm 2 and a brute force computation between \(R(T)\) and \(\beta(T)\), and \(R(T)\) and \(\gamma(T)\), respectively, using a single processor. The difference Figure 19. Discretized service curves computed using Algorithm 2. The curve \(R(T)\) and service constraints \(\gamma(T)\) and \(\beta(T)\) are discretized into 10 (top) and 100 (bottom) equally spaced points. between quasi-linear and quadratic time algorithms becomes apparent as \(n\) grows. Note that utilizing Algorithm 1 produced similar error and computational time. \begin{table} \begin{tabular}{|r|r|r|} \hline Number of Discrete Points & \(D_{v}(t,\alpha))\) & Brute Force \\ \hline 10 & 0.0003841 & 0.002666 \\ 50 & 0.0003392 & 0.006488 \\ 100 & 0.0005358 & 0.01067 \\ 500 & 0.001053 & 0.04643 \\ 1000 & 0.001647 & 0.1114 \\ 5000 & 0.01183 & 1.3957 \\ 10000 & 0.02939 & 10.5240 \\ 100000 & 1.1738 & 1546.2364 \\ \hline \end{tabular} \end{table} Table 1. Time, in seconds, to calculate the min-convolution between \(R(T)\) and \(\beta(T)\). Figure 21. Plots error of the computed points minus the actual points of the minimum convolution between the lines \(R(T)\) and \(\gamma(T)\) for (left) 10 and (right) 100 discretized points. Figure 20. Plots error of the computed points minus the actual points of the minimum convolution between the lines \(R(T)\) and \(\beta(T)\) for (left) 10 and (right) 100 discretized points.
2301.04812
A Two-limb Explanation for the Optical-to-infrared Transmission Spectrum of the Hot Jupiter HAT-P-32Ab
We present a new optical transmission spectrum of the hot Jupiter HAT-P-32Ab acquired with the Carnegie Observatories Spectrograph and Multiobject Imaging Camera (COSMIC) on the Palomar 200 inch Hale Telescope (P200). The P200/COSMIC transmission spectrum, covering a wavelength range of 3990--9390 \AA, is composed of 25 spectrophotometric bins with widths ranging from 200 to 400 \AA and consistent with previous transit measurements obtained in the common wavelength range. We derive a combined optical transmission spectrum based on measurements from five independent instruments, which, along with the 1.1--1.7 $\mu$m spectrum acquired by the Hubble Space Telescope and two Spitzer measurements, exhibits an enhanced scattering slope blueward of a relatively flat optical continuum, a water absorption feature at 1.4 $\mu$m, and a carbon dioxide feature at 4.4 $\mu$m. We perform Bayesian spectral retrieval analyses on the 0.3--5.1 $\mu$m transmission spectrum and find that it can be well explained by a two-limb approximation of $134^{+45}_{-33}\times$ solar metallicity, with a strongly hazy morning limb of $1134^{+232}_{-194}$ K and a haze-free evening limb of $1516^{+33}_{-44}$~K. This makes HAT-P-32Ab a promising target for James Webb Space Telescope to look for asymmetric signatures directly in the light curves.
Xin-Kai Li, Guo Chen, Hai-Bin Zhao, Hong-Chi Wang
2023-01-12T04:49:49Z
http://arxiv.org/abs/2301.04812v1
A Two-limb Explanation for the Optical-to-infrared Transmission Spectrum of the Hot Jupiter HAT-P-32Ab ###### Abstract We present a new optical transmission spectrum of the hot Jupiter HAT-P-32Ab acquired with the Carnegie Observatories Spectrograph and Multiobject Imaging Camera (COSMIC) on the Palomar 200 inch Hale Telescope (P200). The P200/COSMIC transmission spectrum, covering a wavelength range of 3990-9390 A, is composed of 25 spectrophotometric bins with widths ranging from 200 to 400 A and consistent with previous transit measurements obtained in the common wavelength range. We derive a combined optical transmission spectrum based on measurements from five independent instruments, which, along with the 1.1-1.7 \(\mu\)m spectrum acquired by the Hubble Space Telescope and two Spitzer measurements, exhibits an enhanced scattering slope blueward of a relatively flat optical continuum, a water absorption feature at 1.4 \(\mu\)m, and a carbon dioxide feature at 4.4 \(\mu\)m. We perform Bayesian spectral retrieval analyses on the 0.3-5.1 \(\mu\)m transmission spectrum and find that it can be well explained by a two-limb approximation of \(134^{+45}_{-33}\times\) solar metallicity, with a strongly hazy morning limb of \(1134^{+232}_{-194}\) K and a haze-free evening limb of \(1516^{+33}_{-44}\) K. This makes HAT-P-32Ab a promising target for James Webb Space Telescope to look for asymmetric signatures directly in the light curves. techniques: spectroscopic -- planets and satellites: atmospheres -- planets and satellites: individual (HAT-P-32Ab) Vol.0 (20xx) No.0, 000-000 ## 1 Introduction The study of exoplanets is one of the fastest growing sub-disciplines in astronomy and planetary science. Observations and studies of exoplanet atmospheres have sprung up, and we have now discovered over 5200 exoplanets, among which over 3900 were discovered by the transit method (according to NASA Exoplanet Archive 1, as of 2022 November). During a transit, some of the stellar light will pass through the optically thin part of the planetary atmosphere, resulting in wavelength-dependent planetary radii with potential imprints of absorption and scattering features of planetary atmosphere at the terminator (Seager & Sasselov, 2000). It is feasible to retrieve the atmospheric properties from the observed transmission spectrum under certain model assumptions, e.g., line-by-line radiative transfer 1D model with parameterized temperature structure, chemical compositions, and clouds or hazes properties (Madhusudhan & Seager, 2009). Footnote 1: [https://exoplanetarchive.ipac.caltech.edu/](https://exoplanetarchive.ipac.caltech.edu/) Close-in hot Jupiters are the most favorable targets for transmission spectroscopy with current instrumentation, which are gas giants with high temperatures, short orbital periods and extended atmospheres. The high atmospheric temperatures of hot Jupiters make them fantastic laboratories for unveiling the chemical abundances of giant planets. Through the analysis of the transmission spectrum of hot Jupiters, various species have been identified (Madhusudhan, 2019), such as atomic metals including Na and K in the optical wavelengths due to their particularly prominent features at \(\sim\)589 and \(\sim\)768 nm (e.g., Nikolov et al., 2018; Chen et al., 2018), and water vapor in the near infrared wavelengths for a water absorption feature centered at 1.4 \(\mu\)m (e.g., Deming et al., 2013). 
Among species found in the hottest hot Jupiters, gaseous TiO and VO may drive a temperature inversion in these planets (Hubeny et al., 2003). The role of TiO/VO in these hottest atmospheres is still not clear, which could be depleted due to mechanisms such as deep-atmosphere or nightside cold trap, gravitational settling, photodissociation, thermal dissociation, and high C/O chemistry (Showman et al., 2009; Spiegel et al., 2009; Knutson et al., 2010; Madhusudhan, 2012; Parmentier et al., 2013, 2018). The 3D global circulation models have been used to investigate the compositions, distributions, and formation of clouds and how they shape the transmission and emission spectra (Parmentier et al., 2016; Helling et al., 2019, 2021). Potential observational evidence for asymmetries resulting from atmospheric circulation has started to emerge through high-resolution Doppler spectroscopy (Ehrenreich et al., 2020; van Sluijs et al., 2022; Cont et al., 2022). One target of special interest is the highly inflated hot Jupiter HAT-P-32Ab, transiting a late-F-type star with a period of 2.15 days at a distance of 0.0343 AU discovered by Hartman et al. (2011). The planet has a mass of \(0.585\pm 0.031M_{\rm Jup}\), a radius of \(1.789\pm 0.025R_{\rm Jup}\), and an equilibrium temperature of \(1801\pm 18\) K, while the host star has a mass of \(1.160\pm 0.041M_{\odot}\), a radius of \(1.219\pm 0.016R_{\odot}\), and an effective temperature of \(6269\pm 64\) K (Hartman et al., 2011; Czesla et al., 2022). There is a resolved M1.5V companion, HAT-P-32B, at an angular separation of \(2\farcs 9\)(Adams et al., 2013). HAT-P-32Ab is one of the best targets for transmission spectroscopy because of its large transit depth of more than 2%, its large atmospheric scale height of about 1500 km, and a relatively bright host star (\(V=11.4\) mag). Several observational studies have been conducted to reveal the atmospheric property of HAT-P-32Ab. Ground-based binned spectrophotometry, from GMOS on Gemini North telescope (Gemini-N/GMOS) (Gibson et al., 2013), MODS on Large Binocular Telescope (LBT/MODS) (Mallonn & Strassmeier, 2016), and OSIRIS on Gran Telescopio Canarias (GTC/OSIRIS) (Nortmann et al., 2016), and multi-color broad-band photometry (Mallonn et al., 2016; Mallonn & Wakeford, 2017; Tregloan-Reed et al., 2018) all come to similar conclusions that HAT-P-32Ab has a flat featureless optical transmission spectrum, with a possible scattering slope at the blue-optical, indicative of a bimodal cloud distribution that consists of a Rayleigh-like haze and a gray cloud deck. The space observations, carried out with STIS and WFC3 on Hubble Space Telescope (HST), not only confirmed the enhanced Rayleigh scattering and the thick cloud deck (Alam et al., 2020), but also revealed the presence of a water absorption feature at \(\sim\)1.4 \(\mu\)m (Damiano et al., 2017; Alam et al., 2020). However, the full optical-to-infrared retrieval analysis performed by Alam et al. (2020) cannot account for the shallower transit depths measured by Spitzer at 3.6 and 4.5 \(\mu\)m. In addition to transmission spectrum, dayside emission spectrum has also been acquired using ground-based \(H\)- and \(K_{S}\)-band photometry, Spitzer 3.6 and 4.5 \(\mu\)m photometry, and HST/WFC3 spectroscopy (Zhao et al., 2014; Nikolov et al., 2018), which shows no evidence of the \(\sim\)1.4 \(\mu\)m water feature but agrees with an isothermal atmosphere of \(1995\pm 17\) K or an atmosphere with a modest thermal inversion. 
In this work, we present a new optical transmission spectrum of HAT-P-32Ab observed by the Carnegie Observatories Spectrograph and Multiobject Imaging Camera (COSMIC; Kells et al., 1998) on the Palomar 200 inch Hale Telescope (P200) and attempt to reconcile the existing discrepancy between data and model in the infrared. In Section 2, we introduce the details of the observation and data reduction processes. In Section 3, we present the analyses on the white and spectroscopic light curves. In Section 4, we perform the Bayesian spectral retrieval analyses on HAT-P-32Ab's transmission spectrum. Finally, we discuss the implications of the retrieval results in Section 5 and draw conclusions in Section 6. ## 2 Observations and Data Reduction We obtained a transit time series of HAT-P-32Ab on the night of 2013 October 10 from 07:35 UT to 12:21 UT. The transit was observed with COSMIC installed at the prime focus of P200 located atop Palomar Mountain in north San Diego County, California. The spectroscopic mode of COSMIC has a field of view of \(13\farcm 65\times 13\farcm 65\), equipped with a thinned, back-illuminated SITe \(2048\times 2048\) CCD (\(0\farcs 4\) per pixel). A long-slit mask with a slit width of 12\(\arcsec\) was created to simultaneously monitor the flux of HAT-P-32A (\(V=11.3\) mag) and the reference star TYC 3281-957-1 (\(V=11.0\) mag, \(3\farcm 2\) away). The 300 lines per mm grism was used to acquire the spectra, covering a wavelength range of 340-970 nm at a dispersion of \(\sim\)0.31 nm per pixel. An exposure time of 90 s was adopted, except for the first two taken with 60 and 120 s, resulting in a total of 85 frames. The long readout time of \(\sim\)117 s strongly reduced the duty cycle to 43%. The mercury arc lamp was observed through a longslit mask with a slit width of 2\(\arcsec\) for wavelength calibration. The raw spectral images were reduced following the methodology described in Chen et al. (2021) based on IRAF (Tody, 1993) and customized IDL scripts, including corrections for overscan, bias, flat fields, sky background, and cosmic rays. The 1D spectra were extracted using the optimal extraction algorithm (Horne, 1986) with an aperture diameter of 21 pixels (\(8\farcs 4\)), which minimized the scatter in the white-light curve. The white-light curve was created by summing the flux between 399 and 939 nm, while the spectroscopic light curves were created by binning the flux in a step of 20 nm, except for two 30 nm channels and one 40 nm channel at the longest wavelengths. The time stamp was extracted from the fits header and converted to Barycentric Julian Dates in Barycentric Dynamical Time (BJD\({}_{\rm TDB}\); Eastman et al., 2010). ## 3 Light-Curve Analysis ### White-light curve To model the transit light curve, we used the Python package batman(Kreidberg, 2015) configured with the quadratic limb-darkening law, which implements the analytic formalism from Mandel & Agol (2002). A circular orbit was assumed. The transit model \(\mathcal{M}\) was parameterized by orbital period (\(P\); fixed to 2.15000815 days from Fowler et al., 2021), orbital inclination (\(i\)), scaled semi-major axis (\(a/R_{\star}\)), radius ratio (\(R_{p}/R_{\star}\)), mid-transit time (\(T_{\rm mid}\)), and limb-darkening coefficients (\(u_{1}\) and \(u_{2}\)). Since the close companion HAT-P-32B was not spatially resolved in our COSMIC observation, we revised the transit model as \(\mathcal{M}^{\prime}=(\mathcal{M}+f_{d})/(1+f_{d})\) to account for its dilution. 
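For illustration, a minimal sketch of this diluted transit model built with the batman package is given below. It is not the fitting pipeline of this work: the numerical values are placeholders of the same order as the posteriors in Table 1, the dilution ratio \(f_{d}\) in particular is arbitrary here, and the time grid is schematic.

```python
# Minimal sketch (not the authors' pipeline) of the diluted transit model
# M' = (M + f_d) / (1 + f_d) described above, using the batman package.
import numpy as np
import batman

def diluted_transit_model(t, t0, per, rp, a, inc, u1, u2, f_d):
    """Quadratic limb-darkened transit light curve M, diluted by the unresolved
    companion with flux ratio f_d = F_B / F_A."""
    params = batman.TransitParams()
    params.t0 = t0              # mid-transit time [days]
    params.per = per            # orbital period [days]
    params.rp = rp              # planet-to-star radius ratio R_p/R_star
    params.a = a                # scaled semi-major axis a/R_star
    params.inc = inc            # orbital inclination [deg]
    params.ecc = 0.0            # circular orbit assumed
    params.w = 90.0             # argument of periastron [deg]
    params.limb_dark = "quadratic"
    params.u = [u1, u2]
    m = batman.TransitModel(params, t).light_curve(params)
    return (m + f_d) / (1.0 + f_d)

# Placeholder call (values only indicative; f_d is arbitrary here):
t = np.linspace(-0.1, 0.1, 200)
flux = diluted_transit_model(t, t0=0.0, per=2.15000815, rp=0.151, a=6.2,
                             inc=88.6, u1=0.29, u2=0.29, f_d=0.004)
```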
The dilution flux ratio \(f_{d}=F_{B}/F_{A}\) was calculated from the best-fit PHEONIX stellar template retrieved from the GTC/OSIRIS measurements presented by Nortmann et al. (2016) (see Table 1 in Appendix A). To account for the correlated systematic noise in the observed light curve, we used the Python package george(Ambikasaran et al., 2015) to implement the Gaussian processes (GPs; Rasmussen & Williams, 2006; \begin{table} \begin{tabular}{l c c} \hline \hline Parameter & Prior & Posterior Estimate \\ \hline \(P\) [days] & 2.15000815(fixed) & \(-\) \\ \(i\) [\({}^{\circ}\)] & \(\mathcal{U}(80,90)\) & \(88.57^{+0.81}_{-0.60}\) \\ \(a/R_{\star}\) & \(\mathcal{U}(3.0,9.0)\) & \(6.202^{+0.054}_{-0.069}\) \\ \(R_{p}/R_{\star}\) & \(\mathcal{U}(0.10,0.20)\) & \(0.1510^{+0.0011}_{-0.0011}\) \\ \(u_{1}\) & \(\mathcal{N}(0.3520,0.0340^{2})\) & \(0.286^{+0.021}_{-0.012}\) \\ \(u_{2}\) & \(\mathcal{N}(0.2991,0.0182^{2})\) & \(0.291^{+0.018}_{-0.018}\) \\ \(T_{\rm mid}\) [MJD\({}^{\rm a}\)] & \(\mathcal{U}(76.8852,76.9252)\) & \(76.90416^{+0.00011}_{-0.00011}\) \\ \(\sigma_{\rm w}\) [\(10^{-6}\)] & \(\mathcal{U}(0.1,5000)\) & \(287^{+30}_{-27}\) \\ \(\ln A\) & \(\mathcal{U}(-10,-1)\) & \(-5.0^{+1.3}_{-1.3}\) \\ \(\ln\tau_{1}\) & \(\mathcal{U}(-6,5)\) & \(0.0^{+1.21}_{-1.1}\) \\ \(\ln\tau_{2}\) & \(\mathcal{U}(-5,5)\) & \(2.7^{+1.1}_{-1.1}\) \\ \(c_{0}\) & \(\mathcal{N}(1.00587,0.00039^{2})\) & \(1.00587^{+0.00038}_{-0.00028}\) \\ \(c_{1}\) & \(\mathcal{N}(-0.0431,0.0018^{2})\) & \(-0.0431^{+0.0018}_{-0.0018}\) \\ \(c_{2}\) & \(\mathcal{N}(-0.400,0.054^{2})\) & \(-0.400^{+0.0018}_{-0.039}\) \\ \hline \hline \end{tabular} \({}^{a}\mathrm{MJD}=\mathrm{BJD}_{\rm TDB}-2,456,500\). \end{table} Table 1: Parameters Estimated from the White-light Curve Figure 1: White-light curve of HAT-P-32 observed by P200/COSMIC on the night of 2013 October 10. From top to bottom are (i) raw flux time series of HAT-P-32 and its reference star, (ii) raw white-light curve (i.e., normalized target-to-reference flux ratios), (iii) white-light curve corrected for systematics, and (iv) best-fit light-curve residuals. The best-fit models are shown in black. Figure 2: Inclination and semi-major axis derived in this work compared to those in the literature. Gibson et al., 2012). For the GP mean function, we adopted the transit model multiplied by a polynomial baseline, i.e., \(\mathcal{M}^{\prime}(c_{0}+c_{1}t+c_{2}t^{2})\). For the GP covariance matrix, we used the product of two Matern \(\nu=3/2\) kernels, with time (\(t\)) and spatial FWHM (\(s\)) as the input vectors, parameterized by an amplitude (\(A\)) and two characteristic length scales (\(\tau_{t}\) and \(\tau_{s}\)). To account for potential underestimation of white noise, a jitter parameter \(\sigma_{j}\) was added in the quadrature sum to the nominal flux uncertainties. To estimate the posterior distributions of the 13 free parameters (\(i\), \(a/R_{\star}\), \(T_{\rm mid}\), \(R_{p}/R_{\star}\), \(u_{1}\), \(u_{2}\), \(c_{0}\), \(c_{1}\), \(c_{2}\), \(A\), \(\tau_{t}\), \(\tau_{s}\), \(\sigma_{j}\)), we used the Python package emcee(Foreman-Mackey et al., 2013) to implement the affine invariant Markov Chain Monte Carlo (MCMC) ensemble sampler. In practice, the natural logarithmic values \(\ln A\), \(\ln\tau_{t}\), and \(\ln\tau_{s}\) were used in the MCMC process. A total of 32 walkers were initialized and two short chains of 2000 steps were used for the "burn-in" phase. 
The final production was created after running a long chain of 50,000 steps that were thinned by every ten steps. We adopted uniform priors for all the parameters except for the baseline polynomial coefficients and the limb-darkening coefficients, which were controlled by normal priors. For the baseline polynomial coefficients, a second-order polynomial function was fitted to the out-of-transit flux and the resulting best-fit values and uncertainties were adopted as the normal priors. For the limb-darkening coefficients, the prior mean and sigma values were calculated from the ATLAS stellar models using the code developed by Espinoza and Jordan (2015). The white light curve and best-fit model are shown in Figure 1. The best-fit light-curve residuals have a standard deviation of 270 ppm that is 3.6 times photon noise. The posteriors of free parameters are listed in Table 1. The derived transit parameters are in a broad agreement with those in the literature. Figure 2 presents the comparison for \(i\) and \(a/R_{\star}\) between this work and the other transmission spectroscopy studies along with the discovery paper. ### Spectroscopic light curves Previous studies (Alexoudi et al., 2018, 2020) found that the shape of transmission spectrum could vary with the adopted orbital parameters due to the impact parameter degeneracy. In the case of HAT-P-32Ab, Alexoudi et al. (2020) found negligible slope changes introduced by the impact parameter degeneracy. To compare with the results from HST, we adopted the nonlinear limb-darkening law, and fixed the four limb-darkening coefficients to the values interpolated from the Table 3 of Alam et al. (2020). We modeled each individual spectroscopic light curve using the same method as that of the white-light curve, with the number of free parameters being reduced to eight (\(R_{p}/R_{\star}\), \(c_{0}\), \(c_{1}\), \(c_{2}\), \(A\), \(\tau_{t}\), \(\tau_{s}\), \(\sigma_{j}\)) for each passband. We fixed \(i\) and \(a/R_{\star}\) to the values from Hartman et al. (2011) as Alam et al. (2020) did. We fixed \(T_{\rm mid}\) to the value derived from the white-light curve (see Table 1). Since there was no significant common-mode noise, we did not apply the widely adopted common-mode removal technique to avoid potential underestimation of transit depth uncertainties (Jiang et al., 2022). In the MCMC processes of the spectroscopic light curves, we also used 32 walkers and ran two short chains of 2000 steps for the "burn-in" phase, but created the final production after a long chain of 5000 steps without thinning. The adopted spectroscopic passbands are illustrated in the top panel of Figure 3, while the derived wavelength dependent planet-to-star radius ratios, i.e., the transmission spectrum, are shown in the bottom panel and listed in Table A.1. The spectroscopic light curves and best-fit models are shown in Figure 4. The standard deviations of the best-fit light-curve residuals for all the passbands are 0.8-2.4\(\times\) photon noise, with a median value at 1.3\(\times\). ### Transmission spectrum Our P200/COSMIC transmission spectrum is consistent with a flat line at a confidence level of 0.3\(\sigma\) (\(\chi^{2}=6.1\) for 24 degree of freedoms, hereafter dof). When compared to a sloped line in the form of \(a_{0}+a_{1}\ln\lambda\), we obtain \(\chi^{2}=6.0\) for 23 dof at a confidence level of 0.3\(\sigma\). We Figure 3: The top panel shows the stellar spectra of HAT-P-32 and its reference star, along with passbands used in this work marked in shaded colors. 
The bottom panel presents the transmission spectrum of HAT-P-32Ab acquired with P200/COSMIC, compared to a 1800 K 1\(\times\)solar cloud-free fiducial model with Na and K but without TiO and VO. further utilize the nested sampling algorithm (Feroz & Hobson, 2008) configured with 1000 live points to estimate the model evidence \(\mathcal{Z}\) for model comparison. The Python package PymultiNest(Buchner et al., 2014) is used to implement the MULTINEST library (Feroz et al., 2009). We interpret the model preference following the Trotta (2008) criteria, which categorizes the value of \(\ln B_{10}=\ln\mathcal{Z}_{1}-\ln\mathcal{Z}_{0}\) into inconclusive (\(|\ln B_{10}|<1\)), weak Figure 4: Spectroscopic light curves of HAT-P-32 acquired with P200/COSMIC. From left to right are the original ones, the ones after removing the systematic models, and corresponding best-fit light-curve residuals. The black solid lines show the best-fit models. (\(1\leq|\ln B_{10}|<2.5\)), moderate (\(2.5\leq|\ln B_{10}|<5\)), and strong (\(|\ln B_{10}|\geq 5\)). We find that the flat line has a model evidence higher than that of the sloped line (\(\Delta\ln\mathcal{Z}=2.4\)), indicating that it is not necessary to consider a non-flat model to explain the P200/COSMIC transmission spectrum. Since the optical transmission spectrum of HAT-P-32Ab has also been acquired by Gemini-N/GMOS (Gibson et al., 2013), LBT/MODS (Mallonn & Strassmeier, 2016), GTC/OSIRIS (Nortmann et al., 2016), and HST/STIS (Alam et al., 2020), we further investigate the consistency of transmission spectrum in the common wavelength range of different instruments. Figure 5 shows the comparison between these five sets of optical transmission spectra. We downsample the other transmission spectra to the P200/COSMIC passbands with uncertainties propagated, and estimate a difference of \(\chi^{2}=6.8\) (18 dof) for Gemini-N/GMOS, \(\chi^{2}=13.7\) (24 dof) for LBT/MODS, \(\chi^{2}=10.5\) (19 dof) for GTC/OSIRIS, \(\chi^{2}=8.3\) (24 dof) for HST/STIS when compared to P200/DBSP. This reveals a good consistency between our spectrum and the others. Given that different instruments might host specific systematics, we decide to combine the five spectra to alleviate the potential impact of specific systematics. We resample the other spectra onto the HST/STIS passbands and average these five spectra in the common passbands using the inverse square of uncertainties as weight. The combined optical transmission spectrum is presented in Table 10. It features an enhanced slope, driven by HST/STIS and LBT/MODS, at \(\lambda<4000\) A, and a relatively flat continuum at \(4000<\lambda<9000\) A in the common wavelength range of all five spectra. ## 4 Atmospheric Retrieval In order to explore the atmospheric properties of HAT-P-32Ab, we perform Bayesian spectral retrieval analyses on the combined optical transmission spectrum derived in Sect. 3.3 along with the HST/WFC3 and Spitzer measurements from Alam et al. (2020). We configure PLATON(Zhang et al., 2019, 2020) to implement forward modeling in the subsequent retrieval analyses, which conducts 1D radiative transfer to calculate the transmission spectrum of a hydrostatic atmosphere with equilibrium chemistry. The atmosphere is assumed to have an isothermal temperature of \(T_{\rm iso}\), which is divided into 100 layers, log-equally spaced from 10\({}^{3}\) to 10\({}^{-9}\) bar, with the reference planet radius (\(R_{\rm p,1bar}\)) set at 1 bar. 
The gas absorption, collisional absorption, and scattering absorption are taken into account. The gas abundances are obtained from the equilibrium chemistry abundance grid pre-calculated by GGChem(Woitke et al., 2018), which is a function of species name, temperature, pressure, metallicity (\(Z\)), and C/O ratio2. A total of 34 atomic and molecular species Figure 5: Transmission spectra of HAT-P-32Ab acquired by Gemini-N/GMOS (blue lower-triangles; Gibson et al., 2013), LBT/MODS (purple left-triangles; Mallonn & Strassmeier, 2016), GTC/OSIRIS (olive upper-triangles; Nortmann et al., 2016), HST/STIS (green squares; Alam et al., 2020), and P200/COSMIC (red diamonds). The combined optical transmission spectrum, i.e., weighted average of the five spectra resampled to the HST/STIS passbands, is shown in black circles. are considered, including: H, He, C, N, O, Na, K, H\({}_{2}\), H\({}_{2}\)O, CH\({}_{4}\), CO, CO\({}_{2}\), NH\({}_{3}\), N\({}_{2}\), O\({}_{2}\), O\({}_{3}\), NO\({}_{2}\), C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{4}\), H\({}_{2}\)CO, H\({}_{2}\)S, HCl, HCN, HF, MgH, OCS, OH, PH\({}_{3}\), SiH, SiO, SO\({}_{2}\), TiO, and VO. The gas opacities are calculated at a resolution of \(\lambda/\Delta\lambda=10,000\), with line lists coming from ExoMol (Tennyson & Yurchenko, 2018), HITRAN 2016 (Gordon et al., 2017), CDSD-4000 (Tashkun & Perevalov, 2011), Rey et al. (2017), and NIST. The collisional absorption coefficients are taken from HITRAN (Richard et al., 2012; Karman et al., 2019). The clouds are described as an optically thick cloud deck with a cloud-top pressure of \(P_{\rm cloud}\). The hazes are parameterized to have a Rayleigh-like scattering with a slope of \(\gamma\) at an amplitude of \(A_{\rm scatt}\), i.e., \(\sigma(\lambda)=A_{\rm scatt}\sigma_{\rm Rayleigh}\lambda^{\gamma}\). We adopt a planet mass of \(0.585M_{\rm J}\)(Czesla et al., 2022) in the forward model and use a stellar radius of \(1.225R_{\odot}\)(Tregloan-Reed et al., 2018) to obtain the transit depth. We consider three model hypotheses in the following subsections and perform the retrieval analyses in the Bayesian framework. We configure PymultiNest with 250 live points to implement the nested sampling algorithm to estimate the model evidence and to explore the posterior distributions of free parameters. As summarized in Table 2, we adopt uniform or log-uniform priors for all the parameters. ### Hypothesis 1: uniform limb with uniform clouds By default, PLATON assumes an isothermal 1D atmosphere with uniform clouds (hereafter Hypothesis 1). The forward model consists of seven free parameters: \(T_{\rm iso}\), \(\log P_{\rm cloud}\), \(\gamma\), \(\log A_{\rm scatt}\), C\(/\)O, \(R_{\rm 1bar}\), and \(\log Z\). From the retrieval of Hypothesis 1, we obtain an isothermal temperature of \(1130^{+139}_{-134}\) K, a subsolar C/O ratio of \(0.30^{+0.21}_{-0.17}\), and a supersolar metallicity of \(0.96^{+0.78}_{-0.38}\) dex that is slightly lower than the value retrieved from the HST and Spitzer measurements using the default setup of PLATON (Alam et al., 2020). Our scattering slope of \(-1.1^{+0.2}_{-0.4}\) is much shallower than the value of \(-9.0^{+1.0}_{-0.6}\) reported by Alam et al. (2020), while our scattering amplitude of \(10^{3.5\pm 0.5}\) is much stronger than their \(10^{1.0\pm 0.4}\). This is mainly owing to the fact that the combined optical transmission spectrum is flatter than the HST/STIS spectrum within the wavelength range of 0.4-0.9 \(\mu\)m. 
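For reference, the haze parameterization behind these slope and amplitude values can be sketched as follows; the reference wavelength and Rayleigh normalization below are illustrative assumptions rather than PLATON's internal conventions.

```python
import numpy as np

def haze_cross_section(wl_um, a_scatt, gamma, sigma_ray_ref=1.0, wl_ref_um=0.5):
    """Rayleigh-like haze: A_scatt times a reference Rayleigh cross-section,
    with a power-law wavelength slope gamma (gamma = -4 recovers a pure
    Rayleigh slope). sigma_ray_ref and wl_ref_um are placeholder values."""
    return a_scatt * sigma_ray_ref * (np.asarray(wl_um) / wl_ref_um) ** gamma
```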
However, we also note that the adopted priors and sampling algorithms might also introduce differences in the posterior estimates. While it is not clear what priors and sampling algorithms were used in Alam et al. (2020), the comparisons between parameters derived from our work and Alam et al. (2020) should be taken with a grain of salt. We retrieve a loosely constrained cloud-top pressure with a 90% lower limit of \(\sim\)2.6 mbar. The joint posterior distributions of all the free parameters are presented in Figure A.1. The retrieved maximum a posterior (MAP) model can fit all the data at a 2.4\(\sigma\) confidence level (\(\chi^{2}_{\rm MAP}=66.7\) for 44 dof). As shown in the top panel of Figure 6, the retrieved models of Hypothesis 1 agree well with most of the data within 0.4-0.9 and 1.1-1.7 \(\mu\)m, but fail to fit a few data points, in particular those at the blue end and those in the mid infrared. The blue end shows an enhanced slope toward shorter wavelengths. The data point at 0.985 \(\mu\)m could be an outlier of the overall spectral shape given that the neighboring wavelength range of 0.5-0.9 \(\mu\)m is the average spectrum from five independent instruments while the one at 0.985 \(\mu\)m comes from the average of only two. In the mid infrared, the two Spitzer data points show clear offsets from the model predictions, and are also deviating from the overall optical-to-infrared trend. ### Hypothesis 2: uniform limb with patchy clouds Three dimensional general circulation models (GCM) have predicted that the atmospheric circulation of hot Jupiters could induce asymmetries in temperature structure, chemistry, and clouds between the morning and evening limbs (e.g., Showman et al., 2009; Parmentier et al., 2016; Helling et al., 2019), resulting in different spectral signatures in the limb transmission spectra (Fortney et al., 2010). While it is possible to directly measure such asymmetries in the light curves if ultra-high photometric precision can be achieved (von Paris et al., 2016; Powell et al., 2019; Espinoza & Jones, 2021), we seek solutions through approximating the atmosphere by linear combinations of multi-sector 1D atmospheric models (Line & Parmentier, 2016; Kempton et al., 2017; Welbanks & Madhusudhan, 2022). In Hypothesis 2, we assume that the atmosphere is effectively composed of one clear sector and one cloudy sector, with \(\phi\) being the faction of cloud coverage. Following MacDonald & Madhusudhan (2017) and Welbanks & Madhusudhan (2021), the cloudy sector has a Rayleigh-like scattering haze above a gray cloud deck. This amounts to eight free parameters: \(T_{\rm iso}\), \(\log P_{\rm cloud}\), \(\gamma\), \(\log A_{\rm scatt}\), C\(/\)O, \(R_{\rm 1bar}\), \(\log Z\), and \(\phi\). From the retrieval of Hypothesis 2, we obtain a cloud coverage of \(61^{+5}_{-7}\) %, an isothermal temperature of \(1288^{+121}_{-119}\) K, a subsolar C/O ratio of \(0.29^{+0.18}_{-0.15}\), and a supersolar metallicity of \(1.89^{+0.25}_{-0.28}\) dex. For the haze property, the scattering slope of \(-3.0^{+0.8}_{-1.4}\) is slightly steeper, while the scattering amplitude increases significantly to \(10^{6.3^{+0.6}_{-0.8}}\). The cloud-top pressure is again loosely constrained, with a 90% lower limit of \(\sim\)0.1 mbar. The joint posterior distributions of all the free parameters are presented in Figure A.2. 
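The patchy-cloud forward model of this hypothesis amounts to a linear combination of the clear-sector and cloudy-sector spectra; a minimal sketch, assuming the combination is done in transit depth, is:

```python
import numpy as np

def patchy_cloud_depth(depth_clear, depth_cloudy, phi):
    """Effective spectrum for one clear and one cloudy sector, weighted by the
    cloud coverage fraction phi (e.g. phi ~ 0.61 as retrieved above)."""
    return phi * np.asarray(depth_cloudy) + (1.0 - phi) * np.asarray(depth_clear)
```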
Figure 6: Retrieved transmission spectra of HAT-P-32Ab from hypotheses of (i) uniform limb with uniform clouds (top panel), (ii) uniform limb with patchy clouds (middle panel), and (iii) two limbs with uniform clouds (bottom panel). The optical combined data are shown in circles, while the HST/WFC3 and Spitzer data from Alam et al. (2020) are shown in squares and triangles, respectively. The median, \(1\sigma\), and \(2\sigma\) confidence intervals of retrieved models are shown in solid lines with shaded areas. The retrieved models of Hypothesis 2 are shown in the middle panel of Figure 6. The MAP model of Hypothesis 2 can fit all the data at the 1.9\(\sigma\) confidence level (\(\chi^{2}_{\rm MAP}=58.3\) for 43 dof), slightly better than Hypothesis 1. The major improvement in Hypothesis 2 occurs in the optical, where the new retrieval suggests the presence of pressure-broadened line wings of Na and K. The strong scattering amplitude introduced by the haze in the cloudy sector also slightly moves downwards the model in the mid-infrared. However, the enhanced slope at the blue end and the Spitzer data points are still not well explained. ### Hypothesis 3: two limbs with uniform clouds While the patchy clouds could result in asymmetries in the limb, as assumed in Hypothesis 2, it is also possible that the morning limb and evening limb are indeed asymmetric. Similar to the retrieval frameworks adopted in Espinoza and Jones (2021) and Welbanks and Madhusudhan (2022), we assume in Hypothesis 3 that the atmosphere can be equivalent to a cooler morning limb and a warmer evening limb with equal weights, which have separate isothermal temperatures (\(T^{\rm{norm}}_{\rm iso}\) and \(T^{\rm{even}}_{\rm iso}\)). For both limbs, uniform clouds are adopted individually, each composed of a Rayleigh-like scattering haze above a gray cloud deck. The 3D GCM studies suggest that planets with intermediate temperatures (1400-1800 K) could have homogeneous mean molecular weight but intermittent C/O ratio across observable planet disk (Helling et al., 2022). Therefore, the C/O ratio is considered to be limb-dependent, while the metallicity is assumed to be the same for both limbs. Consequently, there are 12 free parameters: \(T^{\rm{morn}}_{\rm iso}\), \(T^{\rm{even}}_{\rm iso}\), \(\log P^{\rm{morn}}_{\rm cloud}\), \(\log P^{\rm{even}}_{\rm cloud}\), \(\gamma^{\rm{morn}}\), \(\gamma^{\rm{even}}\), log \(A^{\rm{morn}}_{\rm scatt}\), \(\log A^{\rm{even}}_{\rm scatt}\), \(\rm C/O^{\rm{morn}}\), \(\rm C/O^{\rm{even}}\), \(R_{\rm 1bar}\), and \(\log Z\). From the retrieval of Hypothesis 3, we obtain a supersolar metallicity of \(2.13^{+0.13}_{-0.12}\) dex. The retrieved C/O ratios are supersolar and almost the same for both limbs (\(\sim\)0.7), although the constraints are tighter for the evening limb while looser for the morning limb. The retrieved temperatures are \(1134^{+232}_{-194}\) K for the morning limb and \(1516^{+33}_{-44}\) K for the evening limb, resulting in an evening-morning difference of \(\Delta T=373^{+192}_{-224}\) K. This is consistent with the evening-averaged and morning-averaged temperature profiles within 1-10 mbar derived from cloud-free GCM simulations for another hot Jupiter with similar physical properties (WASP-17b; Kataria et al., 2016). The scattering slopes are loosely constrained, with a 90% upper limit of \(\gamma<-12.5\) for the morning limb and \(\gamma<-9.8\) for the evening limb. 
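The one-sided limits quoted here and below can be read off the posterior samples directly; a minimal sketch, assuming they are simple posterior percentiles, is:

```python
import numpy as np

def one_sided_limit(samples, level=90.0, kind="upper"):
    """90% upper (or lower) credible limit from 1D posterior samples."""
    q = level if kind == "upper" else 100.0 - level
    return np.percentile(np.asarray(samples), q)
```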
The scattering amplitudes are much stronger in the morning limb (\(A_{\rm scatt}=10^{11.0^{+0.5}_{-0.7}}\)) than in the evening limb (\(A_{\rm scatt}=10^{-2.5^{+1.7}_{-1.0}}\)), indicating that the morning limb is dominated by a very strong haze and that the evening limb is almost haze free. Like Hypotheses 1 and 2, the cloud-top pressure is also loosely constrained in Hypothesis 3, with a 90% lower limit of 0.02 mbar in the morning limb and 4 mbar in the evening limb. However, the lower limit does indicate \begin{table} \begin{tabular}{l l c c c c} \hline \hline Parameter & Description & Prior & \multicolumn{2}{c}{Posterior} \\ & & Hypothesis 1 & Hypothesis 2 & Hypothesis 3 \\ \hline \(T_{\rm iso}\) & Atmospheric temperature (K) & \(\mathcal{U}(500,2300)\) & \(1130^{+139}_{-134}\) & \(1288^{+121}_{-119}\) & – \\ \(T^{\rm{morn}}_{\rm iso}\) & \(T_{\rm iso}\) of morning limb & \(\mathcal{U}(500,2300)\) & – & – & \(1134^{+232}_{-194}\) \\ \(T^{\rm{even}}_{\rm iso}\) & \(T_{\rm iso}\) of evening limb & \(\mathcal{U}(500,2300)\) & – & – & \(1516^{+433}_{-44}\) \\ \(\log P_{\rm cloud}\) & Log of cloud-top pressure (bar) & \(\mathcal{U}(-6,2)\) & \(-0.5^{+1.6}_{-1.7}\) & \(-1.1^{+2.0}_{-2.3}\) & – \\ \(\log P^{\rm{morn}}_{\rm cloud}\) & \(\log P_{\rm cloud}\) of morning limb & \(\mathcal{U}(-6,2)\) & – & – & \(-1.8^{+2.5}_{-2.6}\) \\ \(\log P^{\rm{even}}_{\rm cloud}\) & \(\log P_{\rm cloud}\) of evening limb & \(\mathcal{U}(-6,2)\) & – & – & \(-0.5^{+1.6}_{-1.6}\) \\ \(\gamma\) & Scattering slope & \(\mathcal{U}(-20,2)\) & \(-1.1^{+0.2}_{-0.4}\) & \(-3.0^{+0.8}_{-1.4}\) & – \\ \(\gamma^{\rm{morn}}\) & \(\gamma\) of morning limb & \(\mathcal{U}(-20,2)\) & – & – & \(-17.1^{+3.4}_{-2.0}\) \\ \(\gamma^{\rm{even}}\) & \(\gamma\) of evening limb & \(\mathcal{U}(-20,2)\) & – & – & \(-15.7^{+4.5}_{-2.7}\) \\ \(\log A_{\rm scatt}\) & Log of scattering factor & \(\mathcal{U}(-4,12)\) & \(3.5^{+0.5}_{-0.5}\) & \(6.3^{+0.6}_{-0.8}\) & – \\ \(\log A^{\rm{morn}}_{\rm scatt}\) & log\(A_{\rm scatt}\) of morning limb & \(\mathcal{U}(-4,12)\) & – & – & \(11.0^{+0.5}_{-0.7}\) \\ \(\log A^{\rm{even}}_{\rm scatt}\) & log\(A_{\rm scatt}\) of evening limb & \(\mathcal{U}(-4,12)\) & – & – & \(-2.5^{+1.7}_{-1.0}\) \\ \(\rm C/O\) & Carbon-to-oxygen ratio & \(\mathcal{U}(0.05,2)\) & \(0.30^{+0.21}_{-0.17}\) & \(0.29^{+0.8}_{-0.15}\) & – \\ \(\rm C/O^{\rm{morn}}\) & \(\rm C/O\) of morning limb & \(\mathcal{U}(0.05,2)\) & – & – & \(0.82^{+0.76}_{-0.47}\) \\ \(\rm C/O^{\rm{even}}\) & \(\rm C/O\) of evening limb & \(\mathcal{U}(0.05,2)\) & – & – & \(0.73^{+0.03}_{-0.04}\) \\ \(R_{\rm 1bar}\) & Planet radius at 1 bar (\(R_{3}\)) & \(\mathcal{U}(0.5,2)\) & \(1.704^{+0.016}_{-0.72}\) & \(1.686^{+0.028}_{-0.028}\) & \(1.704^{+0.014}_{-0.014}\) \\ \(\log Z\) & Log of metallicity (\(Z_{\odot}\)) & \(\mathcal{U}(-1,3)\) & \(0.96^{+0.016}_{-0.38}\) & \(1.89^{+0.28}_{-0.28}\) & \(2.13^{+0.14}_{-0.12}\) \\ \(\phi\) & Fraction of cloud coverage & \(\mathcal{U}(0,1)\) & – & \(0.61^{+0.05}_{-0.07}\) & – \\ \hline \(\ln\mathcal{Z}\) & Model evidence & – & \(366.1\pm 0.3\) & \(369.7\pm 0.3\) & \(374.2\pm 0.3\) \\ \(\chi^{2}_{\rm MAP}\) & \(\chi^{2}\) of maximum a posterior model & – & 66.7 & 58.3 & 41.7 \\ dof & Degree of freedom & – & 44 & 43 & 39 \\ \hline \end{tabular} \end{table} Table 2: Parameter Estimation and Statistics from the Atmospheric Retrievals that the cloud deck in the evening might be deeper. The joint posterior distributions of all the free parameters are presented in Figure A.3. 
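As a minimal sketch of the two-limb bookkeeping (the equal weighting follows the text; the median/percentile summary of the temperature contrast is an assumption):

```python
import numpy as np

def two_limb_depth(depth_morning, depth_evening):
    """Two-limb approximation: morning and evening limbs enter with equal weight."""
    return 0.5 * (np.asarray(depth_morning) + np.asarray(depth_evening))

def limb_temperature_contrast(t_morning_samples, t_evening_samples):
    """Evening-minus-morning temperature difference from posterior samples,
    summarized as median with 16th/84th-percentile uncertainties."""
    dt = np.asarray(t_evening_samples) - np.asarray(t_morning_samples)
    lo, med, hi = np.percentile(dt, [16, 50, 84])
    return med, hi - med, med - lo
```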
The retrieved models of Hypothesis 3 are shown in the bottom panel of Figure 6. The MAP model of Hypothesis 3 can fit all the data at the 0.9\(\sigma\) confidence level (\(\chi^{2}_{\rm MAP}=41.7\) for 39 dof), better than both Hypotheses 1 and 2. The enhanced slope at the blue end and the Spitzer data points can be reasonably explained by the models of Hypothesis 3. The optical spectral features in the MAP model are dominated by the opacities of TiO, VO, MgH, Na, and K in the evening limb. If the data point at 0.985 \(\mu\)m is excluded as an outlier, the values of \(\chi^{2}_{\rm MAP}\) are reduced to 61.3, 51.1, and 35.8 for Hypotheses 1, 2, and 3, respectively, corresponding to goodness of fit at 2.0\(\sigma\), 1.3\(\sigma\), and 0.5\(\sigma\). ## 5 Discussion Our Bayesian spectral retrieval analyses reveal that the current 0.3-5.1 \(\mu\)m transmission spectrum data set of HAT-P-32Ab strongly favors the hypothesis of two limbs with uniform clouds as opposed to the hypotheses of uniform limb with either uniform clouds or patchy clouds. The retrieved models from the two-limb hypothesis can fit the current data set reasonably well, indicating that the atmosphere of HAT-P-32Ab can be equivalent to a warmer hazy-free component and a cooler hazy component, which we attribute to the evening and morning limbs. Although the retrieved C/O ratios are almost the same (\(\sim\)0.7) on both limbs, it is largely unconstrained in the morning limb. Figure 7 presents the confidence Figure 7: The first row presents the confidence regions of the derived morning-limb and evening-limb transmission spectra based on the two-limb retrieval in Hypothesis 3. The second and third rows show the marginalized posterior distributions of the volume mixing ratios of the major spectral tracing species retrieved in Hypothesis 3, including H\({}_{2}\)O, CO\({}_{2}\), Na, K, TiO, and VO. regions of the derived morning-limb and evening-limb transmission spectra based on the two-limb retrieval. It also shows the marginalized limb-dependent posteriors of the major spectral tracers that are derived from our equilibrium chemistry retrieval, of which Na, K, TiO, and VO contribute to the optical wavelengths, H\({}_{2}\)O contributes to the infrared wavelengths, and CO\({}_{2}\) contributes to the Spitzer bands. The most evident abundance changes occur in TiO and VO, which are likely depleted in the cooler morning limb due to either condensation or nightside cold trap, consistent with GCM predictions (Parmentier et al., 2016; Helling et al., 2022). Based on the assumption of equilibrium chemistry, the atmospheric metallicity of HAT-P-32Ab is constrained to be \(134^{+45}_{-33}\) times solar metallicity, which is strongly enhanced over its solar-metallicity host star (\([\rm Fe/H]=-0.04\pm 0.08\); Hartman et al., 2011) and much stronger than the observed planet mass-metallicity enrichment trend at the mass of 0.585 \(M_{\rm J}\)(e.g., Kreidberg et al., 2014; Wakeford et al., 2017; Welbanks et al., 2019). The enrichment of atmospheric metallicity toward lower planet mass has been suggested as a potential link to the core-accretion planet formation theory (Miller and Fortney, 2011; Fortney et al., 2013; Mordasini et al., 2014; Thorngren et al., 2016). However, Welbanks et al. (2019) found that the enrichment trends could differ if it was based on different species (e.g., CH\({}_{4}\), H\({}_{2}\)O), suggesting that the equilibrium chemistry assumption might not work in general. 
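The quoted linear metallicity follows from the retrieved log Z; a quick conversion (the published asymmetric errors come from the full posterior, so this naive propagation only approximately reproduces them) is:

```python
log_z, sig_hi, sig_lo = 2.13, 0.13, 0.12      # log10(Z / Z_sun) posterior summary
z = 10 ** log_z                               # ~135 x solar
z_err_hi = 10 ** (log_z + sig_hi) - z         # ~ +47
z_err_lo = z - 10 ** (log_z - sig_lo)         # ~ -33
print(f"Z = {z:.0f} (+{z_err_hi:.0f} / -{z_err_lo:.0f}) x solar")
```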
Future observations with James Webb Space Telescope (JWST) that cover a variety of molecular species in the infrared wavelengths will enable us to answer whether or not equilibrium chemistry is reasonable for HAT-P-32Ab. The powerful capability of JWST could also enable a direct measurement of the transmission spectrum for each individual limb, which can independently confirm whether the morning-evening asymmetries are present. The morning-to-evening transit depth differences derived from our retrieval have a maximum value of \(\sim\)990 ppm within 0.3-5.1 \(\mu\)m, which could induce an asymmetric signature as large as \(\sim\)400 ppm during ingress or egress in the light-curve residuals when compared to the symmetric light-curve model depending on the orientation of the semi-circles of the evening and morning limbs. Asymmetric signatures of such amplitude are easily detectable by JWST according to the simulations on the hot Jupiter HAT-P-41b orbiting a star of similar spectral type and brightness (Espinoza and Jones, 2021). We note that the major driver to favor the two-limb hypothesis in this work could probably come from the enhanced slope at the blue end and the two Spitzer data points that have a potential CO\({}_{2}\) spectral signature but are at a lower level than the 0.3-1.7 \(\mu\)m wavelength range. The offset between the two Spitzer data points and other wavelengths could also come from contamination of stellar activity or instrumental biases. The five-season photometric monitoring reveals that HAT-P-32A is constant on night-to-night timescales within the precision of \(\sim\)2 mmag and likely to be constant on year-to-year timescales (Nikolov et al., 2018; Alam et al., 2020). The consistency among the optical transmission spectra (0.5-0.9 \(\mu\)m) independently acquired by five instruments and the consistency between optical (0.3-0.9 \(\mu\)m) and near-infrared (1.1-1.7 \(\mu\)m) both confirm the inactive nature of HAT-P-32A. The large instantaneous wavelength coverage of JWST will further confirm whether the downward mid-infrared spectral signature of CO\({}_{2}\) is of instrumental origin or a sign of morning-evening asymmetry (The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022). ## 6 Conclusions We obtained an optical transmission spectrum for the hot Jupiter HAT-P-32Ab within the wavelength range of 399-939 nm using P200/COSMIC. We derived a combined optical transmission spectrum by weighted averaging the measurements from five independent instruments including HST/STIS, Gemini-N/GMOS, LBT/MODS, GTC/OSIRIS, and P200/COSMIC. We performed Bayesian spectral retrievals on the combined optical spectrum along with the HST/WFC3 and Spitzer measurements, with the hypotheses of (i) uniform limb with uniform clouds, (ii) uniform limb with patchy clouds, and (iii) two limbs with uniform clouds. We conclude that: 1. The current 0.3-5.1 \(\mu\)m transmission spectrum of HAT-P-32Ab is characterized by an enhanced scattering slope at the blue-optical, a relatively flat continuum but consistent with spectral signatures of TiO, VO, Na, K, and MgH in the optical band, a water absorption feature at 1.4 \(\mu\)m, and a CO\({}_{2}\) absorption feature at 4.4 \(\mu\)m. 2. The current data set of HAT-P-32Ab reveals an atmosphere of high metallicity (\([\rm Fe/H]=2.13^{+0.13}_{-0.12}\)), and can be well explained by a two-limb approximation, with the warmer evening limb being haze-free and the cooler morning limb being strongly haz. 
The morning-evening temperature difference of \(373^{+192}_{-224}\) K is consistent with the GCM predictions. 3. HAT-P-32Ab is a prior target for follow-up observations with JWST transmission spectroscopy. The inferred morning and evening limbs, if confirmed, will enable direct measurements of limb spectra through asymmetric light-curve modeling. **Acknowledgements** G.C. acknowledges the support by the B-type Strategic Priority Program of the Chinese Academy of Sciences (grant No. XDB41000000), the National Natural Science Foundation of China (grant Nos. 42075122, 12122308), Youth Innovation Promotion Association CAS (2021315). H.Z. thanks the Space debris and NEO research project (grant Nos. KJSP2020020204, KJSP2020020102), Civil Aerospace pre-research project (grant No. D020304). G.C. and H.Z. also thank the Minor Planet Foundation. The authors would like to thank the anonymous referee for the constructive comments on the manuscript, and Carolyn Heffner, Kajsa Peffer, Kevin Rykoski, and Jennifer Milburn for their great supports during the observations. This research uses data obtained through the Telescope Access Program (TAP), which has been funded by the TAP member institutes. Observations obtained with the Hale Telescope at Palomar Observatory were obtained as part of an agreement between the National Astronomical Observatories, Chinese Academy of Sciences, and the California Institute of Technology.
2305.18714
Align, Perturb and Decouple: Toward Better Leverage of Difference Information for RSI Change Detection
Change detection is a widely adopted technique in remote sensing imagery (RSI) analysis for discovering long-term geomorphic evolution. To highlight the areas of semantic change, previous efforts mostly pay attention to learning representative feature descriptors of a single image, while the difference information is either modeled with simple difference operations or implicitly embedded via feature interactions. Nevertheless, such difference modeling can be noisy since it suffers from non-semantic changes and lacks explicit guidance from image content or context. In this paper, we revisit the importance of feature difference for change detection in RSI, and propose a series of operations to fully exploit the difference information: Alignment, Perturbation and Decoupling (APD). Firstly, alignment leverages contextual similarity to compensate for the non-semantic difference in feature space. Next, a difference module trained with semantic-wise perturbation is adopted to learn more generalized change estimators, which in turn bootstraps feature extraction and prediction. Finally, a decoupled dual-decoder structure is designed to predict semantic changes in both content-aware and content-agnostic manners. Extensive experiments on the LEVIR-CD, WHU-CD and DSIFN-CD benchmarks demonstrate that our proposed operations bring significant improvements and achieve competitive results under similar comparative conditions. Code is available at https://github.com/wangsp1999/CD-Research/tree/main/openAPD
Supeng Wang, Yuxi Li, Ming Xie, Mingmin Chi, Yabiao Wang, Chengjie Wang, Wenbing Zhu
2023-05-30T03:39:53Z
http://arxiv.org/abs/2305.18714v1
Align, Perturb and Decouple: Toward Better Leverage of Difference Information for RSI Change Detection ###### Abstract Change detection is a widely adopted technique in remote sense imagery (RSI) analysis in the discovery of long-term geographic evolution. To highlight the areas of semantic changes, previous effort mostly pays attention to learning representative feature descriptors of a single image, while the difference information is either modeled with simple difference operations or implicitly embedded via feature interactions. Nevertheless, such difference modeling can be noisy since it suffers from non-semantic changes and lacks explicit guidance from image content or context. In this paper, we revisit the importance of feature difference for change detection in RSI, and propose a series of operations to fully exploit the difference information: Alignment, Perturbation and Decoupling (APD). Firstly, alignment leverages contextual similarity to compensate for the non-semantic difference in feature space. Next, a difference module trained with semantic-wise perturbation is adopted to learn more generalized change estimators, which reversely bootstraps feature extraction and prediction. Finally, a decoupled dual-decoder structure is designed to predict semantic changes in both content-aware and content-agnostic manners. Extensive experiments are conducted on benchmarks of LEVIR-CD, WHU-CD and DSIFN-CD, demonstrating our proposed operations bring significant improvement and achieve competitive results under similar comparative conditions. Code is available at [https://github.com/wangsp1999/CD-Research/tree/main/openAPD](https://github.com/wangsp1999/CD-Research/tree/main/openAPD) ## 1 Introduction Change detection is a vision task aiming at identifying pixel-wise semantic changes between paired bitemporal images, this technique is helpful for analysis of remote sense imagery (RSI) with high resolution, which provides important information about changes in land surface and expansion of settlement during long time of observation [14, 17]. With the development of remote satellite sensoring, there exist more open source databases of remote sense imagery with fine-grained semantic annotations [3, 2, 11], which makes it possible to exploit the data-hungry deep-learning approaches [13, 1] to achieve more accurate change detection. Due to the nature of pair-wise input and dense prediction, encoder-decoder ar Figure 1: Conceptual illustration of operations proposed in this work to improve utilization of difference information. (a) A context reliant alignment operation to mitigate changes irrelevant to semantic. (b) A difference guidance module trained with discrete semantic perturbation. (c) Traditional single decoder with hybrid input is decoupled into a content-aware decoder and content agnostic decoder. chitectures with siamese backbones prevail in recent efforts, where features of single images are extracted separately and the difference information is exploited in a decoder to highlight areas of changing objects [14, 15, 16, 17, 18, 19]. In a nutshell, recent approaches usually model the difference information between input pairs in simple and direct manners (e.g. taking difference or concatenation to obtain difference information) [14, 15, 16] or implicitly embed the difference into feature interaction [15, 16], while still leaving some key issues open. **Firstly**, difference information is inherently vulnerable to pseudo-changes (e.g. 
the seasonal illumination changes during long-term observation), but few of previous works explicitly take such interference into account, hence these methods are not guaranteed to be robust enough to non-semantic changes. **Secondly**, most of previous literature take difference information from features for final decoding, ignoring the fact that difference naturally contains information of changeful objects, which reversely provides spatial guidance to representation learning [13]. **Finally**, the relationship between image content and difference information is seldom discussed in prior works, the image content can be regarded as auxiliary prior information for change decoding, while also introducing some irrelevant change cues distracting prediction. With the reviews above, we claim that the difference information in current research is still underutilized. Therefore, in this paper, we design series of operations aiming at mitigating the aforementioned issues and fully leveraging feature difference to boots change detection results. Concretely, we equip the hierarchical encoder-decoder network with three operations in sequential order: Alignment, Perturbation and Decoupling (APD), which is illustrated in Figure 1. **Alignment:** To alleviate noise from pseudo-changes, we first propose a graph-based alignment module, which exploits the contextual similarity between patches to aggregate information from areas of the same semantic as compensation, this results in more precise extraction of semantic difference in following stages. **Perturbation:** Following the alignment operation, we propose a perturbation-aided difference prediction module. Especially, this module is trained with discrete semantic perturbation as feature augmentation, thus can recognize more generalized change patterns as guidance for feature extraction in the following stages. **Decoupling:** We decouple the decoder into an asymmetric dual-stream structure for final prediction, one focuses on integrating bitemporal contents with difference while the other takes pure difference information for decoding, this helps utilize the complementary property between image content and difference information while avoiding irrelevant noise. We conduct experiments on three challenging benchmarks for RSI change detection: LEVIR-CD [15], WHU-CD [12] and DSIFN-CD [16], and demonstrate the superiority of our proposed approach over existing competitive methods. Plenty of ablation studies also verify the effectiveness of our proposed operations. In a nutshell, the contribution of this paper can be summarized as following * We reconsider the problem of pseudo-changes and propose a graph-based alignment module to explicitly utilize contextual similarity for compensation. * We propose a hierarchical difference extraction structure to guide the feature extraction process stage-by-stage, which is equipped with a specially designed discrete semantic perturbation scheme for feature augmentation. * Different from most of prior works, we propose a dual decoder structure to decouple the utilization of image content from pure difference encoders. * We integrate the proposed operation above and convert traditional encoder-decoder structure into a new change detector APD, which achieves competitive results on mainstream benchmarks. ## 2 Related Works ### Change Detection with Deep Learning We roughly divide deep learning-based change detection methods into two types [16]: two-stage and one-stage methods. 
In general, the two-stage method trains a CNN/FCN to classify the bitemporal images respectively, and compares their classification results to obtain change areas. To achieve this goal, both the bitemporal semantic labels and the change label should be provided [13, 12, 14]. The one-stage method is a more prevailed framework in recent research, which takes change detection as a dense classification task and directly produces the change result from the bitemporal images. The FC series [14] was one of the earliest methods to adopt convolution neural networks for change detection, where three architectures were proposed: FC-EF, FC-Siam-Conc, and FC-Siam-Diff. FC-EF adopted the early fusion strategy, while FC-Siam-Conc and FC-Siam-Diff adopted the medium fusion strategy with different difference policies. Besides, DTCDSCN [12] takes inspiration from the two-stage methods, and converts change detection to multi-task pipeline with semantic map as auxiliary supervision. ChangeSTAR [16] further relaxes the paired requirement by taking unpaired images as input. ### Attention Mechanism in Change Detection Although deep neural network achieves significant improvement over handcrafted features, traditional network structures can not explicitly capture the contextual reliance within RSI, thus can not accurately detect changes in object-level. Therefore, there appears works resorting to attention mechanism [15, 12, 13] to help gather more contextual information. Zhang et al.[16] proposed a deeply supervised change detection network (IFN), which applies channel attention and spatial attention for feature enhancement. Besides the contextual modeling, recent works also make attempt to integrate the attention mechanism into bitemporal interaction as an implicit replacement of feature differ ence operation. Both BIT [3] and Tran-sUNetCD [11] proposed a bitemporal image transformer by mixing CNN and transformer, where the cross image interaction is conducted in latent encoded space. ChangeFormer [1] further developed a siamese variant of SegFormer [23] for feature extraction. ### Feature Augmentation Feature augmentation aims at perturbing data in the encoded feature space to help the neural network learn invariant representation. Different from data augmentation with low-level transformation, feature-level augmentation is regarded as increasing the semantic variance to ensure the generalization ability of models. ISDA [17] perturbed the deep features in terms of their statistical characteristic, thus improving the accuracy of recognition. The work of SFA [11] utilized a similar adaptive noise for domain generalization. Further, Manifold Mixup [24] extended the widely-used mixup trick into feature space to smooth the semantic boundaries. MGD [17] introduced random mask for knowledge distillation among deep features to obtain better results of student models. In change detection, the difference information is usually derived from the discrepancy between two feature maps, which should be invariant to the difference order and channel-wise masking. Therefore, we take inspiration from feature augmentation methods and design a specialized discrete semantic perturbation mechanism to help difference prediction module recognize more general patterns. ## 3 Approach Figure 2 depicts the pipeline of our APD change detector. Similar to previous work, our method is built based on an encoder-decoder architecture. 
We denote the bitemporal image pair as \(\mathcal{X}_{0}\) and \(\mathcal{X}_{1}\in\mathbb{R}^{3\times H\times W}\) with change label map \(\mathcal{Y}\in\{0,1\}^{H\times W}\), then feed images to a classic hierarchical deep neural network to extract feature representations of multi-level \(\{\mathcal{F}_{0}^{l},\mathcal{F}_{1}^{l}\}_{l}\), where \(l\in[2,N]\) represents the stage index (at most \(N\) stages). However, to fully leverage the difference information, we propose the concept of "Align First and Then Difference", and insert a Graph Interactive and **D**ifference **E**n enhancement module (GIDE) between stages in the encoder part, which transforms \(\mathcal{F}_{i}^{l}\) into more change-sensitive manifold \(\widetilde{\mathcal{F}}_{i}^{l}\) \[\widetilde{\mathcal{F}}_{0}^{l},\widetilde{\mathcal{F}}_{1}^{l}, \mathcal{O}^{l} =\mathbf{GIDE}(\mathcal{F}_{0}^{l},\mathcal{F}_{1}^{l})=P\left(A \left(\mathcal{F}_{0}^{l},\mathcal{F}_{1}^{l}\right)\right) \tag{1}\] \[\mathcal{F}_{0}^{l+1},\mathcal{F}_{1}^{l+1} =\mathbf{Enc}_{l}\left(\widetilde{\mathcal{F}}_{0}^{l},\widetilde {\mathcal{F}}_{1}^{l}\right),\] where \(\mathbf{Enc}_{l}\left(\cdot,\cdot\right)\) denotes the \(l\)-th stage of the encoder. Specifically, GIDE is composed of an alignment module \(A\left(\cdot,\cdot\right)\), which aggregates pixel-level semantic information, and a perturbation-aided difference module \(P\left(\cdot,\cdot\right)\) which augments feature difference and highlights local areas. Additional to the alignment and perturbation, \(P\left(\cdot,\cdot\right)\) also generates feature difference \(\mathcal{O}^{l}\) as pure difference information for the following decoding. In the part of the decoder, we decouple the classical single decoder into a complementary structure of content-aware decoder \(D_{aw}\) and content-agnostic decoder \(D_{ag}\) to excavate the content and difference information respectively for final prediction. Figure 2: The overall framework of our proposed APD change detector. (a) The part of the encoder equipped with our proposed GIDE Module. (b) Detailed illustration of graph-based alignment within GIDE. (c) Illustration of the perturbation guided difference prediction in GIDE. (d) The proposed asymmetrical duel decoder part for final prediction. ### Context-aware Alignment The alignment operation aims at alleviating the disturbance of pseudo-changes and amplifying the area of semantic change, thus we resort to an undirected bipartite graph as an aggregation structure to align features (as shown in Figure 2 (b)). Formally, suppose both \(\mathcal{F}^{l}_{0},\mathcal{F}^{l}_{1}\) are of resolution \(H_{l}\times W_{l}\), then we can take each pixel in feature map as a graph node and build an adjacent matrix \(\mathcal{G}\), which is a bipartite graph, i.e. an edge can only link pixels from different feature maps, thus the adjacent matrix can be represented as four-dimensional, \(\mathcal{G}\in\{0,1\}^{H_{l}\times W_{l}\times H_{l}\times W_{l}}\), and the node feature is the feature of the corresponding pixel. Considering the feature of a single pixel can be sensitive to non-semantic changes, while its contextual information can be more robust since it contains more structural information, therefore it is more suitable to encode the edge between nodes with the reference of contextual similarity. 
To do this, we first obtain the context feature \(\mathcal{F}^{l}_{0,c},\mathcal{F}^{l}_{1,c}\) via a non-parameterized dilated convolution of di-late factor \(d\) and kernel size \(k\), for \(m=0,1\), we have \[\mathcal{F}^{l}_{m,c}[u,v]=\sum_{\begin{subarray}{c}(i,j)\in[-k,k]\\ (i,j)\neq(0,0)\end{subarray}}\mathcal{F}^{l}_{m}[u+id,v+jd], \tag{2}\] where \([u,v]\) indicates obtaining the pixel-wise feature at location \((u,v)\). To build graph \(\mathcal{G}\), for each pixel \((u,v)\) on \(\mathcal{F}^{l}_{0,c}\), we compute top \(n\) nearest coordinates on \(\mathcal{F}^{l}_{1,c}\) \[\mathcal{H}_{u,v}=\textbf{TopK}_{i\in[0,H_{l}],j\in[0,W_{l}]}\left(\left\| \mathcal{F}^{l}_{0,c}[u,v]-\mathcal{F}^{l}_{1,c}[i,j]\right\|_{2}\right), \tag{3}\] with the grouped coordinate set of nearest neighbor \(\mathcal{H}_{u,v}\), the high-dimensional adjacent matrix can be expressed as \[\mathcal{G}[u,v,i,j]=\left\{\begin{array}{ll}1,&\text{ if }(i,j)\in\mathcal{H}_{u,v} \\ 0,&\text{ otherwise.}\end{array}\right. \tag{4}\] With the relation graph \(\mathcal{G}\) based on context, we can take the naive graph convolution to aggregate information from similar node pairs to both feature maps as follows \[\mathcal{F}^{l}_{0}{}^{\prime},\mathcal{F}^{l}_{1}{}^{\prime} =\textbf{GCN}(\mathcal{G},\mathcal{F}^{l}_{0},\mathcal{F}^{l}_{1}) \tag{5}\] \[\hat{\mathcal{F}}^{l}_{0}=\mathcal{F}^{l}_{0}+\mathcal{F}^{l}_{0} {}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}_{1}=\mathcal{F}^{l}_{1}+ \mathcal{F}^{l}_{1}{}^{\prime},\] where \(\hat{\mathcal{F}}^{l}_{0},\hat{\mathcal{F}}^{l}_{1}\) are aligned features after aggregation of graph convolution and will be utilized in following difference prediction module. ### Perturbation-aided Difference Prediction After the feature alignment, we make attempts to leverage the aligned feature to produce coarse spatial guidance, thus helping feature extraction in the following stage. To this end, we introduce a feature augmentation mechanism to train a coarse difference prediction module (as described in Figure 2 (c)). We start with a simple difference operation, however, we explicitly modulate the channel of feature difference with a random vector \(\mathbf{v}\) filled with discrete values \[\hat{\mathcal{O}}^{l} =(\hat{\mathcal{F}}^{l}_{0}-\hat{\mathcal{F}}^{l}_{1})\odot \mathbf{v} \tag{6}\] \[\textbf{s.t.}\quad\|\mathbf{v}\|_{1} =(1-\tau)C^{l}\quad\mathbf{v}\in\{+1,-1,0\}^{C^{l}},\] where \(C^{l}\) is the channel dimension of \(\hat{\mathcal{F}}^{l}_{0},\hat{\mathcal{F}}^{l}_{1}\), and \(\odot\) represents channel-wise modulation, i.e. the \(i\)-th element of vector \(\mathbf{v}\) is broadcasted and multiplied to all pixels on the \(i\)-th channel. The value of vector \(\mathbf{v}\) is constrained within \(0\) and \(\pm 1\), thus each channel of feature difference is either reversed (\(-1\)), masked (\(0\)) or retained (\(+1\)). The insight behind such perturbation is that even though part of semantic information is masked out or reversed, the preserved semantic should be informative enough to coarsely highlight the change. Besides, to prevent too many channels from being masked, we define a mask ratio \(\tau\in[0,1]\) to control the ratio of \(0\) values in \(\mathbf{v}\). It should be noted that channel-wise modulation is only adopted during training and removed from inference. 
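A minimal PyTorch sketch of two ingredients above, the non-parameterized contextual aggregation of Eq. (2) and the discrete channel-wise perturbation of Eq. (6), is given below. The boundary padding, the use of one perturbation vector per batch, and the even split between retained and reversed channels are assumptions not specified in the text.

```python
import torch
import torch.nn.functional as F

def context_feature(feat, k=1, d=16):
    """Eq. (2): sum the features of the (2k+1)^2 - 1 dilated neighbors of each
    pixel (center tap excluded), with no learnable weights. feat: (B, C, H, W)."""
    c = feat.shape[1]
    ks = 2 * k + 1
    kernel = torch.ones(c, 1, ks, ks, device=feat.device, dtype=feat.dtype)
    kernel[:, :, k, k] = 0.0                      # drop the (i, j) = (0, 0) term
    return F.conv2d(feat, kernel, padding=k * d, dilation=d, groups=c)

def discrete_perturbation(diff, tau=0.25):
    """Eq. (6): each channel of the feature difference is retained (+1),
    reversed (-1), or masked (0); a fraction tau of channels is masked.
    Applied during training only. diff: (B, C, H, W)."""
    c = diff.shape[1]
    v = torch.ones(c, device=diff.device, dtype=diff.dtype)
    v[torch.rand(c, device=diff.device) < 0.5] = -1.0          # reverse roughly half
    v[torch.randperm(c, device=diff.device)[:int(round(tau * c))]] = 0.0
    return diff * v.view(1, c, 1, 1)
```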
The modulated difference \(\hat{\mathcal{O}}^{l}\) is fed into an ASPP [Chen _et al._, 2018]-like structure to obtain a one-dimensional coarse mask \(\mathcal{M}^{l}\in\mathbb{R}^{H_{l}\times W_{l}}\) \[\mathcal{M}^{l}=\sigma\left(\textbf{MLP}\left(\textbf{Cat}\left(\hat{\mathcal{ O}}^{l}),\hat{\mathcal{O}}^{l}\right)\right)\right), \tag{7}\] where \(\textbf{GAP}(\cdot)\) denotes global average pooling and \(\textbf{Cat}(\cdot,\cdot)\) indicates concatenation between two features along channel dimension, note that the pooled feature is broadcasted to all pixels to align the resolution, \(\textbf{MLP}(\cdot)\) represents multi-layer perceptron mapping the input to single channel mask, and a sigmoid function \(\sigma(\cdot)\) is adopted to obtain coarse-level change area in \(l\)-th stage. In the end, GIDE feeds back such difference-dominant information to the original siamese-aligned feature \[\hat{\mathcal{F}}^{l}_{0}=\hat{\mathcal{F}}^{l}_{0}\otimes\mathcal{M}^{l} \quad\hat{\mathcal{F}}^{l}_{1}=\hat{\mathcal{F}}^{l}_{1}\otimes\mathcal{M}^{l} \quad\mathcal{O}^{l}=(\hat{\mathcal{F}}^{l}_{0}-\hat{\mathcal{F}}^{l}_{1}) \otimes\mathcal{M}^{l}, \tag{8}\] where \(\otimes\) indicates spatial-wise modulation, i.e. \(\mathcal{M}^{l}\) is broadcast to different channels. **Deep Supervision.** Besides, we inject additional supervised objectives in the perturbation module as deep supervision to ensure the accuracy of estimated guidance mask. Concretely, we downsample the original change map \(\mathcal{Y}\) to adapt the size of mask \(\mathcal{M}^{l}\) as \(\mathcal{Y}^{l}\) and apply the binary cross-entropy as loss \[\mathcal{L}^{l}_{d}= -\frac{1}{H_{l}W_{l}}\sum_{(i,j)}\mathcal{Y}^{l}[i,j]log(\mathcal{ M}^{l}[i,j]) \tag{9}\] \[-\frac{1}{H_{l}W_{l}}\sum_{(i,j)}{(1-\mathcal{Y}^{l}[i,j])log(1- \mathcal{M}^{l}[i,j])}.\] The deep supervision together with our designed random perturbation helps train more generalized modules to differentiate various change patterns. ### Decouple Decoders In the decoder part, to exploit the complementary property of image content and pure difference, we devise an asymmetric dual-decoder structure, which takes \(\mathcal{F}^{N}_{0},\mathcal{F}^{N}_{1}\) and \(\mathcal{O}^{l}\) as input and predicts the change areas. The content-aware decoder takes the encoded image features as input, and hierarchically appends difference information to generate the intermediate feature, finally a segmentation head is applied on the final output to obtain the prediction \[\mathcal{D}^{N}_{aw} =\textbf{MLP}\left(\textbf{Cat}(\mathcal{F}^{N}_{0},\mathcal{F}^{N }_{1})\right) \tag{10}\] \[\mathcal{D}^{l-1}_{aw} =\textbf{Dec}_{l,aw}\left(\mathcal{S}_{\uparrow}(\mathcal{D}^{l}_{aw} ),\mathcal{O}^{l-1}\right)\] \[\hat{\mathcal{Y}}_{aw} =\mathcal{T}_{aw}(\mathcal{D}^{1}_{aw}),\] where \(\textbf{Dec}_{l,aw}(\cdot,\cdot)\) is the \(l\)-th decoding block of decoder, which consists of concatenation and two cascaded Conv-BN-ReLU blocks with kernel size \(3\times 3\), \(\mathcal{S}_{\uparrow}(\cdot)\) represents the up-sampling operation, and \(\mathcal{T}_{aw}(\cdot)\) is the segmentation head. 
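A sketch of one content-aware decoding block is shown below; the channel sizes and the bilinear upsampling mode are illustrative choices, since only the Conv-BN-ReLU composition and the concatenation are specified above.

```python
import torch
import torch.nn as nn

class ContentAwareDecodeBlock(nn.Module):
    """Dec_{l,aw}: upsample the decoder feature, concatenate it with the stage
    difference O^{l-1}, then apply two cascaded 3x3 Conv-BN-ReLU blocks."""
    def __init__(self, dec_ch, diff_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.body = nn.Sequential(
            nn.Conv2d(dec_ch + diff_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, dec_feat, diff_feat):
        return self.body(torch.cat([self.up(dec_feat), diff_feat], dim=1))
```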
As for the content-agnostic decoder, we only feed the pure difference information as input, the other structure is similar to the content-aware one \[\mathcal{D}_{ag}^{N} =\textbf{MLP}\left(\mathcal{O}^{N}\right) \tag{11}\] \[\mathcal{D}_{ag}^{l-1} =\textbf{Dec}_{l,ag}\left(\mathcal{S}_{\uparrow}(\mathcal{D}_{ag }^{l}),\mathcal{O}^{l-1}\right)\] \[\mathcal{\hat{Y}}_{ag} =\mathcal{T}_{ag}(\mathcal{D}_{ag}^{1}),\] where the structure of \(\textbf{Dec}_{l,ag}(\cdot,\cdot)\) is similar to that of content-aware decoder but the concatenation is replaced by summation of two input features. Finally, both output are summed up with an activation function to obtain final results \[\mathcal{\hat{Y}}=\sigma(\mathcal{\hat{Y}}_{ag}+\mathcal{\hat{Y}}_{aw}). \tag{12}\] ### Loss Function During training, we simply take the cross-entropy \(\mathcal{L}_{ce}\) to supervise the final output \(\mathcal{\hat{Y}}\) and the total loss function can be expressed as Equation (13) with balance factor \(\lambda_{1}\) and \(\lambda_{2}\) \[\mathcal{L}_{total}=\mathcal{L}_{ce}+\lambda_{1}\sum_{l}\mathcal{L}_{d}^{l}+ \lambda_{2}\mathcal{L}_{comp}. \tag{13}\] In Equation (13), we introduce an additional comparative loss \(\mathcal{L}_{comp}\) for feature regularization, which is expressed as \[\mathcal{L}_{comp}= \frac{1}{HW}\sum_{(i,j)}\mathcal{Y}^{N}[i,j]\left[\left\|\widetilde {\mathcal{F}}_{0}^{N}[i,j]-\widetilde{\mathcal{F}}_{1}^{N}[i,j]\right\|_{2}- \gamma\right]_{+} \tag{14}\] \[+ \frac{1}{HW}\sum_{(i,j)}\left(1-\mathcal{Y}^{N}[i,j]\right)\left\| \widetilde{\mathcal{F}}_{0}^{N}[i,j]-\widetilde{\mathcal{F}}_{1}^{N}[i,j] \right\|_{2},\] where \(\left[\cdot\right]_{+}\) represents clipped by \(0\) if the value inside is negative and \(\gamma\) is a hyperparameter. This comparative term helps backbone to distinguish feature of changed objects. ## 4 Experiment ### Experiment Setup **Dataset.** We evaluate our proposed APD change detector on three publicly change detection datasets: LEVIR-CD [3], WHU-CD [11] and DSIFN-CD [13]. LEVIR-CD is a public large building change detection dataset that contains 637 bitemporal RS image pairs of resolution 1024x1024. We utilize the default train/val/test split. WHU-CD is another public building change detection dataset, which consists of one pair of ultra-resolution (0.075m) aerial images of size 32507x15354. We follow the default cropping policy of size 512x512 and dataset split(train/test) provided by the authors. DSIFN-CD contains the changes in six major cities' landcover objects in China. We divide the 512x512 images into 256x256 pixel patches without overlapping, and we follow the default standard train/val/test split. Consequently, there are 14400/1360/192 samples for training/val/test. **Implementation Details.** We implemented our model under the Pytorch framework, using a single NVIDIA GeForce GTX 1080 Ti GPU for training and the batch size is set to 8. During training, we apply data augmentation through random flip, crop and photometric distortion. We use AdamW with weight decay equal to 0.05 for optimization. The initial learning rate is 0.001 and we train models for 60k, 40k and 100k iterations for LEVIR-CD, WHU-CD and DSIFN-CD datasets. For context aggregation, we set \(k=1\) and \(d=16\). The hyperparameters in loss term are \(\lambda_{1}=1.0,\lambda_{2}=1.0,\gamma=1.0\). 
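With γ=1.0 as above, the comparative regularizer of Eq. (14) can be sketched as follows; the sketch follows the equation as printed (a more conventional contrastive form would instead penalize [γ − d]₊ for changed pixels).

```python
import torch

def comparative_loss(f0, f1, change_label, gamma=1.0):
    """Eq. (14) as printed: unchanged pixels (Y=0) have their feature distance
    pulled toward zero; for changed pixels (Y=1) only the part of the distance
    above the margin gamma contributes.
    f0, f1: (B, C, H, W) final-stage features; change_label: (B, 1, H, W)."""
    dist = torch.norm(f0 - f1, p=2, dim=1, keepdim=True)
    y = change_label.float()
    loss = y * torch.clamp(dist - gamma, min=0.0) + (1.0 - y) * dist
    return loss.mean()
```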
**Evaluation Metrics.** For horizontal comparison with other methods, we follow the common setting and use the F1 score and Intersection over Union (IoU) with regard to the change objects as the primary evaluation indices. Meanwhile, we also report precision(P) and recall(R) of the change objects. ### Main Result **Methods for Comparison.** To verify the effectiveness of our method, we make comparison with several advanced change detection approaches, including three purely convolutional-based methods (FC-EF [1], FC-Siam-Conc [1], FC-Siam-Diff [1], three attention-aided methods (DTCDSCN [11], STANet [3], SNUNet [12]) and three methods with transformer-like structure (BIT [3], ChangeFormer [1], TransUNetCD [11]). **Quantitative Results.** Table 1 reports the overall quantitative comparison results on LEVIR-CD, DSIFN-CD and WHU-CD. In the datasets of DSIFN-CD and WHU-CD, our proposed APD change detector outperforms other methods, reaching the best level in terms of all four metrics. In the LEVIR-CD dataset, although our method does not achieve the best precision, APD can still detect more changing pixels, which makes significant advantages in the other three metrics. For example, the F1 score of our method exceeds the latest ChangeFormer by 1.31%, 1.2%, and 9.57% on the three datasets, respectively. In Table 1, we also indicate the information of the utilized backbone of different CD methods. It can be observed that APD only applies a simple and lightweight ResNet-18 network as a feature extractor and does not use complex structures such as UNet[1] or transformer-based network[1], which are widely used in segmentation tasks. On the other hand, although our approach only takes ResNet18 as the backbone, it can still outperform competitors with larger model capacity (ResNet50) or more advanced structure (MiT-B1), which indirectly manifests the superiority of our proposed GIDE module and decoupled decoders. **Qualitative Results.** In addition, Figure 3 also shows the visualization comparison of the different change detection methods on the three datasets. As highlighted in red, blue and yellow respectively, our proposed method captures more detailed change compared with other change detection schemes. In the visualization results of LEVIR-CD dataset, our APD detector can not only detect the building change more accurately, but also avoid some noise (e.g. changes in the appearance of land cover, seasonal illumination changes, etc.) that affects the contour of the changed area. From the visualization results of the WHU-CD dataset, it can be seen that most compared methods cannot eliminate the pseudo change caused by shadows when detecting changes. In contrast, our method can eliminate such pseudo-change, which proves that our method can learn effective context, eliminate the irrelevant change and express the real semantic variation. At the same time, our method can also effectively detect subtle changes, which are observed from the visualization results of the DSIFN-CD dataset. The compared methods can hardly detect the detailed changed area of the long and narrow road due to the lack of precise semantic difference information. However, our APD detector can effectively capture the subtle variation and generate a more precise change map, which verifies the superiority of APD detector. ### Ablation Study **Verification Experiment on Proposed Operations**. We design ablation experiments for each proposed operation, i.e. Alignment, Perturbation and Decouple. 
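For completeness, the change-class metrics reported throughout (precision, recall, F1, IoU) can be computed from the binary prediction and label maps as in the following sketch.

```python
import numpy as np

def change_metrics(pred, label, eps=1e-12):
    """Precision, recall, F1 and IoU of the change class from binary maps."""
    pred, label = np.asarray(pred, bool), np.asarray(label, bool)
    tp = np.sum(pred & label)
    fp = np.sum(pred & ~label)
    fn = np.sum(~pred & label)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return precision, recall, f1, iou
```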
For each operation, we provide a specific baseline counterpart to demonstrate the effectiveness. The overall results are shown in Table 2. The experiment shows that the Alignment module, Perturbation-aided difference module and Decoupled Decoder are helpful for change detection performance. To evaluate the alignment operation, we remove the context-aided alignment in GIDE on purpose and directly fed the siamese features into difference prediction. In terms of Table 2, the context-aided alignment significantly improves the recall and IoU score. This is because the Alignment module effectively utilizes the contextual similarity to gather information from areas of the same semantic to enhance the semantic difference information. This improvement is also reflected in Figure 4, if the alignment part in the GIDE module is removed, the noise caused by the lack of the contextual information makes the model unable to distinguish between real changes and pseudo-changes, thus affecting the precise recognition of the changed areas. To verify the effectiveness of perturbation, we also devise a counterpart by eliminating the deep supervision and semantic perturbation, instead we directly take the feature from both images and their difference as output. Table 2 demonstrates that our proposed perturbation-aided deep supervision essentially improve the recall and F1-score, since combination between deep supervision and random disturbance can help difference predictor focus on more general patterns of changeful objects, consequently the estimated change area \begin{table} \begin{tabular}{c|c|c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{8}{c}{LEVIR-CD} & \multicolumn{8}{c}{DISFN-CD} & \multicolumn{8}{c}{WHU-CD} \\ & & Precision & Recall & F1 & IoU & Precision & Recall & F1 & IoU & Precision & Recall & F1 & IoU \\ \hline FC-EF & UNet & 86.91 & 80.17 & 83.40 & 71.53 & 72.61 & 52.73 & 61.09 & 43.98 & 71.63 & 67.25 & 69.37 & 53.11 \\ FC-Siam-Diff & UNet & 89.53 & 83.31 & 86.31 & 75.92 & 59.67 & 65.71 & 62.54 & 45.50 & 47.33 & 77.66 & 58.81 & 41.66 \\ FC-Siam-Conc & UNet & 91.99 & 76.77 & 83.69 & 71.96 & 66.45 & 54.21 & 59.71 & 42.56 & 60.88 & 73.58 & 66.63 & 49.95 \\ SNUNet & UNet++ & 89.18 & 87.17 & 81.86 & 78.83 & 60.60 & 72.89 & 66.18 & 49.45 & 85.60 & 81.49 & 83.50 & 71.67 \\ DTCDSCN & E8-Res34 & 88.53 & 86.83 & 87.67 & 78.05 & 53.87 & 77.99 & 63.72 & 46.76 & 63.92 & 82.30 & 71.95 & 56.19 \\ STANet & ResNet18 & 83.81 & **91.00** & 87.26 & 77.40 & 67.71 & 61.68 & 64.56 & 47.66 & 79.37 & 85.50 & 82.32 & 69.95 \\ BiT & ResNet18 & 89.24 & 89.37 & 89.31 & 80.68 & 68.36 & 70.18 & 69.26 & 52.97 & 86.64 & 81.48 & 83.98 & 72.39 \\ TransUNetCD & ResNet50 & 92.43 & 89.82 & 91.11 & 83.67 & 71.55 & 69.42 & 66.62 & 57.95 & 93.59 & 89.60 & 93.59 & 84.42 \\ ChangeFormer & MiT-B1 & 92.05 & 88.80 & 90.40 & 82.48 & 88.48 & 84.94 & 86.67 & 76.48 & 89.12 & 82.73 & 85.61 & 75.14 \\ \hline Ours & ResNet18 & **92.81** & 90.64 & **91.71** & **84.69** & **89.39** & **86.40** & **87.87** & **78.36** & **95.10** & **95.26** & **95.18** & **90.80** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison results of different change detection methods on LEVIR-CD, DSIFN-CD and WHU-CD \begin{table} \begin{tabular}{c c c|c c c c} \hline \hline Align & Perturb & Decouple & \multicolumn{4}{c}{LEVIR-CD} \\ & & Precision & Recall & F1 & IoU \\ \hline \multirow{3}{*}{\(\checkmark\)} & & & 85.59 & 86.27 & 85.93 & 75.33 \\ & & & 87.06 & 87.16 & 87.11 & 77.16 \\ & 
\(\checkmark\) & & 87.92 & 87.43 & 87.68 & 87.06 \\ & & \(\checkmark\) & **93.83** & 87.31 & 90.45 & 82.56 \\ \(\checkmark\) & \(\checkmark\) & & 92.07 & 90.2 & 91.03 & 83.54 \\ \(\checkmark\) & & \(\checkmark\) & 93.00 & 89.50 & 91.22 & 83.36 \\ & \(\checkmark\) & \(\checkmark\) & 93.38 & 89.27 & 91.28 & 83.95 \\ \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 92.81 & **90.64** & **91.71** & **84.69** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on the effectiveness of operations proposed in our approach. Figure 3: Qualitative results of different CD methods on LEVIR-CD, DSIFN-CD and WHU-CD dataset: (a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-Diff, (e) FC-Siam-Conc, (f) DTCDSCN, (g) BIT, (h) ChangeFormer, (i) Ours, and (j) Ground-truth. provides in-depth guidance for following feature extraction. In additional, we also visualize the heatmap of intermediate feature to show the impact from perturbation. Figure 5 shows the feature visualization before and after introducing the perturbation in the second stage of the backbone. It can be observed that with the perturbation-aided module, our model significantly reduces the impact of the pseudo-change (pond, road, etc.) and strengthens focus on the changeful objects. Finally, we evaluate the impact from decoupled decoder, specifically, to build the baseline, we introduce a single decoder structure which takes the concatenation of image features and difference information as input. From the Table 2, we find decoupled structure brings substantial performance gain on detection precision and IoU score. This demonstrates that decoupled decoders make full use of the complementary between image content and difference information for better change detection. Figure 6 shows the feature visualization of two shunts in our decoupled decoders. We observe that the output of the Content-agnostic Decoder can retains the profile of real semantic change. Further, the dual-decoder structure can help detect some small and detailed changes that could not be detected by pure difference information. **Verification Experiment on Hyperparameters.** Next we also conduct parameter verification experiments to test sensitivity to hyperparameters in the disturbance module on the LEVIR-CD dataset. The results are shown in Table 3. First we evaluate the effect of the ratio of the masked channels, concreived, we set the ratio \(\tau\) to 50%, 25%, 12.5% and 6.25% of the input feature dimension. The experimental results show that there exist negative impact on detection performance when the perturbation ratio is too high or too low, this is because large perturbation ratio can result in too many masked channels and loss in difference information, thus the difference prediction underfit to underlying change patterns and affect feature extraction in following stages. On the other hand, too few perturbed channels will decrease the variance in difference information, thus the difference prediction overfit to specific change patterns. Therefore, we choose 25% as the most appropriate ratio. Secondly, we evaluate the effect of different forms of deep supervision \(\mathcal{L}_{d}^{l}\). From Table 3, when replacing cross-entropy with dice loss, the results are degraded in different ratios of perturbation, because small objects account for a certain proportion in the change detection dataset, to which the dice loss is very sensitive. 
## 5 Conclusion In this paper, we revisit the issue of underutilization of difference information in previous RSI change detection methods, and propose a new change detector termed APD. The APD change detector features three carefully designed operations, i.e., Alignment, Perturbation and Decoupling, to fully leverage the difference information and boost change detection results. With a lightweight backbone, our APD detector effectively improves performance on challenging change detection benchmarks and achieves state-of-the-art results on most metrics, and extensive ablation studies verify the effectiveness of each operation. ## Acknowledgements This work was supported in part by the Natural Science Foundation of China under contract 62171139, and in part by the Zhongshan science and technology development project under contract 2020AG016. Figure 4: Visualization results of the effect of Alignment Figure 5: Feature visualization of the effect of Perturbation Figure 6: Feature visualization of the effect of the Decoupled Decoder
2306.01715
A New Galaxy Cluster Merger Capable of Probing Dark Matter: Abell 56
We report the discovery of a binary galaxy cluster merger via a search of the redMaPPer optical cluster catalog, with a projected separation of 535 kpc between the BCGs. Archival XMM-Newton spectro-imaging reveals a gas peak between the BCGs, suggesting a recent pericenter passage. We conduct a galaxy redshift survey to quantify the line-of-sight velocity difference ($153\pm281$ km/s) between the two subclusters. We present weak lensing mass maps from archival HST/ACS imaging, revealing masses of $M_{200}=4.5\pm0.8\times10^{14}$ and $2.8\pm0.7\times10^{14}$ M$_\odot$ associated with the southern and northern galaxy subclusters respectively. We also present deep GMRT 650 MHz data revealing extended emission, 420 kpc long, which may be an AGN tail but is potentially also a candidate radio relic. We draw from cosmological n-body simulations to find analog systems, which imply that this system is observed fairly soon (60-271 Myr) after pericenter, and that the subcluster separation vector is within 22$^\circ$ of the plane of the sky, making it suitable for an estimate of the dark matter scattering cross section. We find $\sigma_{\rm DM}/m=1.1\pm0.6$ cm$^2$/g, suggesting that further study of this system could support interestingly tight constraints.
David Wittman, Rodrigo Stancioli, Kyle Finner, Faik Bouhrik, Reinout van Weeren, Andrea Botteon
2023-06-02T17:36:28Z
http://arxiv.org/abs/2306.01715v2
# A New Galaxy Cluster Merger Capable of Probing Dark Matter: Abell 56 ###### Abstract We report the discovery of a binary galaxy cluster merger via a search of the redMaPPer optical cluster catalog, with a projected separation of 535 kpc between the BCGs. Archival XMM-_Newton_ spectro-imaging reveals a gas peak between the BCGs, suggesting a recent pericenter passage. We conduct a galaxy redshift survey to quantify the line-of-sight velocity difference (\(153\pm 281\) km/s) between the two subclusters. We present weak lensing mass maps from archival HST/ACS imaging, revealing masses of \(M_{200}=4.5\pm 0.8\times 10^{14}\) and \(2.8\pm 0.7\times 10^{14}\) M\({}_{\odot}\) associated with the southern and northern galaxy subclusters respectively. We also present deep GMRT 650 MHz data revealing extended emission, 420 kpc long, which may be an AGN tail but is potentially also a candidate radio relic. We draw from cosmological n-body simulations to find analog systems, which imply that this system is observed fairly soon (60-271 Myr) after pericenter, and that the subcluster separation vector is within 22\({}^{\circ}\) of the plane of the sky, making it suitable for an estimate of the dark matter scattering cross section. We find \(\frac{\sigma_{\rm DM}}{m}=1.1\pm 0.6\) cm\({}^{2}\)/g, suggesting that further study of this system could support interestingly tight constraints. Galaxy clusters (584); Dark matter (353); Galaxy spectroscopy (2171); Weak gravitational lensing (1797); Hubble Space Telescope (761) 0000-0002-8870-7880]David Wittman 0000-0002-4880-0880]Rodrigo Stancioli 0000-0002-4880-7880]Kyle Finner 0000-0002-4880-0880]Faik Bourrik 0000-0002-1883-0880]Reinout van Weeren 0000-0002-1888-7880]Andrea Botteon ## 1 Introduction A collision of two galaxy clusters dramatically reveals the contrasting behaviors of gas, galaxies, and dark matter (DM). Seminal papers on the Bullet Cluster provided a "direct empirical proof of dark matter" (Clowe et al., 2006) as well as limits on the scattering cross-section of DM particles with each other (Markevitch et al., 2004; Randall et al., 2008), aka DM "self-interaction." The Bullet constraint, \(\frac{\sigma_{\rm DM}}{m}<0.7\) cm\({}^{2}\)/g, is still quite large in particle physics terms--roughly at the level of neutron-neutron scattering. In principle, ensembles of merging clusters enable tighter constraints (Harvey et al., 2015; Wittman et al., 2018) but these are complicated by the fact that few systems have well-modeled dynamics. Specifically, the time since pericenter, pericenter speed, and viewing angle cannot be extracted from systems that have more than two merging subclusters. Even with binary mergers, other factors may hinder the study of dark matter, such as a merger axis closer to the line of sight rather than the plane of the sky or bright stars that limit deep optical observations. Hence there is interest in finding more "clean" binary systems with merger axis close to the plane of the sky. Historically, merging systems were discovered upon notice of disturbed X-ray morphology, which typically happened serendipitously in pointed observations. Meanwhile, modern optical sky surveys find tens of thousands of clusters and promise to find many more as they get wider and deeper (Racca et al., 2016; Ivezic et al., 2019). These surveys potentially contain new binary mergers, if appropriate cuts can filter out tens of thousands of more ordinary clusters. 
We have developed a new selection method based on the redMaPPer (Rykoff et al., 2014) cluster catalog, which is in turn based on the \(\sim\)10,000 deg\({}^{2}\) Sloan Digital Sky Survey (SDSS; York et al., 2000) imaging. We select clusters not dominated by a single brightest cluster galaxy (BCG), in which there is substantial angular separation between the top BCG candidates. These clusters become candidate mergers, which are then checked against archival XMM-_Newton_ and _Chandra_ data where available; if the X-ray peak is between the BCGs, the candidate is wor the study in their own right. In this paper we present the first such candidate, RM J003353.1-075210.4, which we identify as Abell 56 as explained in SS2. Additional sections in this paper present a galaxy redshift survey of the system (SS3); a weak lensing analysis (SS4); a search for analog systems in a cosmological simulation (SS5); radio observations in search of a radio relic that could outline a merger shock (SS6); and a constraint on the dark matter scattering cross section \(\frac{\sigma_{\rm DM}}{m}\)(SS7). We assume a flat \(\Lambda\)CDM cosmology with \(H_{0}=69.6\) km/s and \(\Omega_{m}=0.286\). ## 2 Abell 56: Initial Overview _Nomenclature._ The redMaPPer designation for this cluster is RM J003353.1-075210.4. The original coordinates for Abell 56 (Abell et al., 1989 ; hereafter ACO) are nearly \(5^{\prime}\) north of the redMaPPer position. Although ACO cite the positional uncertainty as \(2.5^{\prime}\), inspection of the SDSS imaging1 reveals no other clusters in the area, suggesting that the ACO coordinates are off by more than their nominal uncertainty. (The limited depth of the ACO catalog is such that any real ACO cluster must be in the redMaPPer catalog.) Indeed, the widely-used SIMBAD database2 resolves the name "Abell 56" to the redMaPPer position, as does the SDSS Navigator noted above. Hence we identify this cluster as Abell 56. Note, however, that the NASA/IPAC Extragalactic Database (NASA/IPAC Extragalactic Database (NED), 2019) resolves this name to the original ACO coordinates. Footnote 1: [https://skyserver.sdss.org/dr16/en/tools/chart/navi.aspx](https://skyserver.sdss.org/dr16/en/tools/chart/navi.aspx) This cluster has also been detected by the _Planck_ Sunyaev-Zel'dovich survey (Planck Collaboration et al., 2016), with the designation PSZ2 G109.99-70.28. This gas peak position is \(1.3^{\prime}\pm 2.4^{\prime}\) from the redMaPPer position and \(4.0^{\prime}\pm 2.4^{\prime}\) from the ACO position. Hence Planck Collaboration et al. (2016) adopted the redMaPPer position while adopting "ACO 56" as the identifier in their union catalog. As a result, a search for G109.99-70.28 on SIMBAD yields the redMaPPer position; however NED yields the much more uncertain gas peak position. _BCGs and redshifts._ Figure 1 presents two views of Abell 56: with XMM-_Newton_ contours (see below) over SDSS multiband imaging and over a single-band (F814W) archival image from the Hubble Space Telescope Advanced Camera for Surveys (HST/ACS) (see SS4). At the redMaPPer photometric redshift of 0.30, the physical scale is 4.5 kpc/arcsec. There are two galaxy subclusters separated by close to \(2^{\prime}\) (530 kpc), with the X-ray peak located along the subcluster separation vec Figure 1: Abell 56: 0.4-1.25 keV XMM-_Newton_ contours over SDSS multiband (left) and HST/ACS F814W (right) images. tor, about 32\({}^{\prime\prime}\) (140 kpc) from the southern subcluster. 
These numbers will be refined with further data in later sections of this work. The southern subcluster is dominated by a galaxy observed by the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al., 2013) to be at \(z=0.30231\); redMaPPer assigns this galaxy 84% probability of being the overall BCG. The northern subcluster has two galaxies that appear nearly equally salient in Figure 1; the eastern one (about 0.2 mag brighter) is assigned 16% probability of being the overall BCG and BOSS places it at \(z=0.30475\). redMaPPer technically assigns some nonzero BCG probability to the second-brightest galaxy in each subcluster, but in each case it is only 0.017%, which we consider negligible. Repp & Ebeling (2018) briefly considered this cluster as part of an 86-cluster sample. It is classified in their Table 6 as being in the most disturbed of their four optical morphology classes. _Merger basics._ Taking the BCGs as tracers for a first calculation of the merger geometry, we find a projected separation of 118\({}^{\prime\prime}\) (535 kpc) and a line-of-sight velocity difference of 565 km/s. For comparison, the projected separation in the X-ray selected Bullet cluster is 720 kpc (Bradac et al., 2006; Clowe et al., 2006) and those in Golovich et al. (2019) radio-selected sample of merging clusters are generally closer to 1 Mpc, indicating that more time since pericenter (TSP) has passed. This suggests the potential of optical selection to find systems with smaller separations hence less TSP. Because a complete understanding of the merging process will require snapshots of systems spanning a range of TSP, optical selection may find its place within a range of complementary selection methods. _Richness and related estimates._Rykoff et al. (2014) give the optical richness \(\lambda\) (a measure of how many galaxies are in the cluster, within a certain luminosity range below the BCG) as 128. Simet et al. (2017) calibrated the relation between weak lensing mass and \(\lambda\) (including its scatter), from which we estimate the mass of Abell 56 to be \(M_{200}=10.4^{+7.6}_{-4.6}\times 10^{14}\)\(h^{-1}\) M\({}_{\odot}\). Sereno & Ettori (2017) implemented a system for mass forecasting with proxies, taking into account various biases, and found \(M_{200}=11.49\pm 0.89\times 10^{14}\) M\({}_{\odot}\) for this system based on its redMapper richness. They also found \(M_{500}=7.09\pm 0.77\times 10^{14}\) M\({}_{\odot}\) using \(Y_{500}\), a measure of the Sunyaev-Zel'dovich effect, as a proxy. For comparison, Planck Collaboration et al. (2016) found \(M_{500}=5.62^{+0.54}_{-0.58}\times 10^{14}\) M\({}_{\odot}\) from their scaling relation based on the same \(Y_{500}\) measurement. From the scaling relations of Rozo & Rykoff (2014) one would expect the X-ray temperature \(T_{X}\) to be around 7 keV with up to 40% scatter at fixed richness. Because this is a merging cluster, the X-ray properties may vary from the scaling relations even more than usual. _X-ray properties from archival data._ The cluster was observed with the XMM-_Newton_ European Photon Imaging Camera (EPIC) in 2010 (Obs.ID 0650380401, P.I. Allen). The exposure times were 7121 s, 7127 s, and 5533 s for the MOS1, MOS2, and PN instruments, respectively. 
As the short exposure does not allow for a detailed analysis of the intracluster medium (ICM) properties and the cluster morphology, we restricted our analysis to obtaining a point-source-subtracted, exposure-corrected image, as well as a global temperature and luminosity for the cluster. We performed the data reduction using the XMM-_Newton_ Science Analysis System (SAS) version 19.0.0. We excluded periods of high soft-proton background by imposing a cutoff of 0.4 (0.8) on the soft-proton rate for the MOS (PN) detectors3, which resulted in filtered exposure times of 6817 s, 6621 s, and 4178 s for MOS1, MOS2, and PN, respectively. Only single-to-quadruple events from MOS and single-to-double events from PN were used in our analysis. Point-source detection and masking were performed by the cheese routine from the ESAS package. The contours in Figure 1 are from the 0.4-1.25 keV band after point-source masking and exposure correction, using the procedure described in the XMM ESAS Cookbook (Snowden & Kuntz, 2014) and adaptively smoothed using the adapt routine from ESAS. Footnote 3: We define MOS (PN) soft-proton events as those with energy \(E\)\(>\)10 (10\(<\)\(E\)\(<\)12) keV. The higher-than-usual baseline soft-proton rate for this observation may result in significant residual soft-proton contamination even after the exclusion of flare events. This is at least partially mitigated by the background-subtraction strategy. In order to obtain a background-subtracted spectrum, we used the double-subtraction method described in Arnaud et al. (2002). We defined the source region as a circle with a 90\({}^{\prime\prime}\) radius centered on the cluster, whereas the background was extracted from a slightly larger circular region away from the cluster. Blank-sky files (Carter & Read, 2007) were used to mitigate the effects of the spatial variation of background components across the detector. For a detailed description of the method, we refer to Arnaud et al. (2002). Using XSPEC (Arnaud, 1996), we fit the spectrum to an apec model, multiplied by a phabs model to account for galactic absorption. We obtained a total unabsorbed luminosity in the \(0.5-10.0\) keV range of \(L_{X}=3.8\pm 0.2\times 10^{44}\) erg/s and a temperature \(T_{X}=5.9^{+1.1}_{-0.8}\) keV, where the uncertainties represent the 90% confidence intervals. ## 3 Redshift Survey and Clustering Kinematics ### Redshift survey _Observational setup._ We observed Abell 56 with the DEIMOS multi-object spectrograph (Faber et al., 2003) at the W. M. Keck Observatory on July 1, 2022 (UT). The DEIMOS field of view is approximately \(16^{\prime}\times 4^{\prime}\), making it well suited to merging clusters when the long axis is placed along the subcluster separation vector. We prepared two slitmasks with approximately sixty \(1^{\prime\prime}\) wide slits in each. Galaxies were selected for targeting based on (i) a preference for brighter targets; and (ii) a preference for galaxies likely to be in the cluster based on Pan-STARRS photometric redshifts (Beck et al., 2021). Because the photometric redshifts are imprecise, this approach naturally helps probe for potential foreground/background structures that could affect the modeling of Abell 56. 
Specifically, each Pan-STARRS photometric redshift \(z_{\rm PS}\) has a corresponding uncertainty \(\sigma_{\rm PS}\) such that the likelihood of the galaxy being in a cluster at redshift \(z_{\rm cl}\) is \[\mathcal{L}\propto\frac{1}{\sigma_{\rm PS}}\exp\left(-\frac{(z_{\rm PS}-z_{\rm cl})^{2}}{2\sigma_{\rm PS}^{2}}\right) \tag{1}\] The median value of \(\sigma_{\rm PS}\) was 0.16, so a broad range of redshifts was included. We then upweighted brighter galaxies by applying a multiplicative weight (\(24-r\)), where \(r\) is the apparent \(r\) magnitude, to quantify the priority of each galaxy as input to the slitmask design software dsimulator; larger numbers indicate higher priority. We manually raised the priority of a few galaxies that potentially formed a foreground group at the north end of the field. We used the 1200 line mm\({}^{-1}\) grating, which results in a pixel scale of 0.33 A pixel\({}^{-1}\) and a resolution of \(\sim\)1 A (50 km/s in the observed frame). The grating was tilted to observe the wavelength range \(\approx\) 4200-6900 A (the precise range depends on the slit position), which at \(z\approx 0.3\) includes spectral features from the [OII] 3727 A doublet to the magnesium line at 5177 A. The total exposure time was 45 (77) minutes on the first (second) mask, divided into three (four) exposures. The seeing was roughly \(1^{\prime\prime}\), with minor variations over time. _Data reduction and redshift extraction._ We calibrated and reduced the data to a series of 1-D spectra using PypeIt (Prochaska et al., 2020; Prochaska et al., 2020). We double-checked the arc lamp wavelength calibration against sky emission lines, and found good agreement. To extract redshifts from the 1-D spectra we wrote custom Python software to emulate major elements of the approach used by the DEEP2 (Newman et al., 2013) survey using the same instrument. The throughput as a function of wavelength varies from slit to slit, hindering direct comparison to template spectra. Because throughput is generally a slowly varying function of wavelength, the spectra are compared to templates only after removing the slowly varying trends from each. First, telluric absorption features are reversed using Mauna Kea models from the PypeIt development suite. Next, we create a smooth model or unsharp mask by convolving the 1-D spectrum with a kernel 150 A wide, which is uniform but for a 10 A diameter hole in the center. Finally, the intensity of each pixel in the 1-D spectrum is expressed as a fraction of the intensity in the smooth model. The same operations are performed on redshifted versions of the galaxy templates from the Sloan Digital Sky Survey4, and a \(\chi^{2}\) value is computed for each template-redshift combination. A user then inspects the match between the data and the model with the global minimum \(\chi^{2}\), or other models at local minima, before determining whether a redshift is secure. A secure redshift may not appear at the global minimum due to poorly subtracted sky lines or other artifacts (e.g., a spurious "line" with spuriously small uncertainties may appear at the gap between CCDs). Furthermore, some slits suffer from vignetting at the red end, which appears as a drop in intensity too steep to be removed by the unsharp masking process. In these cases the user specifies a maximum wavelength to consider for template matching.
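As a rough illustration of the continuum normalization and template matching described above, the sketch below mimics the unsharp-mask division and the \(\chi^{2}\) grid over trial redshifts; the function names, the inverse-variance weighting, and the way templates are supplied are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def unsharp_normalize(wave, flux, kernel_width=150.0, hole=10.0):
    """Divide a 1-D spectrum by a smooth model built with a uniform kernel
    ~150 A wide that has a 10 A diameter hole at its center."""
    dlam = np.median(np.diff(wave))
    half = int(round(0.5 * kernel_width / dlam))
    offsets = np.arange(-half, half + 1) * dlam
    kernel = ((np.abs(offsets) <= kernel_width / 2) &
              (np.abs(offsets) > hole / 2)).astype(float)
    kernel /= kernel.sum()
    smooth = np.convolve(flux, kernel, mode="same")
    return flux / np.where(smooth > 0, smooth, np.nan)

def chi2_vs_redshift(wave, norm_flux, ivar, template_wave, template_flux, z_grid):
    """Chi^2 of the normalized spectrum against a template (assumed to be
    continuum-normalized the same way), on a grid of trial redshifts."""
    chi2 = np.full(len(z_grid), np.inf)
    for i, z in enumerate(z_grid):
        model = np.interp(wave, template_wave * (1.0 + z), template_flux,
                          left=np.nan, right=np.nan)
        good = np.isfinite(model) & np.isfinite(norm_flux)
        if good.sum() > 100:
            chi2[i] = np.sum(ivar[good] * (norm_flux[good] - model[good]) ** 2)
    return chi2

# The redshift uncertainty can then be estimated from the curvature of the
# chi^2 curve about its minimum, with 1e-4 added in quadrature as in the text.
```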
A negligible fraction of slits contained stars, so we did not include stellar templates in the automated search; users can manually classify a spectrum as a star without extracting a redshift. Footnote 4: Available at [https://classic.sdss.org/dr7/algorithms/spectemplates/](https://classic.sdss.org/dr7/algorithms/spectemplates/); we used templates 23 through 27. The uncertainty in the redshift is initially computed from the curvature of the \(\chi^{2}\) surface about the minimum, and is typically \(\lesssim 10^{-4}\) (23 km/s in the frame of the cluster). We compared redshifts obtained by different users on different computing hardware, operating systems, and Python installations. We found that user-dependent uncertainty is also \(\lesssim 10^{-4}\), mostly due to specification of the maximum wavelength. We therefore add \(10^{-4}\) in quadrature to the uncertainty derived from the curvature of the \(\chi^{2}\) surface to derive a final uncertainty estimate. We found 54 and 48 secure redshifts in the two masks respectively, for a total of 102. These are listed in Table 1. \begin{table} \begin{tabular}{l c c c} \hline \hline RA (deg) & DEC (deg) & z & uncertainty \\ \hline \end{tabular} \end{table} Table 1: Galaxy redshifts _Comparison to archival redshifts._ We searched NED for archival spectroscopic redshifts within a radius of \(5^{\prime}\), and found 16 galaxies, largely from BOSS (Dawson et al., 2013). Of these, five were galaxies that we had targeted. The mean redshift difference between independent measurements of the same target is 9 km/s, with an rms scatter of 9 km/s. We then removed the duplicates and merged the catalogs from NED and from our two masks to produce a final catalog of 113 galaxies. Figure 2 shows a histogram of these redshifts. Figure 2: Redshift histogram, with inset showing the redshift interval around Abell 56. ### Subclustering and kinematics _Non-Abell 56 structures._ Figure 2 reveals a potential background cluster at \(z=0.37\), and possibly another at \(z=0.46\). To assess how strongly clustered these galaxies are on the sky, Figure 3 shows a sky map color-coded by redshift. Galaxies in the putative cluster at \(z=0.37\) (0.46) are shown as open (closed) green circles, while galaxies near \(z=0.30\) (i.e., associated with Abell 56) are shown with a continuous color map that contains no green. Neither set of background galaxies shows signs of clustering in space. Furthermore, we estimate the velocity dispersion of each set using the biweight estimator (Beers et al., 1990) and find only \(279\pm 45\) (\(105\pm 48\)) km/s for the structure at \(z=0.37\) (0.46). Uncertainties on biweight estimators are obtained by the jackknife method throughout this paper. These velocity dispersions are far less than the velocity dispersion of Abell 56 (below), suggesting that they are an order of magnitude less massive and unlikely to substantially contaminate the weak-lensing and X-ray maps presented below. _Abell 56._ Of 67 galaxies in the \(0.285\leq z\leq 0.314\) window, the biweight estimate for the systemic redshift is \(0.30256\pm 0.00058\). At this redshift, the physical scale is \(4.521\) kpc/arcsec given our adopted cosmological model (Wright, 2006). The redshift distribution is compatible with a single Gaussian, according to a Kolmogorov-Smirnov test. This is consistent with the low line-of-sight velocity component \(\Delta v_{\rm los}\) suggested by the archival redshifts of the north and south BCGs.
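For reference, the biweight location and scale estimators of Beers et al. (1990), with jackknife uncertainties as used throughout the paper, can be sketched as follows; the tuning constants (6.0 for location, 9.0 for scale) are the conventional choices and are assumed here.

```python
import numpy as np

def biweight_location(x, c=6.0):
    """Tukey biweight estimate of the central location (Beers et al. 1990)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return med
    u = (x - med) / (c * mad)
    w = np.abs(u) < 1
    return med + np.sum((x[w] - med) * (1 - u[w]**2)**2) / np.sum((1 - u[w]**2)**2)

def biweight_scale(x, c=9.0):
    """Biweight estimate of scale (the velocity dispersion for velocity data)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    u = (x - med) / (c * mad)
    w = np.abs(u) < 1
    num = np.sum((x[w] - med)**2 * (1 - u[w]**2)**4)
    den = np.sum((1 - u[w]**2) * (1 - 5 * u[w]**2))
    return np.sqrt(n * num) / np.abs(den)

def jackknife_error(x, estimator):
    """Jackknife uncertainty of an estimator applied to sample x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean())**2))

# Example: rest-frame velocities of subcluster members
# v = 299792.458 * (z_members - z_cluster) / (1 + z_cluster)
# sigma = biweight_scale(v); err = jackknife_error(v, biweight_scale)
```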
The biweight estimate of velocity dispersion is \(1264\pm 145\) km/s--rather large, but likely to be inflated by merger activity as noted below. We use the mc3gmm code (Golovich et al., 2019) to assign galaxy membership to subclusters. This code models the distribution of galaxies in (RA, Dec, z) space as a mixture of \(N\) elliptical Gaussian profiles (i.e., subclusters), with physically motivated priors on subcluster variance in each dimension as well as covariance (i.e., ellipticity and rotation). \(N\) is determined by the user; we set \(N\)=2 based on the optical imaging and further supported by the lensing data presented in SS4.5 The user sets nonoverlapping bounds for the central (RA, Dec, z) of each subcluster to avoid degeneracies; mc3gmm maximizes the likelihood by adjusting the parameters within those bounds. We run mc3gmm on the galaxies in the redshift window \(0.285\leq z\leq 0.314\) and the result is shown in Figure 4. The velocities of the subclusters are nearly identical, suggesting that the relative motion of the subclusters is in a direction close to the plane of the sky. The biweight estimate for the systemic redshift of the 33 (34) north (south) members is \(0.30298\pm 0.00099\) (\(0.30231\pm 0.00071\)). This yields \(\Delta v_{\rm los}=153\pm 281\) km/s. Footnote 5: Golovich et al. (2019) varied \(N\) and for each merging system found the value of \(N\) that best satisfied the Bayesian Information Criterion (BIC), thus deriving \(N\) from the spectroscopic data alone. Here, the subcluster separation is smaller than typically seen in Golovich et al. (2019), and the spectroscopic data points are fewer, making it more difficult to meet BIC criteria for \(N\)\(>\)1 based on the spectroscopy alone. The biweight velocity dispersion is \(1283\pm 236\) km/s for the north subcluster, and \(1251\pm 191\) for the south. Simulations of merging clusters (e.g., Pinkney et al., 1996; Takizawa et al., 2010) show that a pericenter passage in the plane of the sky boosts the observed velocity dispersion by a factor of \(\approx\)1.5 for hundreds of Myr afterward. Hence, one should not interpret these large velocity dispersions as indicative of extremely massive clusters. ## 4 Weak Lensing Analysis We perform a weak-lensing analysis on the HST F814W imaging. Galaxies are detected in the F814W image with SExtractor (Bertin and Arnouts, 1996). For each galaxy, PSF models are generated following the method of Jee et al. (2007) by utilizing their publicly available PSF catalog. Each galaxy is fit with a PSF-convolved Gaussian distribution and the complex ellipticities are recorded (our ACS weak-lensing pipeline is outlined in Finner et al., 2017, 2021, 2023). Objects with ellipticity greater than 0.8, ellipticity uncertainty Figure 4: Corner plot showing distribution of subcluster members in RA, DEC, and velocity space relative to the overall mean. Members of the north (south) subcluster are shown in blue (red). Figure 3: Redshift map. Abell 56 galaxies are coded with a continuous color map, while galaxies in the putative background cluster at \(z=0.37\) (0.46) are shown as open (closed) green circles. XMM-_Newton_ contours are shown in red. greater than 0.3, and intrinsic size (pre-psf) less than 0.5 pixels are removed to prevent spurious sources such as diffraction spikes around bright stars and poorly fit objects from entering the source catalog. 
The next step is to eliminate as many foreground and cluster galaxies as possible, while still retaining a sizeable sample of background sources. With only single-band imaging available, we select galaxies with F814W AB magnitudes fainter than 24. We apply this magnitude cut to the GOODS-S photometric redshift catalog (Dahlen et al., 2013) and find that the contamination by foreground galaxies is expected to be \(\sim 2\%\). Cluster galaxies may contribute additionally to the contamination. As their contamination should be radially dependent, we test the radial dependence of the source density. We find it to be flat, which suggests cluster galaxies are not significantly contaminating our source catalog. The final source catalog contains \(\sim 43\) galaxies arcmin\({}^{-2}\). The source catalog is then provided to the FIATMAP code (Wittman et al., 2006) to create a surface mass density map. FIATMAP convolves the observed shear field with a kernel of the form \[r^{-2}(1-\exp(\frac{-r^{2}}{2r_{i}^{2}}))\exp(\frac{-r^{2}}{2r_{o}^{2}}) \tag{1}\] where \(r_{i}\) and \(r_{o}\) are inner and outer cutoffs, respectively. The inner cutoff is necessary to prevent amplification of shape noise in sources at small \(r\), and was set to 50 arcsec. The outer cutoff suppresses noise that may come from unrelated structures along the line of sight at large projected separations, and is of limited value in a small field; we set it to 100 arcsec, which is comparable to the radius of the field. The results were pixelized onto a map with 1.5 arcsec pixels. In addition to this fiducial map, a family of viable reconstructions can be made by bootstrap resampling the shear catalog (see below). Figure 5 shows the fiducial map as a set of contours overlaid on a Pan-STARRS multiband image6(Waters et al., 2020). Two weak lensing peaks are evident, associated with (albeit slightly offset from) each galaxy subcluster, with the X-ray peak in between. This confirms the basic merger scenario developed above. Footnote 6: Retrieved from [http://pslimages.stsci.edu/cgi-bin/pslcutouts](http://pslimages.stsci.edu/cgi-bin/pslcutouts). We estimate the mass of each subcluster by fitting a two-halo NFW model with a fixed mass-concentration relation from Diemer and Joyce (2019). To achieve the best-fit model, the shear of the two-halo model is derived at the position of each background galaxy and the chi-square is minimized. The effective distance ratio of the sources is set by the effective distance ratio of GOODS-S sources fainter than 24th magnitude. The best-fit two-halo model has a mass of \(M_{200}=4.5\pm 0.8\times 10^{14}\) M\({}_{\odot}\) and \(M_{200}=2.8\pm 0.7\times 10^{14}\) M\({}_{\odot}\) for the south and north subclusters, respectively. We allow the centroid of each halo to be fit and they converge to the projected mass distribution peaks. On the other hand, if we fix the halo centroids to the BCGs, we find the south and north subcluster masses decrease by 10% and 60%, respectively. To test the dependence of the mass estimate on our choice of magnitude cut, we vary the magnitude constraint on the background catalog from 22nd to 25th magnitude and find that the mass estimates decrease for brighter magnitude cuts but within the mass uncertainty. To estimate the total mass of the cluster, we simulate two NFW halos at the projected separation of the two mass peaks. 
Integrating the model from the center of mass to \(R_{200}\), we estimate the total mass of the cluster to be \(M_{200}=9.7\pm 2.0\times 10^{14}\) M\({}_{\odot}\) (\(M_{500}=7.1\pm 1.6\times 10^{14}\) M\({}_{\odot}\)). To further quantify the detection significance, we bootstrap resampled the source catalog to generate 1000 realizations of the mass map. As expected, the mean map yielded by these resamplings matches the fiducial map yielded by the original catalog. At any given sky position, we can measure the rms variation of surface mass density across the map realizations to obtain a noise map. The ratio of the fiducial map to this noise Figure 5: Surface mass density contours from weak lensing (white) overlaid on a Pan-STARRS multiband image and red XMM-_Newton_ surface brightness contours. The small closed contour between the subclusters is a trough. Green GMRT 650 MHz contours (§6) start at 70 \(\mu\)Jy/beam with increments of 680 \(\mu\)Jy/beam and a 4\({}^{\prime\prime}\) beam. map is then a significance map. The peak of the southern (northern) subcluster is detected at a significance of 6.3 (5.5). The projected separation between the mass peaks, \(d_{\rm proj}=438\) kpc, is important for the dynamical modeling in SS5. To estimate the uncertainty on the peak locations, the peak from each of the 1000 realizations was recorded. The 1000 peaks were then passed to a \(k\)-means algorithm with the number of distributions fixed to two. The \(k\)-means algorithm iteratively calculates the centroid of the peaks and assigns peaks to each centroid until the centroid converges. This procedure yields two distributions of mass-peak locations, which are then processed with a kernel density estimator to find the \(1\sigma\) and \(2\sigma\) uncertainties. We find that the southern mass peak is consistent with its BCG at the \(1\sigma\) level. In contrast, the northern mass peak is offset \(19.2\pm 4.9\) arcsec (\(87\pm 22\) kpc) to the south of the northern BCG. We address this offset further in SS7. Our immediate goal here is to define a 68% confidence interval on the projected separation between mass peaks, which we find to be \(96.9\pm 45.6\) arcsec (\(438\pm 206\) kpc). To further check the halo position uncertainties, we consider again the two-halo fit. As a model-driven procedure, this should be more robust against edge effects than the mapping procedure, which convolves the observed shear field. Nevertheless, as noted above, the halo center parameters converge to the projected mass distribution peaks. The positional uncertainties from the two-halo fit are smaller than those from the resampled mapping method. Hence our adoption of the values from the latter method is the more cautious approach. ## 5 Simulated Analogs and Dynamical Parameters We find analog systems in the Big Multidark Planck (BigMDPL) Simulation (Klypin et al., 2016) using the method of Wittman et al. (2018) and Wittman (2019). The observables used to constrain the likelihood of any given analog and viewing angle are: * the projected separation between mass peaks \(d_{\rm proj}\), for which we use \(438\pm 206\) kpc from SS4. * the line-of-sight relative velocity \(\Delta v_{\rm los}\), for which we use \(153\pm 281\) km/s from SS3. * the subcluster masses, for which we use \(M_{200}=4.5\pm 0.8\times 10^{14}\) M\({}_{\odot}\) and \(M_{200}=2.8\pm 0.7\times 10^{14}\) M\({}_{\odot}\) for the south and north subclusters, respectively, from SS4. Note that dynamical timescales and velocities depend only weakly on the masses. 
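The analog-matching step can be illustrated by weighting each simulated analog (and viewing angle) by the product of Gaussian likelihoods of the observables listed above; this is a minimal sketch with hypothetical column names, and the quantile-based interval shown here is a simplification of the highest-probability-density intervals quoted in the paper.

```python
import numpy as np

# Observed constraints (values quoted in the text)
obs = {
    "d_proj": (438.0, 206.0),     # projected separation, kpc
    "dv_los": (153.0, 281.0),     # line-of-sight velocity difference, km/s
    "m_south": (4.5e14, 0.8e14),  # Msun
    "m_north": (2.8e14, 0.7e14),  # Msun
}

def analog_weights(analogs):
    """Weight each analog by independent Gaussian likelihoods of the observables.
    `analogs` is assumed to be a dict of equal-length numpy arrays whose keys
    match `obs` (the column names are hypothetical)."""
    logw = np.zeros(len(analogs["d_proj"]))
    for key, (mu, sigma) in obs.items():
        logw += -0.5 * ((analogs[key] - mu) / sigma) ** 2
    w = np.exp(logw - logw.max())
    return w / w.sum()

def weighted_interval(values, weights, level=0.68):
    """Equal-tail weighted quantile interval for a derived quantity."""
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    lo = values[order][np.searchsorted(cdf, (1 - level) / 2)]
    hi = values[order][np.searchsorted(cdf, 1 - (1 - level) / 2)]
    return lo, hi

# e.g. weighted_interval(analogs["time_since_pericenter"], analog_weights(analogs))
```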
Table 2 lists the resulting highest probability density confidence intervals for time since pericenter (TSP), pericenter speed \(v_{\rm max}\), viewing angle \(\theta\) (defined as the angle between the subcluster separation vector and the line of sight, i.e. 90\({}^{\circ}\) when the separation vector is in the plane of the sky), and the angle \(\varphi\) between the current separation and velocity vectors. \(\varphi\) is potentially an indicator of how head-on the trajectory is, as well as of merger phase (surpassing 90\({}^{\circ}\) at apocenter). The likelihood ratio of analogs in the outbound vs. returning phase is 19:1. Table 2 also lists the confidence intervals for the dynamical parameters when the analysis is restricted to the outbound scenario. These particular parameters are not sensitive to the current merger phase. ## 6 Radio Observations and Results Pericenter speeds in cluster mergers are typically greater than the sound speed in the gaseous ICM, so each subcluster launches a shock in the ICM of the other subcluster (Ha et al., 2018). In hydrodynamic simulations of the Bullet (Springel & Farrar, 2007) the shock begins at pericenter speed and loses very little speed over time, while the corresponding subcluster falls behind due to the gravity of the other subcluster. Our Abell 56 analogs do not include gas, but we use the pericenter speed, gravitational subcluster slowing, and analog time of observation to predict the separation between a subcluster and a hypothetical constant-velocity shock. We find \(\sim\)200 kpc separation in the outbound phase. The analogs indicate that an additional \(\approx\)1.2 Gyr passes before the subclusters return to the same projected separation en route to a second pericenter. In this time, a hypothetical constant-velocity shock would have proceeded over 2 Mpc further out. Therefore, observing the shock location could further disambiguate between outbound and returning scenarios. This toy model glosses over the complexities of ICM properties affecting the shock speed, but the timescale of the returning scenario is so long that the subcluster-shock separation remains \(>\)1 Mpc even with factor-of-two variations in shock speed, or complete stalling of the shock after \(\sim\)500 Myr. \begin{table} \begin{tabular}{c c c c c} Scenario & TSP (Myr) & \(v_{\rm max}\) (km/s) & \(\theta\) (deg) & \(\varphi\) (deg) \\ \hline \multicolumn{5}{c}{68\% CI} \\ \hline All & 60-271 & 1960-2274 & 68-90 & 6-33 \\ Outbound & 90-291 & 1952-2282 & 68-90 & 0-26 \\ & & 95\% CI & & \\ \hline All & 0-451 & 1729-2510 & 42-90 & 0-86 \\ Outbound & 0-366 & 1681-2489 & 44-90 & 1-67 \\ \end{tabular} \end{table} Table 2: Dynamical parameters from analogs Shocks are often detected as discontinuities in the X-ray surface brightness, but in this case the archival X-ray data are too shallow to support such a detection. Shocks may also inject sufficient non-thermal energy into charged particle motion that electrons emit synchrotron radiation, detectable as an extended radio source known as a radio relic (van Weeren et al., 2019). Archival 150 MHz data from the TIFR GMRT Sky Survey (TGSS) Alternative Data Release (Intema et al., 2017) show extended emission 270 kpc south of the southern BCG. Due to the large synthesized beam size (\(25\arcsec\)) and an accompanying point source, it is difficult to further characterize this emission using the TGSS data alone. Cuciti et al. (2021) observed the cluster at 1.5 GHz and \(\approx 12\arcsec\) beam using the Jansky Very Large Array (JVLA). 
The source in question appears at the southern edge of their Figure A.1. However, they classified this cluster as having no extended emission, presumably because they pointed at the original Abell coordinates, about \(7\arcmin\) north of the source in question, and because they were primarily searching for radio halos rather than relic candidates. We also checked the VLASS (Lacy et al., 2020) and GLEAM (Wayth et al., 2015) surveys, and found no evidence of a halo or relic. We were granted 15 hours on the upgraded GMRT (uGMRT, Gupta et al., 2017) for Band 4 (550-900 MHz) observations of Abell 56 (proposal code 42_069) with much smaller synthesized beam size (\(4\arcsec\)). Observations were taken on 20 June 2022 and 24 June 2022. We used the SPAM pipeline (Intema, 2014) to calibrate the visibilities, and used wsclean(Offringa et al., 2014; Offringa and Smirnov, 2017) to create an image. The source 270 kpc south of the southern BCG extends for \(\approx 420\) kpc (\(93\arcsec\)) in the east-west direction and is barely resolved in the north-south direction. Its contours are overlaid in green on the Pan-STARRS image in Figure 5. This makes it clear that the bright point source at the western end of the radio emission is coincident with a galaxy; our redshift survey confirms that this galaxy is in the cluster. The most likely explanation for most of this emission is an AGN tail. Given the orientation of this feature which matches that expected of a merger shock, it is worth considering that AGN tails play a role in the formation of some relics by providing seed electrons that are re-accelerated by the passage of a shock (e.g., van Weeren et al., 2017). In such cases there is spectral aging across the narrow axis of the tail in addition to the expected aging from head to tail. Exploring this possibility would require high angular resolution spectral maps. Finally, we note that there is no evidence of a relic much further south as expected in the returning scenario, nor of a relic on the north side of the north subclusters. ## 7 Dark Matter Cross Section Estimate Spergel and Steinhardt (2000) first suggested that dark matter (DM) particles may scatter off each other in a process distinct from the interactions with standard model particles that are probed by direct detection experiments. The cross section for such scattering is usually quoted in terms of \(\frac{\sigma_{\rm DM}}{m}\), the cross section per unit mass, because the mass of the DM particle is unknown. Markevitch et al. (2004) laid out multiple physical arguments for inferring this parameter, at least at a back-of-the-envelope level, from merging cluster observations. Simulations (e.g., Randall et al., 2008; Robertson et al., 2017) are required to properly interpret such observations. However, as a first estimate to motivate deeper observations and perhaps simulations of Abell 56, we present an initial back-of-the-envelope estimate. One physical argument is that momentum exchange will slow the DM halos relative to the galaxies, resulting in a DM-galaxy offset. Markevitch et al. (2004) developed an argument based on finding no significant offset: requiring that the scattering depth be \(<\!1\) leads to an upper limit on \(\frac{\sigma_{\rm DM}}{m}\). In this case, there is a significant offset in the north, so we turn to the method of Harvey et al. (2014) and Harvey et al. (2015), which uses the ratio of gas-galaxy and DM-galaxy offsets. 
This method relies on an analogy between DM and the much more interactive gas, so it has some limitations, but it also reduces some sources of observational uncertainty. Foremost, it eliminates the assumption that the surface mass density relevant to DM scattering--the volume density integrated along the merger axis--equals the surface mass density we can measure, which is nearly _perpendicular_ to the merger axis. In fact clusters are triaxial (Harvey et al., 2021) and align to some extent with their neighboring clusters (Joachimi et al., 2015), hence one may expect greater column density along the merger axis. To the extent this pattern is echoed by the gas, the gas analogy may reduce this systematic error. Second, the gas analogy eliminates any uncertainty due to viewing angle, as that angle applies equally to the gas-galaxy and DM-galaxy separations. The chief limitation of the gas analogy is that it breaks down over time. SIDM simulations show that, given enough time, the galaxies within each subcluster fall back to their associated DM--and continue oscillating (Kim et al., 2017). Around the time of apocenter between subclusters, the DM-galaxy offset in each subcluster has a sign opposite that predicted by the gas analogy. Hence, the gas analogy should not be applied if the system is observed long after pericenter. The analogs indicate that Abell 56 is observed much closer to peri center than apocenter, so the gas analogy is appropriate here for a first estimate. In the southern subcluster, the DM-BCG separation7 is \(7\pm 16\) kpc and the gas-BCG separation is \(111\pm 38\) kpc, yielding \(\frac{\sigma_{\rm DM}}{m}=0.35\pm 1.03\) cm\({}^{2}\)/g, consistent with zero. In the northern subcluster, the DM-BCG separation is \(87\pm 22\) kpc, while the gas-BCG separation is unclear because it is difficult to identify a gas peak specifically associated with the northern subcluster. To be conservative we use the offset to the main gas peak, \(424\pm 38\) kpc. This yields \(\frac{\sigma_{\rm DM}}{m}=1.43\pm 0.61\) cm\({}^{2}\)/g. Multiplying the two likelihoods yields \(\frac{\sigma_{\rm DM}}{m}=1.10\pm 0.64\) cm\({}^{2}\)/g. Footnote 7: All separations in this paragraph are quoted after projecting them onto the merger axis, but we note that the components perpendicular to the merger axis are generally negligible. We performed a few checks on the statistical significance of the offset in the north. None of the 1000 bootstrap realizations of the convergence map in SS4 placed the overall mass peak as far north as the northern BCG, and only three of them placed a local mass peak (defined as a peak in the northern half of the field) that far north. We emphasize the tentative nature of the dark matter constraint. More work will be needed to understand why the northern subcluster has a significant DM-BCG offset while the south does not. Ground-based weak lensing, or more space-based pointings, may be helpful to reduce any systematic uncertainties related to the relative small footprint of the ACS data. Deeper imaging may reveal strongly lensed sources that could lead to more precise mass models. X-ray or radio confirmation of a shock position could further build confidence in the merger scenario. Even without detection of a shock, deeper data on the overall X-ray morphology combined with hydrodynamical simulations would greatly advance understanding of this merger. 
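As an aside on how such offsets are typically quantified (cf. footnote 7), the sketch below projects a peak separation onto the merger axis and propagates positional uncertainties by Monte Carlo; the coordinate convention and the isotropic Gaussian positional errors are assumptions for illustration, not the procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_offset(p_from, p_to, axis_start, axis_end, sigma, n=100000):
    """Component of the vector (p_to - p_from) along the merger axis
    (axis_start -> axis_end), with Monte Carlo uncertainties.

    Positions are 2-D sky coordinates in kpc; `sigma` is an isotropic 1-sigma
    positional uncertainty (kpc) applied to p_to (e.g., a lensing mass peak).
    """
    axis = np.asarray(axis_end, float) - np.asarray(axis_start, float)
    unit = axis / np.linalg.norm(axis)
    samples = rng.normal(np.asarray(p_to, float), sigma, size=(n, 2))
    proj = (samples - np.asarray(p_from, float)) @ unit
    return proj.mean(), proj.std()

# Illustrative call with placeholder coordinates (not measured values):
# offset, err = project_offset(bcg_north, dm_peak_north, bcg_south, bcg_north, sigma=22.0)
```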
## 8 Summary and Discussion We have presented a new binary, dissociative merging galaxy cluster discovered by cross-referencing archival X-ray data with locations of bimodal redMaPPer clusters. The selection technique has the potential to be applied more widely, as optical surveys continue to cover more area more deeply than ever before. In particular, the southern sky may provide new targets via the 5000 deg\({}^{2}\) Dark Energy Survey (DES; Abbott et al., 2018) and eventually the deeper 20,000 deg\({}^{2}\) Legacy Survey of Space and Time (LSST; LSST Science Collaborations et al., 2009). Finding the rare merger through pointed X-ray followup of selected candidates will require very careful selection. The forthcoming eROSITA X-ray survey could enable more of a cross-correlation approach where candidates are selected based on joint optical and X-ray properties. This particular cluster promises to be useful for constraints on \(\frac{\sigma_{\rm DM}}{m}\), given that its merger axis is close to the plane of the sky and its trajectory was sufficiently head-on to provide a substantial separation between the gas peak and the main BCG. The lensing map presented here is based on a single orbit of ACS time, and should be supplemented with deeper and wider data to better understand why there is a significant offset in the north but not in the south. Hydrodynamical simulations could shed light on whether this could happen in a Cold Dark Matter (CDM) scenario, perhaps with projection effects or other complications not identified here. Such simulations should also be compared to deeper X-ray maps to confirm that we understand the merger scenario. To place this system in context with other merging clusters with the potential to probe \(\frac{\sigma_{\rm DM}}{m}\), we refer to Table 1 of Wittman et al. (2018), which ranked the importance of various subclusters used in their ensemble analysis and that of Harvey et al. (2015). In Harvey et al. (2015), the measurement uncertainties on the "star-gas" separation \(\delta_{\rm SG}\) and the "star-interacting DM" separation \(\delta_{\rm SI}\) were assumed to be the same for all subclusters in the ensemble. Wittman et al. (2018) noted that this resulted in a particularly simple analytic expression for the (unnormalized) inverse-variance weight of a given subcluster in an ensemble: \(\frac{\delta_{\rm SG}^{2}}{1+\delta_{\rm SI}^{2}/\delta_{\rm SG}^{2}}\). By glossing over the measurement uncertainties in any given observation, this quantifies the importance of a subcluster in a hypothetical ensemble where all subclusters are equally well observed. After normalizing this weight in the same way as did Wittman et al. (2018) for their Table 1, we find that the southern subcluster of Abell 56 would appear in eighth place on the list of usable subclusters (additional subclusters with formally greater weight were marked as unusable in that table due to various complications). The northern subcluster of Abell 56 is difficult to place on this table because only an upper limit, not a measurement, is available for \(\delta_{\rm SG}\). More X-ray data will be needed to determine the constraining potential of this substructure. ###### Acknowledgements. RJvW acknowledges support from the ERC Starting Grant ClusterWeb 804208. We thank Nissim Kanekar for help with GMRT exposure time calculations, and Huib Intema for help with the SPAM pipeline. We thank the staff of the GMRT that made these observations possible. 
GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed at the Space Telescope Science Institute. Keck:II (Deimos), HST (ACS), GMRT, XMM SAS (v19.0.0; Gabriel et al. 2004), mc3gmm code (Golovich et al. 2019), FIATMAP code (Wittman et al. 2006), SExtractor (Bertin & Arnouts 1996)
2310.05295
Visual Storytelling with Question-Answer Plans
Visual storytelling aims to generate compelling narratives from image sequences. Existing models often focus on enhancing the representation of the image sequence, e.g., with external knowledge sources or advanced graph structures. Despite recent progress, the stories are often repetitive, illogical, and lacking in detail. To mitigate these issues, we present a novel framework which integrates visual representations with pretrained language models and planning. Our model translates the image sequence into a visual prefix, a sequence of continuous embeddings which language models can interpret. It also leverages a sequence of question-answer pairs as a blueprint plan for selecting salient visual concepts and determining how they should be assembled into a narrative. Automatic and human evaluation on the VIST benchmark (Huang et al., 2016) demonstrates that blueprint-based models generate stories that are more coherent, interesting, and natural compared to competitive baselines and state-of-the-art systems.
Danyang Liu, Mirella Lapata, Frank Keller
2023-10-08T21:45:34Z
http://arxiv.org/abs/2310.05295v2
# Visual Storytelling with Question-Answer Plans ###### Abstract Visual storytelling aims to generate compelling narratives from image sequences. Existing models often focus on enhancing the representation of the image sequence, e.g., with external knowledge sources or advanced graph structures. Despite recent progress, the stories are often repetitive, illogical, and lacking in detail. To mitigate these issues, we present a novel framework which integrates visual representations with pretrained language models and planning. Our model translates the image sequence into a _visual prefix_, a sequence of continuous embeddings which language models can interpret. It also leverages a sequence of question-answer pairs as a _blueprint plan_ for selecting salient visual concepts and determining how they should be assembled into a narrative. Automatic and human evaluation on the VIST benchmark Huang et al. (2016) demonstrates that blueprint-based models generate stories that are more coherent, interesting, and natural compared to competitive baselines and state-of-the-art systems. ## 1 Introduction Visual storytelling involves narrating an engaging and logically coherent story based on a sequence of images (see the example in Figure 1). The task lies at the intersection of natural language processing and computer vision and has recently attracted increasing interest from both communities Wang et al. (2022); Hsu et al. (2021); Xu et al. (2021); Chen et al. (2021); Hsu et al. (2020); Wang et al. (2020); Huang et al. (2016). Visual storytelling differs from image captioning, which typically focuses on generating descriptive text, e.g., by identifying and depicting objects within an image. It requires a deeper understanding of how images and the events they illustrate relate to each other in order to create a convincing narrative. Visual storytelling is commonly modeled as a two-stage process. The image sequence is first encoded into a representation which typically includes image embeddings and detected objects. Subsequently, a decoder generates a story token by token based on the encoding of the image sequence. Recent work has mainly focused on enhancing the first stage of the generation process e.g., by leveraging external knowledge sources Hsu et al. (2021); Chen et al. (2021); Hsu et al. (2020); Yang et al. (2019). Advanced representations for image sequences have also been explored, such as scene graphs Hong et al. (2020) and story graphs Hsu et al. (2021). Despite recent progress, these methods struggle to produce meaningful narratives, are prone to hallucination and repetition, often generate vague sentences, and have difficulty identifying salient visual concepts. We attribute the lack of story quality to at least two reasons. Previous work on text-based genera Figure 1: Blueprint annotation for a visual story. Color-coded answers are extracted from the gold story. Questions are generated by feeding the answers and the gold story as context to a pretrained question generator. tion has demonstrated that _planning_ can improve story coherence, allowing to control the trajectory of events, the characters described and their actions (Yao et al., 2019; Xu et al., 2018; Rashkin et al., 2020; Goldfarb-Tarrant et al., 2020; Fan et al., 2019; Yang et al., 2022). However, planning has not been considered in the context of visual storytelling, existing models adopt black-box architectures which are not particularly controllable or interpretable. 
Another limitation concerns the nature of current models which are essentially trained from scratch, and as a result have limited language modelling and generalization capabilities (they only see multimodal training samples; see top of Figure 1). Although pretrained language models (Raffel et al., 2020; Lewis et al., 2020; Brown et al., 2020) have been widely adopted for general-purpose story generation, their potential for visual storytelling remains unexplored. In this work we propose an approach to visual storytelling which integrates pretrained language models with visual representations and incorporates an intermediate planning step before generating the full story. Our encoder translates the image sequence into a _visual prefix_, a sequence of continuous embeddings which language models can interpret. Following Narayan et al. (2022), we represent plans as a sequence of question-answer pairs, called _blueprints_, which serve as a proxy for content selection (i.e., what to say) and planning (i.e., in what order). Blueprints are loosely related to the Question-under-Discussion (QUD) theory of discourse (Larsson, 2002; Roberts, 2012; De Kothy et al., 2020), which posits that text structure can be analyzed by identifying _implicit_ questions raised and answered by subsequent spans of text. We augment visual storytelling training data with story blueprints (see Figure 1), which we obtain automatically thanks to state-of-the-art question generation technology. We fine-tune pretrained language models to generate blueprints from image sequences _and_ the stories based on them. We showcase two types of storytelling models, which vary in how the planning mechanism is implemented. A _top-down_ model generates the blueprint first and then continues to generate the corresponding story in one go, whereas an _integrated_ model interleaves planning with text generation rather than determining a plan in advance; generation is iteratively conditioned on the image input, the blueprint and the story generated so far. Experiments on the VIST benchmark (Huang et al., 2016) show that blueprint-based models generate more coherent, interesting, and human-like stories compared to the state of the art and large language models (LLMs) like GPT-3.5, according to automatic and human evaluation. ## 2 Related Work Visual StorytellingHuang et al. (2016) introduced visual storytelling as a vehicle for developing AI tools with human-like understanding of grounded event structure and linguistic abilities that go beyond descriptive language. While earlier work (Gonzalez-Rico and Fuentes-Pineda, 2018; Kim et al., 2018) employed simple encoder-decoder architectures (using CNNs to extract visual features and RNNs to generate text), more recent methods (Xu et al., 2021; Chen et al., 2021; Hsu et al., 2020; Yang et al., 2019) leverage external resources (e.g., ConceptNet) as a way of instilling commonsense reasoning skills. Sometimes, scene graphs are also used to model relations between objects (Lu et al., 2016; Hong et al., 2020; Wang et al., 2020). To our knowledge, none of these approaches make use of plan-based decoding. Hsu et al. (2021) construct a graph representing the image sequence (based on training data and external resources) and identify the highest scoring path as the best storyline encapsulated therein. The storytline can be viewed as a form of planning, however, on the encoder side. 
Most existing approaches (Xu et al., 2021; Hsu et al., 2020; Wang et al., 2020; Yang et al., 2019) train Transformer models from scratch, with the exception of Chen et al. (2021), who employ a vanilla BART model as a baseline without task-specific adaptation. In contrast, our work leverages the language modeling and generalization capabilities of pretrained language models for visual storytelling. Planning and GenerationIn the domain of automatic story generation, planning has been effective at capturing the content and structure of stories. The generation process is often decomposed into two stages, namely planning an outline and then elaborating on it, e.g., by filling in specific details of a story. Plans have been represented as a sequence of event or phrase keywords (Yao et al., 2019; Xu et al., 2018; Rashkin et al., 2020), character actions (Liu et al., 2020), plot structures (Goldfarb-Tarrant et al., 2020), and more elaborate descriptions including details about the setting of the story, its characters, and main plot points Yang et al. (2022). The idea of having a separate planning stage has also been explored for other text generation tasks including summarization Narayan et al. (2021); Liu and Chen (2021) and data-to-text generation Moryossef et al. (2019); Puduppully et al. (2022). Our work is closest to Narayan et al. (2022) who propose the use of question-answer pairs as intermediate plans for summarization. However, their approach is designed for descriptive text. Our work extends their framework to a multimodal setting, where the input consists of image sequences, and the output are narratives characterized by more abstract and figurative language. ## 3 Blueprint-based Visual Storytelling Let \(I\) represent a sequence of \(k\) images, denoted as \(\{v_{1},v_{2}...,v_{k}\}\). Given this input, our goal is to generate a blueprint plan \(B\) (i.e., an ordered set of question-answer pairs) and a story \(S\) based on it. Most generation datasets do not include blueprint annotations, and visual storytelling is no exception. We first describe how we automatically obtain \(\{I_{i},B_{i},S_{i}\}_{i=1}^{N}\) training data samples (Section 3.1), and then introduce our story generation models (Section 3.2). ### Blueprint Annotation Let \(\{I_{i},S_{i}\}_{i=1}^{N}\) denote a dataset consisting of pairs of image sequences and their corresponding stories. We automatically create blueprint \(B_{i}\) based on story \(S_{i}\) using state-of-the-art question generators Romero (2021); Raffel et al. (2020), coupled with a filtering procedure to remove repetitions and ill-formed questions. The generation of question-answer pairs involves two steps, namely answer extraction and question generation. In the context of storytelling, capturing key events is crucial for a compelling narrative. While noun phrases and named entities are commonly recognized as significant content units in other tasks such as summarization Narayan et al. (2022); Deutsch and Roth (2023), verb phrases also play a vital role in conveying story dynamics, actions, and relationships Trabasso et al. (1989); Eisenberg and Finlayson (2017); Liu et al. (2020). Therefore, in addition to named entities and noun phrases, we also extract verb phrases as answer candidates using the spaCy library. We then generate questions for answer candidates with a T5 model Raffel et al. (2020); Romero (2021) fine-tuned on the SQuAD reading comprehension dataset Rajpurkar et al. (2018). 
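To illustrate the annotation pipeline, here is a rough sketch of answer-candidate extraction and answer-aware question generation; the verb-phrase heuristic and the particular T5 checkpoint are illustrative assumptions, and the paper additionally applies a round-trip consistency check not shown here.

```python
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
# An answer-aware T5 question-generation model (assumed checkpoint).
qg = pipeline("text2text-generation",
              model="mrm8488/t5-base-finetuned-question-generation-ap")

def answer_candidates(story: str):
    """Named entities, noun phrases, and a crude verb-phrase approximation."""
    doc = nlp(story)
    cands = {ent.text for ent in doc.ents}
    cands |= {np.text for np in doc.noun_chunks}
    for tok in doc:
        if tok.pos_ == "VERB":
            # extend the span from the verb to the end of its object/particle subtree
            right = max([tok.i] + [c.right_edge.i for c in tok.children
                                   if c.dep_ in ("dobj", "obj", "prt", "prep")])
            if right > tok.i:
                cands.add(doc[tok.i:right + 1].text)
    return sorted(cands)

def generate_blueprint(story: str):
    pairs = []
    for ans in answer_candidates(story):
        prompt = f"answer: {ans}  context: {story}"
        question = qg(prompt, max_length=64)[0]["generated_text"]
        if ans.lower() not in question.lower():  # drop pairs whose answer leaks into the question
            pairs.append((ans, question))
    return pairs
```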
We decontextualize stories by replacing pronouns with their corresponding head mentions, using a state-of-the-art coreference resolution model Dobrovolskii (2021). Question-answer pairs are subsequently filtered to eliminate noise which is unavoidable due to the automatic preprocessing steps mentioned above. We thus remove any question-answer pairs where the answer is already present in the question. We also employ a round-trip consistency check Alberti et al. (2019) which discards questions if they yield answers different from those used to generate them. ### Blueprint Models Our approach leverages the generation capabilities of pre-trained sequence-to-sequence models. As our backbone model, we employ BART-base Lewis et al. (2020) which has been fine-tuned for text generation. We adapt this model to our visual storytelling task in two ways. Aside from enabling the generation of blueprints, we convert the image sequence to a _visual prefix_ which the pretrained language model can interpret. The pretrained language model is prompted with this prefix to generate the blueprint, and eventually the story. **Visual Prefix Construction.** Our model needs to grasp what the image sequence is about, e.g., the depicted objects, actions, and their associations. Drawing inspiration from recent advances in vision and language research Mokady et al. (2021); Tsimpoukelli et al. (2021); Alayrac et al. (2022); Zhai et al. (2022); Liu et al. (2023); Huang et al. (2023), we translate the input sequence of images into a sequence of continuous embeddings, aka a visual prefix (see Figure 2). Following previous work Wang et al. (2018); Xu et al. (2021); Chen et al. (2021), we use ResNet-152 He et al. (2016) to extract visual features from images. We next employ a lightweight linear mapping network that consists of a series of feedforward layers, denoted as \(F_{\phi}\), to map image features to \(k\) visual clues: \[p_{1},\dots,p_{k}=F_{\phi}\left(\mathrm{ResNet}\left(v_{1},v_{2},\dots,v_{k}\right)\right) \tag{1}\] where \(k\) is the image sequence length, and each visual clue \(p_{i}\) has the same dimensionality as a token embedding of the pretrained language model. To further instil world knowledge in our visual prefix, we employ a concept detector. The latter identifies specific objects within images, but also actions, scenes, and attributes. For each image \(v_{i}\), we retain the \(K\) concepts \(\{c_{i}^{1},c_{i}^{2},\dots,c_{i}^{K}\}\) with the highest confidence score: \[c_{i}^{1},\dots,c_{i}^{K}=\mathrm{Concept}\left(\mathrm{ResNet}\left(v_{i}\right)\right) \tag{2}\] Concepts for each image are then concatenated with a \(\langle SEP\rangle\) token and serve as input to the embedding layer of the pretrained model. The visual clues and concept embeddings are concatenated to form the visual prefix \(V\). The image encoder and the concept detector remain frozen during the training phase. Only the parameters in the mapping network \(F_{\phi}\) are updated (see Figure 2). Figure 2: Visual prefix construction. The pretrained image encoder and concept detector are frozen. _FF_ refers to a feed-forward layer, _CD_ and _EL_ denote a concept detector and embedding layer, respectively.
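Before turning to the two planning variants in detail, the visual prefix of Equations (1) and (2) can be sketched in PyTorch as follows. This is an illustrative sketch under our own assumptions (feature and hidden dimensions, a two-layer \(F_{\phi}\), and a plain-text interface to the frozen concept detector); it is not the authors' implementation.

```python
# Illustrative sketch of the visual prefix (Eqs. 1-2).  ResNet-152 and the
# concept detector are frozen; only the mapping network F_phi is trained.
import torch
import torch.nn as nn
from torchvision.models import resnet152, ResNet152_Weights

class MappingNetwork(nn.Module):
    """F_phi: maps pooled image features to visual clues p_1, ..., p_k."""
    def __init__(self, feat_dim=2048, d_model=768, hidden=1024):  # dims assumed
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, d_model))

    def forward(self, feats):          # feats: (k, feat_dim)
        return self.ff(feats)          # (k, d_model), one clue per image

encoder = resnet152(weights=ResNet152_Weights.DEFAULT)
encoder.fc = nn.Identity()             # keep the 2048-d pooled features
encoder.eval()
for p in encoder.parameters():         # image encoder stays frozen
    p.requires_grad_(False)

f_phi = MappingNetwork()

def visual_prefix(images, concepts_per_image, tokenizer, lm):
    """images: (k, 3, H, W); concepts_per_image: k strings such as
    'market <SEP> fruit <SEP> people', produced by the frozen concept detector."""
    with torch.no_grad():
        feats = encoder(images)                                  # (k, 2048)
    clues = f_phi(feats)                                         # Eq. (1)
    ids = tokenizer(concepts_per_image, return_tensors="pt",
                    padding=True).input_ids
    concept_emb = lm.get_input_embeddings()(ids)                 # embedding layer
    return torch.cat([clues, concept_emb.flatten(0, 1)], dim=0)  # prefix V
```

Exactly how this prefix is injected into BART's encoder (e.g., prepended to the token embeddings) is an implementation choice we leave open here.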
**Top-down Planning.** This model takes image sequence \(I\) as input and generates blueprint \(B\) and story \(S\) in one go. More precisely, during decoding, the model first generates the blueprint, which then serves as a prompt guiding subsequent story generation. Our training objective maximizes the log-likelihood of the joint distribution: \[\max_{\theta,\phi}\sum_{i=1}^{N}\log p_{\theta,\phi}\left(B_{i},S_{i}\mid I_{i}\right) \tag{3}\] where \((B,S)\) refers to the concatenation of blueprint \(B\) and story \(S\), \(\theta\) represents the parameters of the pretrained language model, \(\phi\) are the parameters of the visual mapping network \(F_{\phi}\), and \(N\) denotes the size of the dataset. We introduce special tokens _Story:_ and _Plan:_ preceding the story and blueprint, respectively. In experiments, our blueprints consist of answer-question pairs \(\{a_{1},q_{1},\dots,a_{m},q_{m}\}\) (rather than question-answer pairs). We place the answer before its question to encourage the model to zoom in on salient visual concepts depicted in the image sequence. This ordering is intuitive for our storytelling task: we first decide on what the story is about and then elaborate on key concepts. Incidentally, Narayan et al. (2022) also find that generating the answer before the question performs better for their summarization tasks. Finally, the model is trained with the standard maximum likelihood objective to generate the joint target. **Iterative Planning.** This model employs an incremental generation strategy to create the story. Rather than generating in one step a global blueprint and the story, planning and generation are interleaved. At each time step, the iterative model considers the image sequence _and_ context from previous steps, including the blueprint and story generated so far. We gradually construct the blueprint and its corresponding story sentence-by-sentence; our planning is informed by generation and vice versa, which we argue should be mutually beneficial (they are conditioned on each other). Let \(S=\{s_{1},s_{2},\dots,s_{k}\}\) denote a target story and \(B=\{b_{1},b_{2},\dots,b_{k}\}\) its blueprint, where \(s_{i}\) represents the \(i\)-th sentence in the story, and \(b_{i}\) its associated blueprint. Each \(b_{i}\) consists of answer-question pairs, denoted as \(\{a_{1}^{i},q_{1}^{i},\dots,a_{l(i)}^{i},q_{l(i)}^{i}\}\), where \(l(i)\) is the number of pairs in the \(i\)-th blueprint. The training objective for the iterative model is defined as follows: \[\max_{\theta,\phi}\sum_{j=1}^{N}\sum_{i=1}^{k}\log p_{\theta,\phi}\left(b_{i+1},s_{i+1}\mid b_{1:i},s_{1:i},I_{j}\right) \tag{4}\] where \(b_{1:i}\) and \(s_{1:i}\) refer to the blueprint and sentences generated so far, from step \(1\) to step \(i\). At each time step \(i\), the encoder takes image sequence \(I\) as input, and the decoder takes the context (i.e., blueprint and sentences generated so far \(\{b_{1},b_{2},\dots,b_{i};s_{1},s_{2},\dots,s_{i}\}\)) as a prompt to predict the next blueprint \(b_{i+1}\) and sentence \(s_{i+1}\). Therefore, the iterative model is trained on samples \(\{I,(b_{1:i},s_{1:i}),b_{i+1},s_{i+1}\}\). Figure 3: Iterative blueprint model: in the first iteration, the embedding layer uses _<START>_ as context; in the second iteration, the context includes the blueprint and previously generated story (enclosed by blue dashed line). In contrast, the top-down model takes only the visual prefix as input and predicts a global blueprint and corresponding story in one go. For details on the visual prefix, see Figure 2.
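For concreteness, the per-step training samples of Equation (4) can be assembled along the following lines. This is a schematic sketch: the blueprint linearization and separators are our own assumptions, while the _Context:_/_Plan:_/_Next Sentence_ markers and the special ⟨START⟩/⟨END⟩ tokens follow the description in the next paragraph.

```python
# Schematic construction of per-step training samples for the iterative model
# (Eq. 4).  Marker strings follow the text; the blueprint linearization with
# '|' separators is our own assumption.
def linearize_blueprint(b):
    """b: list of (answer, question) pairs attached to one story sentence."""
    return " ".join(f"{a} | {q}" for a, q in b)

def iterative_samples(image_seq, blueprints, sentences):
    """blueprints = [b_1, ..., b_k], sentences = [s_1, ..., s_k].
    Yields one decoder sample per step, predicting (b_{i+1}, s_{i+1})
    from the blueprint and sentences generated so far."""
    samples, history = [], []
    for b_next, s_next in zip(blueprints, sentences):
        context = " ".join(history) if history else "<START>"
        target = f"Plan: {linearize_blueprint(b_next)} Next Sentence: {s_next}"
        samples.append({
            "encoder_input": image_seq,                # becomes the visual prefix
            "decoder_context": f"Context: {context}",  # prompt, masked in the loss
            "decoder_target": target,
        })
        history.append(target)
    samples[-1]["decoder_target"] += " <END>"          # marks the final iteration
    return samples
```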
We prefix \((b_{1:i},s_{1:i})\), \(b_{i+1}\), and \(s_{i+1}\) with _Context:_, _Plan:_, and _Next Sentence_, respectively. To handle the first time step, we introduce special token \(\langle\textit{START}\rangle\) as context to predict \(b_{1}\) and \(s_{1}\). We also use \(\langle\textit{END}\rangle\) to indicate the completion of an iteration (see Figure 3 for an illustration). It is important to note that \((b_{1:i},s_{1:i})\) are masked out when computing the loss because they serve as prompts to the decoder. We want to avoid the model repeatedly predicting and overly optimizing the blueprints and sentences that appear at the beginning of the output. ## 4 Experimental Setting ### Dataset We performed experiments on the widely used VIST dataset Huang et al. (2016), which contains 10,117 Flickr albums and 210,819 unique photos. Each training sample consists of \(k\!=\!5\) images and a corresponding story of \(k\!=\!5\) sentences. As described in Section 3.1, we augment each story with an automatically generated blueprint. ### Implementation Details Our models are built on top of BART-base Lewis et al. (2020) and finetuned with a learning rate of 3e-5, batch size of 64, and warm-up ratio of 0.05. We select the best checkpoint on the validation set using a QA-based metric which quantifies the extent to which the output story follows its blueprint (see Section 4.4). During inference, we employ beam search (size 5). For our visual prefix, we employed the Clarifai Concept Detector which was trained on a dataset containing 9,098 concepts and 20 million images (multiple concepts are assigned to each image), and is integrated with the InceptionV2 architecture Szegedy et al. (2016). ### Comparison Systems We compared our models against several baselines and state-of-the-art visual storytelling models. These included a vanilla BART-base model with the same encoder and visual prefix as ours but no planning (**VP-BART**; it generates the story directly in an autoregressive manner without the blueprint). **KG-Story**Hsu et al. (2020) predicts a set of words representative of the image sequence, enriches them using external knowledge graphs, and generates stories based on the enriched word set. **PR-VIST**Hsu et al. (2021) is a state-of-the-art model which constructs a graph representing the relations between elements in the image sequence, identifies the best storyline captured therein, and proceeds to generate a story based on it. The process of constructing the story graph can be viewed as a form of planning. Along similar lines, Chen et al. (2021) build a common sense knowledge graph capturing concepts in the image sequence, and use **MCSM**, a Maximal Clique Selection Module to identify which ones to write a story about. They use BART-large to generate the story based on selected concepts (and image features). We also compared against an LLM that generates stories via prompting. We provide GPT-3.5 with a visual prefix, namely the concepts identified in the image sequence, and a prompt which explains how to create the blueprint and generate the story together with examples (in-context learning). Details on the prompt can be found in Appendix A. ### Automatic Evaluation We evaluated our stories using BLEU, ROUGE, METEOR, and CIDER, mainly to compare to previous work. Several studies Hsu et al. 
(2022, 2021, 2020; Hu et al., 2020; Yang et al., 2019; Modi and Parde, 2019) have demonstrated the inadequacy of lexical matching metrics: they correlate poorly with human judgments, and do not effectively measure the semantic similarity to human-written stories or the lexical richness of the generated stories. \begin{table} \begin{tabular}{l r} \hline \hline Length of image sequences & 5.0 \\ Number of sentences in the story & 5.0 \\ Number of tokens per story & 52.3 \\ Number of QA pairs per story & 11.1 \\ Number of tokens per QA pair & 10.3 \\ Number of tokens per story plus QA pair & 166.2 \\ \hline \hline \end{tabular} \end{table} Table 1: VIST dataset statistics (average values). We further employ _story-specific_ metrics to assess story quality aspects such as diversity, fluency, naturalness, and grounding. Specifically, we use two types of trigram repetition metrics Yao et al. (2019); Goldfarb-Tarrant et al. (2020). _Intra-story repetition_ is a fluency metric: it measures the proportion of trigrams repeated _within_ a story. _Inter-story repetition_ examines trigram repetition _across_ stories. This metric evaluates diversity: high inter-story repetition suggests that the model tends to generate the same story even when conditioned on different image sequences. We also use MAUVE Pillutla et al. (2021) to measure the _naturalness_ of the generated stories. MAUVE is a recently introduced automatic metric for open-ended generation which has high correlation with human judgements. It computes the similarity of the distribution of human-written text and machine-generated text. To quantify the extent to which the generated story is _grounded_, i.e., whether it accurately represents the content of the image sequence, we measure _concept precision_ and _recall_. Precision measures the number of words in the generated story that align with the detected concept set, while recall assesses the number of words in the detected concept set that are present in the generated story. Finally, for our own models we also evaluate whether the generated stories are _faithful_ to their blueprint. Drawing inspiration from recent studies on summary evaluation Deutsch et al. (2021); Fabbri et al. (2022), we measure how well the generated story answers questions from the predicted blueprint. We utilize a RoBERTa-based Liu et al. (2019) QA model finetuned on the SQuAD dataset. ## 5 Results Our results are summarized in Table 2. The first block presents the performance of state-of-the-art storytelling systems. The second block presents variants of our approach: a vanilla BART model, enhanced with a visual prefix (VP), and two blueprint models which vary in the way plans are generated, i.e., in a top-down fashion or iteratively. The third block contains GPT-3.5 models with (+BP) and without blueprints. ### Pretrained Language Models Produce Better Stories We observe that models based on pretrained language models (i.e., our models and MCSM) outperform models trained from scratch (i.e., KG-Story and PR-VIST) in terms of trigram-repetition scores and MAUVE. This indicates that we can maintain strong language modeling capabilities while enabling pretrained language models to process visual signals effectively. ### The Visual Prefix is an Effective Interface between Image and Text MCSM is the only existing model that utilizes a pretrained language model for visual storytelling. However, our baseline model (VP-BART) demonstrates superior performance in most story-specific metrics.
Remarkably, this is achieved using a smaller pretrained model (BART-base, 140M parameters); MCSM is built on top of BART-large (400M parameters). This highlights the effectiveness of our visual prefix, indicating it successfully translates the image sequence into a space that BART can understand. ### Blueprint Models are Most Grounded Our models outperform comparison systems in terms of concept grounding. This confirms that an intermediate planning step allows the model to effectively select salient concepts based on the visual prefix. The top-down model in particular achieves the highest concept grounding recall: it stays close to the image sequence, accurately describing the information conveyed therein. The higher lexical matching scores further support this observation. The iterative blueprint model achieves the best concept grounding precision (excluding GPT-3.5 models), which in turn suggests that the stories generated by this model exhibit a stronger grounding to the images with fewer hallucinations. ### The Iterative Model Generates Most Natural and Faithful Stories Despite not achieving the highest scores in lexical matching metrics, the iterative blueprint model stands out in terms of MAUVE evaluation. Compared to other models, it generates more natural stories, closer to those written by humans. This finding suggests that humans might employ a similar iterative planning strategy, at least for the short stories considered here; they construct a narrative gradually rather than devising a global plan which they subsequently convert into a story. With regard to faithfulness, we observe that both blueprint models achieve scores higher than 40%, indicating effective translation of blueprints into stories. Notably, the iterative model performs best in terms of faithfulness, which suggests that translating the entire global blueprint into a story is more challenging, whereas breaking down planning into individual steps is more effective. To get an idea of the upper bound performance for blueprint models, we ran the top-down model with silver standard blueprints extracted from the human-written stories (see row +BP (gold) in Table 2). As can be seen, the MAUVE score jumps to 52.24, edging closer to human-written stories (their MAUVE score is 69.6). This further supports our hypothesis that our model successfully leverages the blueprints and retains the information captured in them. ### GPT-3.5 Struggles with Blueprints We also compared our approach to GPT-3.5, which we adapted to our task with in-context learning. A GPT-3.5 model enhanced with blueprints performs well at concept grounding, i.e., it generates stories which talk about what is depicted in the image. However, these stories are neither human-like (see the very low MAUVE score) nor faithful to the intermediate blueprints (in fact they are 10% less faithful compared to our iterative model). This suggests that GPT-3.5 tends to ignore the plan, despite being explicitly prompted with blueprints. ## 6 Human Evaluation We conducted a judgment elicitation study to further evaluate the stories generated by our models. Specifically, we compared the best performing blueprint model (iterative) and three other systems: (a) PR-VIST, which represents the current planning-based state of the art; (b) VP-BART, our proposed model without blueprints; and (c) ground truth stories written by humans.
Raters were shown an image sequence alongside two stories, and were asked to provide pairwise preferences along the following dimensions: _Relevance_, _Fluency_, _Coherence_, _Interestingness_, and _Overall_. The full instructions are given in Appendix A. We recruited nine native English speakers and elicited preferences for 100 stories (three judgments per story). Our human evaluation results are summarized in Table 3. The iterative blueprint model outperforms PR-VIST across metrics. Our participants perceive VP-BART stories as marginally more fluent and coherent compared to those created by the iterative model (even though they prefer iterative stories overall). This discrepancy is likely due to the generation process introduced by the iterative model, which requires the decoder to produce a mix of questions, answers, and corresponding sentences, deviating from the traditional BART pretraining pattern. This added complexity might result in minor grammatical errors and pose challenges for coherence, given that story generation is broken down into separate steps instead of being a continuous process. Nonetheless, the coherence scores are fairly close. The blueprint model excels in terms of interestingness and grounding, indicating its effectiveness in creating engaging and memorable stories. Our model's superior grounding performance aligns with our hypothesis that blueprints serve not only as a planning strategy but also as a visual concept selector. This is due to the way blueprints are structured (as answer-question pairs), which explicitly forces the model to first identify salient visual concepts and then generate questions based on them. Figure 4 shows example stories created by the models used in our human evaluation study and GPT-3.5+BP. The story generated by the iterative model is coherent, rich in detail, and fluent. \begin{table} \begin{tabular}{l||c c|c c|c||c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Repetition (\(\downarrow\))} & \multicolumn{2}{c|}{Grounding (\(\uparrow\))} & \multirow{2}{*}{MAUVE (\(\uparrow\))} & \multirow{2}{*}{Faithful (\(\uparrow\))} & \multicolumn{3}{c}{N-gram-based Metrics (\(\uparrow\))} \\ & Intra & Inter & & & & & & B-4 & RLSum & METEOR & CIDER \\ \hline KG-Story & 1.03 & 88.72 & 4.55 & 3.46 & 3.86 & — & 9.8 & 27.3 & 32.3 & **7.9** \\ PR-VIST & 1.19 & 83.80 & 3.76 & 3.28 & 2.31 & — & 7.5 & 26.1 & 31.4 & 7.6 \\ MCSM & 2.85 & 77.48 & 5.12 & 5.89 & 11.01 & — & 8.1 & 27.7 & 31.4 & 7.6 \\ \hline VP-BART & 0.22 & 83.70 & 4.31 & 3.23 & 11.31 & — & 8.6 & 26.6 & 31.0 & 6.8 \\ + BP (top-down) & **0.08** & 81.51 & 5.17 & **11.56** & 8.32 & 44.73 & **9.9** & **28.5** & **33.6** & 7.2 \\ + BP (iterative) & 0.29 & **72.70** & **5.22** & 3.59 & **28.25** & **51.66** & 7.0 & 26.1 & 30.3 & 5.5 \\ + BP (gold) & 0.12 & 18.61 & 6.81 & 2.97 & 52.24 & — & 29.4 & 52.0 & 58.4 & 36.3 \\ \hline GPT-3.5 & 0.47 & 40.61 & 10.80 & 7.90 & 2.30 & — & 5.0 & 24.4 & 27.3 & 1.9 \\ GPT-3.5 + BP & 1.52 & 31.19 & 14.70 & 10.30 & 2.10 & 34.56 & 4.2 & 23.3 & 25.1 & 2.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Automatic evaluation results. We report intra- and inter-story trigram Repetition (lower is better), precision and recall for concept grounding, MAUVE, Faithfulness, and a suite of commonly used metrics which rely on lexical similarity between system stories and references. Best results are highlighted in bold font. VP-BART generates a grounded and accurate story without hallucination and semantic errors.
However, it is a relatively plain narrative, offering limited detail about the market or the experience of the characters. Compared to GPT-3.5+BP, the iterative model's story follows the image sequence more closely, mentioning details like _an array of different fruits_ and _various types of pepper_, which significantly enhances storytelling. ## 7 Controllable Generation In this section we showcase how the blueprint plan allows us to control the content and length of model output without additional mechanisms or training. For example, in cases where the generated story contains entities which do not appear in the image sequence, it is possible to refine the story generation process, mitigating hallucinations. Specifically, we apply a filtering step which removes non-grounded entities (and corresponding QA pair) from the blueprint before generating the story. We consider as _non-grounded_ any blueprint entity which is not included in the output of the concept detector (see Section 3.2). Figure 5 shows how this refinement approach can be used to adjust the model's output. In the first example, we observe that the story generated with a refined blueprint effectively avoids hallucinations (highlighted in blue) and is overall more faithful. However, it is important to note that imagination plays a crucial role in crafting an engaging story, especially when the image sequence provides limited information. Therefore, employing the refinement method may result in shorter and less detailed stories, as illustrated in the second example. While the refined blueprint successfully eliminates all hal \begin{table} \begin{tabular}{l||c|c|c||c|c|c||c|c} \hline \hline \multirow{2}{*}{Choices(\%)} & \multicolumn{2}{c||}{Iterative vs. PR-VIST} & \multicolumn{2}{c||}{Iterative vs. VP-BART} & \multicolumn{2}{c}{Iterative vs. Human} \\ & Win & Lose & Tie & Win & Lose & Tie & Win & Lose & Tie \\ \hline Fluency & **47.0** & 37.6 & 15.4 & 40.5 & **42.9** & 16.6 & 10.9 & **84.3** & 4.8 \\ Coherence & **56.0** & 24.8 & 19.2 & 41.2 & **41.6** & 17.2 & 15.2 & **75.7** & 9.1 \\ Interestingness & **70.5** & 19.2 & 10.3 & **55.5** & 38.1 & 6.4 & 23.0 & **70.0** & 7.0 \\ Grounding & **50.9** & 40.6 & 8.5 & **45.1** & 41.7 & 13.2 & 7.8 & **79.6** & 12.6 \\ \hline Overall & **58.1** & 25.6 & 16.2 & **42.6** & 41.5 & 15.9 & 12.6 & **80.0** & 7.4 \\ \hline \hline \end{tabular} \end{table} Table 3: Human evaluation results. Raters provide pairwise story preferences in terms of fluency, coherence, interestingness, grounding, and overall. VP-BART is a BART model enhanced with a visual prefix (VP) but no planning; Iterative is our best blueprint model and PR-VIST is a state-of-the-art visual storytelling model. We report the percentage of times the Iterative model Wins, Loses or is in a Tie with a comparison system. Unless underlined, differences between systems are statistically significant (\(p<0.05\); using the Wilcoxon signed-rank test). Figure 4: Examples of system output and human-written story for an image sequence. lucinated entities, the resulting story appears plain and lacks depth. Our blueprint method seems to strike the right balance between accurate and captivating story generation, prioritizing faithfulness to the image sequence and creativity in storytelling. Most visual storytelling systems generate 5-sentence stories, following the predefined story structure of the VIST dataset Huang et al. (2016). 
Nevertheless, our iterative blueprint model can flexibly modulate the length of the story by controlling the number of iterative steps, thereby overcoming the conventional sentence limitation. Figure 6 presents stories generated by this model with a maximum of 10 iterations. Despite the increased length, the stories maintain coherence and are engaging. ## 8 Conclusion In this work, we have introduced a novel approach to visual storytelling which integrates visual representations with pretrained language models and a blueprint-based planning method for story generation. Blueprint models leverage a sequence of question-answer pairs as intermediate plans, enabling better selection of salient concepts from the image sequence and guiding the construction of the final narrative. Specifically, we have showcased two model variants: a top-down model which relies on a global plan, and an iterative model, which interleaves planning with sentence generation. Our experiments have shown that blueprint models excel in concept grounding and their ability to create human-like stories. Additionally, they are controllable: Blueprints can be made shorter or longer and their details can be refined (e.g., by emphasising specific entities or characters), thus enabling human-in-the-loop and personalized storytelling. We showcase examples of controllability in Section 7. In the future, we would like to explore visual storytelling with grounded characters and entities, as well as tackle the generation of more complex narratives, such as long-form stories. AcknowledgmentsThe authors gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (grant EP/W002876/1). Liu was supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh. Figure 5: Comparison of stories generated by original blueprint and refined blueprint models. Hallucinated words are highlighted in blue. Figure 6: Stories generated within 10 iterations. ### Limitations While our proposed model demonstrates effective story generation, it has certain limitations. Firstly, the grounding relation between the visual concepts and the corresponding text may not always be clear, leading to potential ambiguity in the generated stories. Furthermore, the model can sometimes suffer from hallucinations due to falsely detected visual concepts. It is worth noting that our model was built on top of BART-base Lewis et al. (2020). It would be beneficial to investigate the performance of larger models, as they could potentially enhance the quality of the planning component and overall storytelling capability. ## Ethics Statement Large Language ModelsThis paper uses large pretrained language models, which have been shown to be subject to a variety of biases, to occasionally generate toxic language, and to hallucinate content. Model output used for the human evaluation study (Section 6) was screened by the authors for harmful content. Experimental ParticipantsThe departmental ethics panel judged our human evaluation study to be exempt from ethical approval, as all participants were employees of the University of X, and as such were protected by employment law. Participants were paid at the standard hourly rate for tutors and demonstrators at the university.
2303.03904
The Prym variety of a dilated double cover of metric graphs
We calculate the volume of the tropical Prym variety of a harmonic double cover of metric graphs having non-trivial dilation. We show that the tropical Prym variety behaves discontinuously under deformations of the double cover that change the number of connected components of the dilation subgraph.
Arkabrata Ghosh, Dmitry Zakharov
2023-03-07T14:02:23Z
http://arxiv.org/abs/2303.03904v1
# The Prym variety of a dilated double cover of metric graphs ###### Abstract. We calculate the volume of the tropical Prym variety of a harmonic double cover of metric graphs having non-trivial dilation. We show that the tropical Prym variety behaves discontinuously under deformations of the double cover that change the number of connected components of the dilation subgraph. 2010 Mathematics Subject Classification: 14T20; 14H40 ## 1. Introduction Tropical geometry studies discrete, piecewise-linear analogues of algebro-geometric objects. For example, the tropical analogue of an algebraic curve is a connected finite graph \(G\), and the _Jacobian_\(Jac(G)\) (also known as the _critical group_ of \(G\)) is a finite abelian group. We can endow \(G\) with positive real edge lengths to obtain a _metric graph_\(\Gamma\); this promotes the Jacobian to a real torus \(Jac(\Gamma)\) equipped with an additional integral structure. The dimension of \(Jac(\Gamma)\) is equal to the first Betti number \(g(\Gamma)=b_{1}(\Gamma)\) of \(\Gamma\), also known as the _genus_ of \(\Gamma\). The tropical analogue of a morphism of algebraic curves is a harmonic morphism of graphs. Topological covering spaces are examples of harmonic morphisms. More generally, harmonic morphisms of finite graphs allow nontrivial degrees at vertices, also known as _dilation_, while harmonic morphisms of metric graphs also allow dilation along edges. A harmonic morphism of metric graphs is _free_ if it has no dilation (in other words, if it is a covering isometry), and _dilated_ otherwise. A harmonic morphism has a well-defined global degree, and a _double cover_ is a harmonic morphism of metric graphs of degree two. A classical algebraic construction associates to an etale degree two cover \(\widetilde{X}\to X\) of smooth algebraic curves a principally polarized abelian variety (ppav) of dimension \(g(X)-1\), called the _Prym variety_\(Prym(\widetilde{X}/X)\) of the double cover (see [13]). The kernel of the norm map \(Nm:Jac(\widetilde{X})\to Jac(X)\) has two connected components, and \(Prym(\widetilde{X}/X)\) is the even connected component (containing the identity). The ppav \(Prym(\widetilde{X}/X)\) carries a principal polarization that is half of the polarization induced from \(Jac(\widetilde{X})\). The tropical Prym variety was defined in [11] and further investigated in [11], in complete analogy with the algebraic setting. A double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) of metric graphs induces a norm map \(Nm:Jac(\widetilde{\Gamma})\to Jac(\Gamma)\). It is shown in [11, 11] that the kernel of \(Nm\) has two connected components if \(\pi\) is free and one if \(\pi\) is dilated. In the free case, the connected component of the identity carries a principal polarization that is half the induced polarization, just as in the algebraic setting. In the dilated case, the kernel also carries a principal polarization, whose relationship to the induced polarization was computed in [11]. In either case, the corresponding principally polarized tropical abelian variety is called the _Prym variety_\(Prym(\widetilde{\Gamma}/\Gamma)\) of the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\). ### The volume formulas Kirchhoff's matrix tree theorem states that the order of the Jacobian group \(Jac(G)\) of a finite graph \(G\) is equal to the number of its spanning trees. A weighted version of this result for metric graphs was proved in [1]: the square of the volume of the Jacobian ###### Abstract. 
We consider the problem of finding a family of graphs of finite order. We show that the problem of finding a family of graphs of finite order is NP-hard. We show that the problem of finding a family of graphs of finite order is NP-hard. We show that the problem of finding a family of graphs of finite order is NP-hard. We also show that the problem of finding a family of graphs of finite order is NP-hard. As a final note, we observe that the ogods of a dilated double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) appear to form the bases of a matroid on the set of unilated edges of \(\Gamma\), generalizing the signed graphic matroid introduced by Zaslavsky in [22]. We are not aware if this matroid has been considered before, and we plan to study it in future work. ### Acknowledgements The authors would like to thank Yoav Len and Felix Rohrle for insightful discussions. ## 2. Setup and notation In this section, we recall a number of standard definitions concerning metric graphs, harmonic morphisms, double covers, and tropical abelian varieties. We also recall the Jacobian \(\operatorname{Jac}(\Gamma)\) of a metric graph \(\Gamma\) and the Prym variety \(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\) of a harmonic double cover \(\widetilde{\Gamma}\to\Gamma\) of metric graphs. Finally, we recall how to compute the volumes of \(\operatorname{Jac}(\Gamma)\) (see [1]) and \(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\) for a free double cover \(\widetilde{\Gamma}\to\Gamma\) (see [11]). ### Graphs, metric graphs, and double covers A _graph_\(G\) consists of a non-empty finite set of _vertices_\(V(G)\) and a set of _edges_\(E(G)\). We allow loops and multiple edges between a pair of vertices. It is convenient to view each edge \(e\in E(G)\) as consisting of two _half-edges_\(e=\{h,h^{\prime}\}\), so that \(E(G)\) is the set of orbits of a fixed-point-free involution acting on the set of half-edges \(H(G)\). The _root map_\(r:H(G)\to V(G)\) attaches half-edges to vertices, and the set of half-edges \(T_{v}(G)=r^{-1}(v)\) attached to a given vertex \(v\) is the _tangent space_. The _genus_ of a graph is its first Betti number (we do not use vertex weights): \[g(G)=b_{1}(G)=\#E(G)-\#V(G)+1.\] An _orientation_ of \(G\) is a choice of ordering of each edge, and defines _source_ and _target_ maps \(s,t:E(G)\to V(G)\). A _morphism of graphs_\(f:\widetilde{G}\to G\) is a pair of maps \(f:V(\widetilde{G})\to V(G)\) and \(f:H(\widetilde{G})\to H(G)\) that commute with the root map and preserve edges. A _harmonic morphism of graphs_\((f,d_{f}):\widetilde{G}\to G\) is a pair consisting of a morphism of graphs \(f:\widetilde{G}\to G\) and a _degree function_\(d_{f}:V(\widetilde{G})\cup E(\widetilde{G})\to\mathbb{Z}_{>0}\) satisfying \[d_{f}(\widetilde{v})=\sum_{\widetilde{h}\in T_{\widetilde{G}}\widetilde{G} \cap f^{-1}(h)}d_{f}(\widetilde{h})\] for any \(\widetilde{v}\in V(\widetilde{G})\) and any \(h\in T_{f(\widetilde{v})}G\) (where we denote \(d_{f}(\widetilde{h})=d_{f}(\widetilde{h}^{\prime})=d_{f}(\widetilde{e})\) for an edge \(\widetilde{e}=\{\widetilde{h},\widetilde{h}^{\prime}\}\)). If \(G\) is connected, a harmonic morphism has a _global degree_\(\deg(f)\) given by \[\deg(f)=\sum_{\widetilde{v}\in f^{-1}(v)}d_{f}(\widetilde{v})=\sum_{ \widetilde{h}\in f^{-1}(h)}d_{f}(\widetilde{h})\] for any \(v\in V(G)\) or any \(h\in H(G)\). A _double cover_\(p:\widetilde{G}\to G\) is a harmonic morphism of global degree two. 
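To make the harmonicity condition concrete, the following Python sketch (a purely illustrative encoding of our own, not a standard library) checks it for the free double cover of a one-loop graph by a cycle with two edges.

```python
# Purely illustrative check of the harmonicity condition: for every vertex v'
# upstairs and every half-edge h at its image, the local degrees of the
# half-edges at v' lying over h must sum to the degree of v'.
def tangent(root, v):
    """Half-edges attached to vertex v (root: half-edge -> vertex)."""
    return [h for h, rv in root.items() if rv == v]

def is_harmonic(root_up, root_dn, f_vert, f_half, deg_vert, deg_half):
    for v_up, v_dn in f_vert.items():
        for h_dn in tangent(root_dn, v_dn):
            total = sum(deg_half[h_up] for h_up in tangent(root_up, v_up)
                        if f_half[h_up] == h_dn)
            if total != deg_vert[v_up]:
                return False
    return True

# Target graph: one vertex v with a single loop e = {h1, h2}.
root_dn = {"h1": "v", "h2": "v"}
# Source graph: vertices x, y joined by two edges {a1, a2} and {b1, b2}.
root_up = {"a1": "x", "a2": "y", "b1": "y", "b2": "x"}
# The free double cover: both vertices map to v, each edge maps to the loop.
f_vert = {"x": "v", "y": "v"}
f_half = {"a1": "h1", "a2": "h2", "b1": "h1", "b2": "h2"}
assert is_harmonic(root_up, root_dn, f_vert, f_half,
                   deg_vert={"x": 1, "y": 1},
                   deg_half={h: 1 for h in root_up})
```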
There are two types of vertices \(v\in V(G)\): _undilated_, having two pre-images \(p^{-1}(v)=\{\widetilde{v}^{+},\widetilde{v}^{-}\}\) with \(d_{p}(\widetilde{v}^{\pm})=1\), and _dilated_, having a single pre-image \(p^{-1}(v)=\{\widetilde{v}\}\) with \(d_{p}(\widetilde{v})=2\). We similarly define dilated and undilated half-edges and edges of \(G\). A dilated half-edge may only be rooted at a dilated vertex, hence the set of dilated edges and vertices forms a subgraph called the _dilation subgraph_\(G_{\operatorname{dil}}\subset G\) of \(f\). We say that \(f\) is _free_ if \(G_{\operatorname{dil}}\) is empty, _dilated_ if \(G_{\operatorname{dil}}\) is not empty, and _edge-free_ if \(G_{\operatorname{dil}}\) consists of isolated vertices only. We note that dilated double covers should not be thought of as somehow "more degenerate" than free double covers; in fact, arguably the converse is true, since the latter arise as tropicalizations of more degenerate algebraic double covers than the former. A _metric graph_\(\Gamma\) is a compact metric space obtained from a graph \(G\) by identifying each edge \(e\in E(G)\) with a closed interval of positive length \(\ell(e)\), identifying the endpoints with the vertices of \(G\), and endowing \(\Gamma\) with the shortest-path metric. The pair \((G,\ell)\) is called a _model_ for \(\Gamma\). The _genus_\(g(\Gamma)\) of a metric graph is its first Betti number, and is equal to the genus of any model. A _harmonic morphism of metric graphs_\(\phi:\widetilde{\Gamma}\to\Gamma\) is a continuous, piecewise-linear map with nonzero integer-valued slopes \(d_{\phi}(\widetilde{e})=d_{f}(\widetilde{e})\) along the edges \(\widetilde{e}\in E(\widetilde{\Gamma})\) given by the degree function \(d_{f}\) of a harmonic morphism \(f:\widetilde{G}\to G\) between some models \((\widetilde{G},\widetilde{\ell})\) and \((G,\ell)\) of \(\widetilde{\Gamma}\) and \(\Gamma\), respectively. The slope condition imposes the restriction \[\ell(\phi(\widetilde{e}))=d_{\phi}(\widetilde{e})\widetilde{\ell}(\widetilde {e}),\quad\widetilde{e}\in E(\widetilde{G}).\] We similarly define double covers of metric graphs. A double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) is locally an isometry along an undilated edge and a factor \(2\) dilation along a dilated edge, which explains the terminology. Finally, we recall how to contract a harmonic morphism along a subgraph of the target. Let \(f:\widetilde{G}\to G\) be a harmonic morphism and let \(G_{0}\subset G\) be a possibly disconnected subgraph. We define \(G^{\prime}\) from \(G\) by contracting each connected component of \(G_{0}\) to a separate vertex. We similarly define \(\widetilde{G}^{\prime}\) by contracting each connected component of \(f^{-1}(G_{0})\) to a separate vertex. Finally, we define \(f^{\prime}:\widetilde{G}^{\prime}\to G^{\prime}\) by setting \(d_{f^{\prime}}(\widetilde{v})\) on a vertex \(\widetilde{v}\in V(\widetilde{G}^{\prime})\) corresponding to a contracted component of \(f^{-1}(G_{0})\) to be equal to the global degree of \(f\) on that component. The result is a harmonic morphism \(f^{\prime}:\widetilde{G}^{\prime}\to G^{\prime}\), called the _contraction of \(f\) along \(G_{0}\)_. We similarly define contractions of harmonic morphisms of metric graphs. Since we are principally interested in double covers of metric graphs, we henceforth assume that any metric graph \(\Gamma\) comes equipped with a choice of model. 
Hence we abuse notation and write \(E(\Gamma)\), \(V(\Gamma)\), and so on for a metric graph \(\Gamma\). The principal results of our paper do not depend on the choice of underlying model. **Example 2.1**.: Figure 1 shows a dilated double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) of metric graphs. Fat edges and vertices indicate dilation. ### Tropical abelian varieties The tropical Jacobian of a metric graph and the tropical Prym variety of a harmonic double cover are examples of tropical principally polarized abelian varieties, which are real tori equipped with auxiliary integral structure. We recall their definition (our conventions follow [10] and are a slight modification of the standard definitions found in [11] and [12]). A _real torus with integral structure_\(\Sigma\) of dimension \(n\), or _integral torus_ for short, is determined by a triple \((\Lambda,\Lambda^{\prime},[\cdot,\cdot])\), where \(\Lambda\) and \(\Lambda^{\prime}\) are finitely generated free abelian groups of rank \(n\) and \([\cdot,\cdot]:\Lambda\times\Lambda^{\prime}\to\mathbb{R}\) is a non-degenerate pairing. The pairing defines a lattice embedding \(\Lambda^{\prime}\subset\operatorname{Hom}(\Lambda,\mathbb{R})\) via the assignment \(\lambda^{\prime}\mapsto[\cdot,\lambda^{\prime}]\), and the torus itself is the compact abelian group \(\Sigma=\operatorname{Hom}(\Lambda,\mathbb{R})/\Lambda^{\prime}\). The integral structure refers to the lattice \(\operatorname{Hom}(\Lambda,\mathbb{Z})\subset\operatorname{Hom}(\Lambda, \mathbb{R})\) in the universal cover of \(\Sigma\), and is the tropical analogue of the complex structure on a complex torus. Let \(\Sigma_{1}=(\Lambda_{1},\Lambda_{1}^{\prime},[\cdot,\cdot]_{1})\) and \(\Sigma_{2}=(\Lambda_{2},\Lambda_{2}^{\prime},[\cdot,\cdot]_{2})\) be integral tori. A _homomorphism of integral tori_\(f=(f_{\#},f^{\#}):\Sigma_{1}\to\Sigma_{2}\) is given by a pair of maps \(f_{\#}:\Lambda_{1}^{\prime}\to\Lambda_{2}^{\prime}\) and \(f^{\#}:\Lambda_{2}\to\Lambda_{1}\) satisfying the relation \[[f^{\#}(\lambda_{2}),\lambda_{1}^{\prime}]_{1}=[\lambda_{2},f_{\#}(\lambda_{1}^ {\prime})]_{2}\] for all \(\lambda_{1}^{\prime}\in\Lambda_{1}^{\prime}\) and \(\lambda_{2}\in\Lambda_{2}\). This relation implies that the map \(\operatorname{Hom}(\Lambda_{1},\mathbb{R})\to\operatorname{Hom}(\Lambda_{2}, \mathbb{R})\) dual to \(f^{\#}\) restricts to \(f_{\#}\) on \(\Lambda_{1}^{\prime}\), and hence descends to a group homomorphism \(f:\Sigma_{1}\to\Sigma_{2}\). Let \(f=(f_{\#},f^{\#}):\Sigma_{1}\to\Sigma_{2}\) be a homomorphism of integral tori \(\Sigma_{i}=(\Lambda_{i},\Lambda_{i}^{\prime},[\cdot,\cdot]_{i})\) for \(i=1,2\). The connected component of the identity of the kernel of \(f\), denoted by \((\operatorname{Ker}f)_{0}\), carries the structure of an integral torus, which we now recall. Let \(K=(\operatorname{Coker}\mathsf{f}^{\#})^{\operatorname{tf}}\) be the quotient of \(\operatorname{Coker}\mathsf{f}^{\#}\) by its torsion subgroup, and let \(K^{\prime}=\operatorname{Ker}\mathsf{f}_{\#}\). It is easy to verify that the pairing \([\cdot,\cdot]_{1}\) induces a well-defined pairing \([\cdot,\cdot]_{K}:K\times K^{\prime}\to\mathbb{R}\), and that the natural maps \(\mathfrak{i}_{\#}:K^{\prime}\to\Lambda^{\prime}_{1}\) and \(\mathfrak{i}^{\#}:\Lambda_{1}\to K\) define an injective homomorphism \(\mathfrak{i}=(\mathfrak{i}_{\#},\mathfrak{i}^{\#}):(\operatorname{Ker} \mathsf{f})_{0}\to\Sigma_{1}\) of integral tori. Let \(\Sigma=(\Lambda,\Lambda^{\prime},[\cdot,\cdot])\) be an integral torus. 
A _polarization_ on \(\Sigma\) is a map \(\zeta:\Lambda^{\prime}\to\Lambda\) having the property that the induced bilinear form \[(\cdot,\cdot):\Lambda^{\prime}\times\Lambda^{\prime}\to\mathbb{R},\quad( \lambda^{\prime},\mu^{\prime})=[\zeta(\lambda^{\prime}),\mu^{\prime}]\] is symmetric and positive definite. The polarization map \(\zeta\) is necessarily injective, and is called _principal_ if it is bijective. The pair \((\Sigma,\zeta)\) is called a _tropical polarized abelian variety_, and a _tppav_ if \(\zeta\) is a principal polarization. Let \(\mathsf{f}=(\mathsf{f}_{\#},\mathsf{f}^{\#}):\Sigma_{1}\to\Sigma_{2}\) be a homomorphism of integral tori, and assume that \(\mathsf{f}\) has finite kernel (equivalently, \(\mathsf{f}_{\#}\) is injective). Given a polarization \(\zeta_{2}:\Lambda^{\prime}_{2}\to\Lambda_{2}\) on \(\Sigma_{2}\), it is easy to verify that the map \(\zeta_{1}=\mathsf{f}^{\#}\circ\zeta_{2}\circ\mathsf{f}_{\#}:\Lambda^{\prime}_ {1}\to\Lambda_{1}\) defines an _induced polarization_ on \(\Sigma_{1}\). The polarization induced by a principal polarization need not itself be principal. Given a tropical polarized abelian variety \(\Sigma=(\Lambda,\Lambda^{\prime},[\cdot,\cdot])\) of dimension \(n\), the associated bilinear form \((\cdot,\cdot)\) on \(\Lambda^{\prime}\) extends to an inner product on the vector space \(V=\operatorname{Hom}(\Lambda,\mathbb{R})\). The volume of \(\Sigma=V/\Lambda^{\prime}\) with respect to this product is the volume of a fundamental parallelotope of \(\Lambda^{\prime}\), and is given by the Grammian determinant \[\operatorname{Vol}^{2}(\Sigma)=\det(\lambda^{\prime}_{i},\lambda^{\prime}_{j}), \tag{2}\] where \(\lambda^{\prime}_{1},\dots,\lambda^{\prime}_{n}\) is any basis of \(\Lambda^{\prime}\). ### Jacobians and Pryms We now recall how to construct the Jacobian of a metric graph and the Prym variety of a harmonic double cover as tppavs. Let \(\Gamma\) be a metric graph of genus \(g\), let \(C_{0}(\Gamma,\mathbb{Z})=\mathbb{Z}^{V(\Gamma)}\) and \(C_{1}(\Gamma,\mathbb{Z})=\mathbb{Z}^{E(\Gamma)}\) be the groups of \(0\)-chains and \(1\)-chains, respectively (with respect to a choice of oriented model), and let \(\mathsf{d}\) be the Figure 1. A dilated double cover simplicial boundary map \[d:C_{1}(\Gamma,\mathbb{Z})\to C_{0}(\Gamma,\mathbb{Z}),\quad d\left[\sum_{e\in \widetilde{E}(\Gamma)}n_{e}e\right]=\sum_{e\in E(\Gamma)}n_{e}[t(e)-s(e)].\] The first simplicial homology group \(H_{1}(\Gamma,\mathbb{Z})=\operatorname{Ker}d\) is a free abelian group of rank \(g\). The _edge length pairing_ on \(H_{1}(\Gamma,\mathbb{Z})\) is given by \[[\cdot,\cdot]_{\Gamma}:H_{1}(\Gamma,\mathbb{Z})\times H_{1}(\Gamma,\mathbb{Z}) \to\mathbb{R},\quad\left[\sum_{e\in E(\Gamma)}a_{e}e,\sum_{e\in E(\Gamma)}b_{e }e\right]=\sum_{e\in E(\mathbb{G})}a_{e}b_{e}\ell(e). \tag{3}\] The _Jacobian_\(\operatorname{Jac}(\Gamma)\) of \(\Gamma\) is the dimension \(g\)\(\operatorname{tppav}\)\((\Lambda,\Lambda^{\prime},[\cdot,\cdot]_{\Gamma})\) where \(\Lambda=\Lambda^{\prime}=H_{1}(\Gamma,\mathbb{Z})\), \([\cdot,\cdot]_{\Gamma}\) is the edge length pairing, and the principal polarization \(\zeta\) is the identity map on \(H_{1}(\Gamma,\mathbb{Z})\). **Remark 2.2**.: We note that the edge length pairing (3) has a physical peculiarity: it is measured in units of edge lengths of \(\Gamma\), while the expected units for an inner product are lengths squared. 
As a consequence, the units of \(\ell(e)\) double in dimension when we view the Jacobian variety \(\operatorname{Jac}(\Gamma)\) as a Riemannian manifold. For example, the Jacobian \(\operatorname{Jac}(\Gamma)\) of a circle \(\Gamma\) of circumference \(L\) is also a circle, but of circumference \(\sqrt{L}\), not \(L\). Given a harmonic morphism \(\phi:\widetilde{\Gamma}\to\Gamma\) of metric graphs, we define the push-forward and pullback maps (again, with respect to appropriately chosen models) \[\phi_{*}:C_{1}(\widetilde{\Gamma},\mathbb{Z})\to C_{1}(\Gamma,\mathbb{Z}), \quad\widetilde{e}\mapsto\phi(\widetilde{e})\] and \[\phi^{*}:C_{1}(\Gamma,\mathbb{Z})\to C_{1}(\widetilde{\Gamma},\mathbb{Z}), \quad e\mapsto\sum_{\widetilde{e}\in\phi^{-1}(e)}d_{\phi}(\widetilde{e}) \widetilde{e}.\] These maps commute with \(d\) and descend to maps \[\phi_{*}:H_{1}(\widetilde{\Gamma},\mathbb{Z})\to H_{1}(\Gamma,\mathbb{Z}), \quad\phi^{*}:H_{1}(\Gamma,\mathbb{Z})\to H_{1}(\widetilde{\Gamma},\mathbb{Z}).\] The pair \(\operatorname{Nm}_{\#}=\phi_{*},\operatorname{Nm}^{\#}=\phi^{*}\) defines the surjective _norm homomorphism_\(\operatorname{Nm}:\operatorname{Jac}(\widetilde{\Gamma})\to \operatorname{Jac}(\Gamma)\). We are interested in the kernel of the norm homomorphism \(\operatorname{Nm}:\operatorname{Jac}(\widetilde{\Gamma})\to\operatorname{Jac} (\Gamma)\) when \(\pi:\widetilde{\Gamma}\to\Gamma\) is a harmonic double cover of metric graphs. In this case, the map \(\pi^{*}:H_{1}(\Gamma,\mathbb{Z})\to H_{1}(\widetilde{\Gamma},\mathbb{Z})\) is explicitly given on edges by \[\pi^{*}(e)=\begin{cases}2\widetilde{e},&e\text{ is dilated with preimage }\widetilde{e},\\ \widetilde{e}^{+}+\widetilde{e}^{-},&e\text{ is undilated with preimages }\widetilde{e}^{\pm}.\end{cases} \tag{4}\] Unwinding the definitions, the connected component of the identity of the kernel of \(\operatorname{Nm}\) is the integral torus \[(\operatorname{Ker}\operatorname{Nm})_{0}=\frac{\operatorname{Ker}\overline{ \pi}:\operatorname{Hom}(H_{1}(\widetilde{\Gamma},\mathbb{Z}),\mathbb{R})\to \operatorname{Hom}(H_{1}(\Gamma,\mathbb{Z}),\mathbb{R})}{\operatorname{Ker}\pi _{*}:H_{1}(\widetilde{\Gamma},\mathbb{Z})\to H_{1}(\Gamma,\mathbb{Z})},\] where \(\overline{\pi}\) is the \(\operatorname{Hom}\)-dual of the map \(\pi^{*}\). The principal polarization on \(\operatorname{Jac}(\widetilde{\Gamma})\) induces a polarization on \((\operatorname{Ker}\operatorname{Nm})_{0}\), which is not principal in general. We show in Proposition 3.14 that there is a natural principal polarization on \((\operatorname{Ker}\operatorname{Nm})_{0}\), whose type (compared to the induced polarization) depends on the number of connected components of the dilation subgraph \(\Gamma_{\operatorname{dil}}\). The corresponding tropical ppav is the _Prym variety_\(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\) of the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\). ### Volume formulas and odd genus one decompositions Finally, we recall how to compute the volumes of \(\operatorname{Jac}(\Gamma)\) for a metric graph \(\Gamma\) and \(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\) for a free double cover \(\widetilde{\Gamma}\to\Gamma\). The formula for the volume of the tropical \(\operatorname{Prym}\) variety of a dilated double cover is proved in the next section and is the principal result of this paper. Let \(\Gamma\) be a metric graph of genus \(g\). 
Since \(\operatorname{Jac}(\Gamma)\) is a Riemannian manifold of dimension \(g\), one may expect the volume of \(\operatorname{Jac}(\Gamma)\) to be given by a homogeneous degree \(g\) polynomial in the edge lengths \(\ell(e)\) of \(\Gamma\). However, due to the dimensional peculiarity noted in Remark 2.2, it is the _square_ of the volume of \(\operatorname{Jac}(\Gamma)\) that is given by such a polynomial, with monomials corresponding to the complements of spanning trees of \(\Gamma\): **Theorem 2.3** (Theorem 1.5 of [1]).: _Let \(\Gamma\) be a metric graph of genus \(g\). The volume of the tropical Jacobian of \(\Gamma\) is given by_ \[\operatorname{Vol}^{2}(\operatorname{Jac}(\Gamma))=\sum_{F\subset E(\Gamma)}\prod_{e\in F}\ell(e). \tag{5}\] _The sum is taken over all \(g\)-element subsets \(F\subset E(\Gamma)\) with the property that \(\Gamma\backslash F\) is a tree._ It is elementary to verify that the sum on the right-hand side does not in fact depend on the choice of model of \(\Gamma\). We also note that if \(F\) has \(g\) elements, then \(\Gamma\backslash F\) is a tree if and only if it is connected. An analogous formula for the volume of the tropical Prym variety of a free double cover is the principal result of [11]. Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a free double cover of metric graphs of genera \(g(\widetilde{\Gamma})=2g-1\) and \(g(\Gamma)=g\), respectively. Since \(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\) has dimension \(g-1\), we expect (see Remark 2.2) \(\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))\) to be given by a degree \(g-1\) homogeneous polynomial in the edge lengths of \(\Gamma\). The monomials should correspond to certain \((g-1)\)-element subsets of \(E(\Gamma)\), playing the same role that complements of spanning trees do for \(\operatorname{Jac}(\Gamma)\). The correct notion turns out to be the following. **Definition 2.4**.: Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a free double cover of metric graphs of genera \(g(\widetilde{\Gamma})=2g-1\) and \(g(\Gamma)=g\), respectively. A set of \(g-1\) edges \(F\subset E(\Gamma)\) is called an _odd genus one decomposition_, or _ogod_, if every connected component of \(\Gamma\backslash F\) has connected preimage in \(\widetilde{\Gamma}\). The _rank_ \(r(F)\) of an odd genus one decomposition is the number of connected components of \(\Gamma\backslash F\). An elementary calculation shows that for a set \(F\subset E(\Gamma)\) of \(g-1\) edges with corresponding connected component decomposition \(\Gamma\backslash F=\Gamma_{1}\cup\dots\cup\Gamma_{k}\), either \(g(\Gamma_{i})=0\) for some \(i\) or \(g(\Gamma_{i})=1\) for all \(i\). In the former case \(\pi^{-1}(\Gamma_{i})\) is a trivial double cover (because \(\pi_{1}(\Gamma_{i})=0\)) and hence disconnected, while in the latter case \(\pi^{-1}(\Gamma_{i})\) is connected if and only if the double cover \(\pi^{-1}(\Gamma_{i})\to\Gamma_{i}\) is given (under the Galois correspondence) by the odd (i.e. nontrivial) element of \(\operatorname{Hom}(\pi_{1}(\Gamma_{i}),\mathbb{Z}/2\mathbb{Z})\simeq\mathbb{Z}/2\mathbb{Z}\). This explains our choice of terminology. The volume of the tropical Prym variety is calculated as a sum over the odd genus one decompositions, with each monomial additionally weighted according to the rank.
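Before turning to the Prym analogue stated next, Theorem 2.3 is easy to sanity-check computationally. The snippet below is our own illustrative example (using sympy): it verifies Equation (5) for the theta graph, two vertices joined by three edges of lengths \(\ell_{1},\ell_{2},\ell_{3}\), which has genus two.

```python
# Sanity check of Theorem 2.3 for the theta graph (illustrative example).
import sympy as sp

l1, l2, l3 = sp.symbols("l1 l2 l3", positive=True)

# Orient all three edges from one vertex to the other; a basis of H_1 is
# c1 = e1 - e2, c2 = e2 - e3.  The edge length pairing (3) gives the Gram matrix:
gram = sp.Matrix([[l1 + l2, -l2],
                  [-l2,     l2 + l3]])
vol_sq = sp.expand(gram.det())          # Vol^2(Jac) via Equation (2)

# Each single edge is a spanning tree, so the complements F are the three
# two-element subsets of edges, and the right-hand side of Equation (5) is:
rhs = l1*l2 + l1*l3 + l2*l3

assert sp.simplify(vol_sq - rhs) == 0   # the two sides agree
```

Both sides equal \(\ell_{1}\ell_{2}+\ell_{1}\ell_{3}+\ell_{2}\ell_{3}\), as expected.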
**Theorem 2.5** (Theorem 3.4 in [11]).: _Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a free double cover of metric graphs of genera \(2g-1\) and \(g\), respectively. The volume of the tropical \(\operatorname{Prym}\) variety of \(\pi:\widetilde{\Gamma}\to\Gamma\) is given by_ \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))=\sum_{F \subset E(\Gamma)}4^{r(F)-1}\prod_{e\in F}\ell(e), \tag{6}\] _where the sum is taken over all odd genus one decompositions \(F\subset E(\Gamma)\)._ **Remark 2.6**.: Given a metric graph \(\Gamma\) of genus \(g\), the complements of spanning trees are the bases of the _cographic matroid_\(\widetilde{\mathcal{M}}(\Gamma)\). Similarly, a free double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) determines (after choosing a spanning tree for \(\Gamma\)) the structure of a _signed graph_ on \(\Gamma\), and the odd genus one decompositions are in fact the bases of the corresponding _signed cographic matroid_\(\widetilde{\mathcal{M}}(\widetilde{\Gamma}/\Gamma)\) (see [22]). Hence the sums on the right-hand sides of Equations (5) and (6) are indexed by the bases of certain matroids naturally associated to \(\Gamma\) and \(\pi:\widetilde{\Gamma}\to\Gamma\), respectively. It turns out that the matroids \(\widetilde{\mathcal{M}}(\Gamma)\) and \(\widetilde{\mathcal{M}}(\widetilde{\Gamma}/\Gamma)\) play a fundamental role in the polyhedral geometry of \(\operatorname{Jac}(\Gamma)\) and \(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\). These tppavs can be effectively described using respectively the Abel-Jacobi map \(\operatorname{Sym}^{9}(\Gamma)\to\operatorname{Jac}(\Gamma)\) and the Abel-Prym map \(\operatorname{Sym}^{9-1}(\widetilde{\Gamma})\to\operatorname{Prym}( \widetilde{\Gamma}/\Gamma)\). The independent sets of the matroids correspond to the cells of the symmetric product on which these maps have full rank. Hence the bases correspond to the top-dimensional cells that define polyhedral decompositions of the tppavs, and thus give geometric meaning to Equations (5) and (6). ## 3. The volume formula In this section, we prove the main result of our paper, Theorem 3.3, which calculates the volume of the tropical Prym variety of a dilated double cover of metric graphs. ### Ogods for dilated double covers Our first task is to extend Definition 2.4 to dilated double covers. This turns out to be straightforward. **Definition 3.1**.: Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a dilated double cover of a connected metric graph \(\Gamma\), and let \(\operatorname{h}=\operatorname{g}(\widetilde{\Gamma})-\operatorname{g}(\Gamma)\). A set \(\operatorname{F}\subset\operatorname{E}(\Gamma)\) of \(\operatorname{h}\) edges of \(\Gamma\) is called an _ogod_ if no edge in \(\operatorname{F}\) is dilated, and if each connected component of \(\Gamma\backslash\operatorname{F}\) has connected preimage in \(\widetilde{\Gamma}\). The _rank_\(\operatorname{r}(\operatorname{F})\) of \(\operatorname{F}\) is the number of connected components of \(\Gamma\backslash\operatorname{F}\). A connected component \(\Gamma_{i}\) of \(\Gamma\backslash\operatorname{F}\) having a dilated vertex automatically has connected preimage in \(\widetilde{\Gamma}\). If a connected component \(\Gamma_{i}\) has no dilated vertices, then \(\pi^{-1}(\Gamma_{i})\) is connected only if \(\operatorname{g}(\Gamma_{i})\geq 1\), since a free double cover of a tree is trivial. To clarify exposition, and for future use, we give a more precise description of ogods for dilated double covers. 
**Lemma 3.2**.: _Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a dilated double cover of metric graphs, let \(\operatorname{F}\subset\operatorname{E}(\Gamma)\) be a set of \(\operatorname{h}=\operatorname{g}(\widetilde{\Gamma})-\operatorname{g}( \Gamma)\) undilated edges of \(\Gamma\), and let \(\Gamma\backslash\operatorname{F}=\Gamma_{1}\cup\dots\cup\Gamma_{k}\) be the decomposition of \(\Gamma\backslash\operatorname{F}\) into connected components. Then \(\operatorname{F}\) is an ogod if and only if each \(\Gamma_{i}\) satisfies one of the following (mutually exclusive) conditions:_ 1. \(\Gamma_{i}\) _contains a unique connected component of the dilation subgraph_ \(\Gamma_{\operatorname{dil}}\)_, and the genus_ \(\operatorname{g}(\Gamma_{i})\) _is equal to the genus of this component._ 2. \(\Gamma_{i}\) _has no dilated vertices or edges,_ \(\operatorname{g}(\Gamma_{i})=1\)_, and_ \(\Gamma_{i}\) _has connected preimage in_ \(\widetilde{\Gamma}\)_._ Proof.: It is clear that if each \(\Gamma_{i}\) is one of the above two types, then \(\operatorname{F}\) is an ogod. To prove the converse, we first assume that \(\pi:\widetilde{\Gamma}\to\Gamma\) is an edge-free cover, in which case the connected components of the dilation subgraph \(\Gamma_{\operatorname{dil}}\) are the dilated vertices. For a subgraph \(\Gamma_{0}\subset\Gamma\), possibly disconnected, introduce the quantity \[\widetilde{\operatorname{g}}(\Gamma_{0})=\#\operatorname{E}(\Gamma_{0})-\# \{\text{undilated vertices of }\Gamma_{0}\}.\] If \(\Gamma_{0}\) is connected, then \[\widetilde{\operatorname{g}}(\Gamma_{0})=\operatorname{g}(\Gamma_{0})-1+\# \mathsf{V}(\Gamma_{0}\cap\Gamma_{\operatorname{dil}})\geq-1.\] Now let \(\operatorname{F}\subset\operatorname{E}(\Gamma)\) be an \(\operatorname{h}\)-element set of undilated edges, and let \(\Gamma\backslash\operatorname{F}=\Gamma_{1}\cup\dots\cup\Gamma_{k}\) be the decomposition into connected components. It is clear that \[\widetilde{\operatorname{g}}(\Gamma\backslash\operatorname{F})=\sum_{ \operatorname{i}=1}^{k}\widetilde{\operatorname{g}}(\Gamma_{i}).\] On the other hand, we observe that \[h=g(\widetilde{\Gamma})-g(\Gamma)=\#E(\widetilde{\Gamma})-\#V(\widetilde{\Gamma})- (\#E(\Gamma)-\#V(\Gamma))=\#E(\Gamma)-\#\{\text{undilated vertices of }\Gamma\},\] therefore \[\widetilde{g}(\Gamma\backslash F)=\#E(\widetilde{\Gamma})-h-\#\{\text{undilated vertices of }\Gamma\}=0.\] Since each \(\widetilde{g}(\Gamma_{i})\geqslant-1\), it follows that either \(\widetilde{g}(\Gamma_{i})=-1\) for some \(i\) or \(\widetilde{g}(\Gamma_{i})=0\) for all \(i\). In the former case \(F\) is not an ogod, because the component \(\Gamma_{i}\) with \(\widetilde{g}(\Gamma_{i})=-1\) is a tree with no dilated vertices and hence \(\pi^{-1}(\Gamma_{i})\) is disconnected. In the latter case, each \(\Gamma_{i}\) is either a tree with a unique dilated vertex, in which case it satisfies property (1), or a genus one graph with no dilated vertices, in which case it satisfies property (2) if and only if it has connected preimage in \(\widetilde{\Gamma}\). This proves the lemma for the edge-free double cover \(\pi:\widetilde{\Gamma}\to\Gamma\). Now let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a double cover with edge dilation. We consider the edge-free double cover \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma^{\prime}\) obtained by contracting each connected component of the dilation subgraph of \(\Gamma\) to a separate dilated vertex. 
The graphs \(\Gamma\) and \(\Gamma^{\prime}\) have the same sets of undilated edges, and it is clear that the ogods of \(\Gamma\) and \(\Gamma^{\prime}\) are in rank-preserving bijection. Now let \(F\subset E(\Gamma)\) be a set of \(h\) undilated edges, let \(\Gamma\backslash F=\Gamma_{1}\cup\cdots\cup\Gamma_{k}\) be the decomposition into connected components, and let \(\Gamma^{\prime}\backslash F=\Gamma^{\prime}_{1}\cup\cdots\cup\Gamma^{\prime}_ {k}\) be the corresponding decomposition for \(\Gamma^{\prime}\). If \(\Gamma_{i}\) has no dilation then \(\Gamma^{\prime}_{i}=\Gamma_{i}\), while if \(\Gamma_{i}\) has dilation, then it satisfies property (1) if and only if \(\Gamma^{\prime}_{i}\) is a tree with a unique dilated vertex. Hence each \(\Gamma_{i}\) satisfies property (1) or (2) if and only if \(\Gamma^{\prime}_{i}\) does, in which case \(F\) is an ogod. Lemma 3.2 shows that the term "odd genus one decomposition" makes sense for edge-free covers, if we view each dilated vertex as having intrinsic genus one. However, the terminology breaks down for covers with edge dilation, since now the genus of \(\Gamma_{i}\) is determined by the genus of the corresponding dilation subgraph, which may be arbitrary. For this reason, we henceforth use the term "ogod" instead of "odd genus one decomposition". We also note that ogods of dilated double covers also correspond to bases of an associated matroid (see Remark 2.6) on the set of undilated edges of \(\Gamma\), which we plan to investigate in future work. We are now ready to state our main result. **Theorem 3.3**.: _Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a dilated double cover of metric graphs. The volume of the tropical Prym variety of \(\pi:\widetilde{\Gamma}\to\Gamma\) is given by_ \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))=2^{1-d( \widetilde{\Gamma}/\Gamma)}\sum_{F\subset E(\Gamma)}4^{\tau(F)-1}\prod_{e \in F}\ell(e). \tag{7}\] _The sum is taken over all \(h\)-element ogods of \(E(\Gamma)\) (see Definition 3.1), where \(h=g(\widetilde{\Gamma})-g(\Gamma)\) is the dimension of \(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\) and \(\tau(F)\) is the rank of the ogod, and \(d(\widetilde{\Gamma}/\Gamma)\) is the number of connected components of the dilation subgraph \(\Gamma_{\operatorname{dil}}\)._ We first consider several examples. **Example 3.4**.: Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a double cover such that every vertex of \(\Gamma\) is dilated and no edge of \(\Gamma\) is dilated. In this case \(g(\widetilde{\Gamma})=2\#E(\Gamma)-\#V(\Gamma)+1\) so \(h=\#E(\Gamma)\), and the only ogod of the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) is all of \(F=E(\Gamma)\), with \(\tau(F)=\#V(\Gamma)\). Since \(d(\widetilde{\Gamma}/\Gamma)=\#V(\Gamma)\), we see that the volume of the tropical Prym variety of \(\pi:\widetilde{\Gamma}\to\Gamma\) is equal to \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))=2^{\#V( \Gamma)-1}\prod_{e\in E(\Gamma)}\ell(e).\] **Example 3.5**.: We consider the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) shown on Figure 1. We have \(g(\widetilde{\Gamma})=6\) and \(g(\Gamma)=3\), so ogods are three-element subsets of the set \(\{e_{1},e_{2},e_{3},e_{4},e_{5}\}\) of undilated edges. An ogod cannot contain both \(e_{1}\) and \(e_{2}\), since the left undilated vertex will then have disconnected preimage, and similarly with \(e_{4}\) and \(e_{5}\). 
This leaves a total of four ogods of the following ranks: \[r(\{e_{1},e_{3},e_{4}\})=3,\quad r(\{e_{1},e_{3},e_{5}\})=2,\quad r(\{e_{2},e_{ 3},e_{4}\})=4,\quad r(\{e_{2},e_{3},e_{5}\})=3.\] The dilation subgraph has \(d(\widetilde{\Gamma}/\Gamma)=2\) connected components, therefore \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))=8x_{1 }x_{3}x_{4}+2x_{1}x_{3}x_{5}+32x_{2}x_{3}x_{4}+8x_{2}x_{3}x_{5},\quad x_{i}= \ell(e_{i}).\] **Example 3.6**.: Equation (7) shows that the Prym variety of a dilated double cover behaves discontinuously under edge contractions that change the number \(d(\widetilde{\Gamma}/\Gamma)\) of connected components of the dilation subgraph. Indeed, consider the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) shown on the left hand side of Figure 2. The double cover \(\pi\) has two ogods, \(\{e\}\) and \(\{f\}\), with \(r(\{e\})=2\) and \(r(\{f\})=1\). The left vertex is dilated, so \(d(\widetilde{\Gamma}/\Gamma)=1\) and \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))=4\ell( e)+\ell(f).\] The double cover \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma^{\prime}\) on the right hand side is obtained from \(\pi\) by contracting the loop f, creating a second dilated vertex. The edge \(e\) is the unique ogod and \(r(\{e\})=2\), but now \(d(\widetilde{\Gamma}^{\prime}/\Gamma^{\prime})=2\). Hence \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}^{\prime}/ \Gamma^{\prime}))=2\ell(e),\] which is not the limit of \(\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))\) as \(\ell(f)\to 0\), as one may expect. This is problematic from a moduli-theoretic viewpoint, and suggests that the original definition of the Prym variety of a dilated double cover should be revisited. We note that this phenomenon does not occur when deforming from a dilated double cover with connected dilation subgraph to a free double cover, since Equations (6) and (7) agree when \(d(\widetilde{\Gamma}/\Gamma)=1\). The proof of Theorem 3.3 consists of two distinct parts. On one hand, we show that the polynomial on the right-hand side of Equation (7) can be expressed in terms of the spanning trees of \(\widetilde{\Gamma}\) and \(\Gamma\). The idea is to deform a dilated double cover to a free double cover by a series of edge contractions and de-contractions, and use Theorem 2.5 as the base case. This part of the proof is purely graph-theoretic, and the main result is Theorem 3.10. On the other hand, we independently compute the relationship between the volumes of the tppavs \(\operatorname{Jac}(\widetilde{\Gamma})\), \(\operatorname{Jac}(\Gamma)\), and \(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\) for a dilated double cover \(\pi:\widetilde{\Gamma}\to\Gamma\), by studying the action of the pushforward and Figure 2. Discontinuity of the Prym variety under edge contraction pullback maps on the homology groups \(H_{1}(\widetilde{\Gamma},\mathbb{Z})\) and \(H_{1}(\Gamma,\mathbb{Z})\); the main result is Theorem 3.11. These homology calculations have appeared in [10], sharpening and correcting the results of [11], and we briefly reproduce them here. ### The volume polynomials It is convenient to separate the right hand sides of Equations (5) and (7) into stand-alone definitions: **Definition 3.7**.: Let \(\Gamma\) be a metric graph of genus \(g\). 
The _Jacobian polynomial_\(J(\Gamma)\) of \(\Gamma\) is the degree \(g\) homogeneous polynomial in the edge lengths of \(\Gamma\) given by \[J(\Gamma)=\sum_{C\subset E(\Gamma)}\prod_{e\in C}\ell(e),\] where the sum is taken over all \(g\)-element subsets \(C\subset E(\Gamma)\) such that \(\Gamma\backslash C\) is a tree. The following contraction-deletion formula for the Jacobian polynomial is elementary to verify, and we omit the proof. **Lemma 3.8**.: _Let \(\Gamma\) be a metric graph and let \(e\in E(\Gamma)\) be an edge of length \(\ell(e)\). Let \(\Gamma_{e}\) and \(\Gamma^{e}\) be the graphs obtained by contracting and removing \(e\), respectively. The Jacobian polynomial of \(\Gamma\) is expressed in terms of the Jacobian polynomials of \(\Gamma_{e}\) and \(\Gamma^{e}\) as follows:_ \[J(\Gamma)=\begin{cases}\ell(e)J(\Gamma_{e}),&e\text{ is a loop},\\ J(\Gamma_{e}),&e\text{ is a bridge},\\ J(\Gamma_{e})+\ell(e)J(\Gamma^{e}),&\text{otherwise}.\end{cases}\] **Definition 3.9**.: Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a double cover of metric graphs, and let \(h=g(\widetilde{\Gamma})-g(\Gamma)\). The _Prym polynomial_\(\Pr(\widetilde{\Gamma}/\Gamma)\) of \(\pi:\widetilde{\Gamma}\to\Gamma\) is the degree \(h\) homogeneous polynomial in the edge lengths of \(\Gamma\) given by \[\Pr(\widetilde{\Gamma}/\Gamma)=\sum_{F\subset E(\Gamma)}4^{\pi(F)-1}\prod_{e \in F}\ell(e), \tag{8}\] where the sum is taken over all ogods (see Definition 3.1) and \(\pi(F)\) is the rank of the ogod. We now determine the relationship between the three polynomials associated with a harmonic double cover. **Theorem 3.10**.: _Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a double cover of metric graphs. Then_ \[J(\widetilde{\Gamma})=2^{1-m_{d}(\widetilde{\Gamma}/\Gamma)+n_{d}(\widetilde{ \Gamma}/\Gamma)-2d(\widetilde{\Gamma}/\Gamma)}\Pr(\widetilde{\Gamma}/\Gamma)J (\Gamma). \tag{9}\] _where \(m_{d}(\widetilde{\Gamma}/\Gamma)\), \(n_{d}(\widetilde{\Gamma}/\Gamma)\), and \(d(\widetilde{\Gamma}/\Gamma)\) denote respectively the number of edges, vertices, and connected components of the dilation subgraph \(\Gamma_{\mathrm{dil}}\)._ We note that for an edge \(\widetilde{e}\in E(\widetilde{\Gamma})\) we have \[\ell(\widetilde{e})=\begin{cases}\ell(\pi(\widetilde{e})),&\pi(\widetilde{e}) \text{ is undilated},\\ \ell(\pi(\widetilde{e}))/2,&\pi(\widetilde{e})\text{ is dilated},\end{cases}\] so we may indeed view \(J(\widetilde{\Gamma})\) as a polynomial in the edge lengths of \(\Gamma\). Of course, it is not a priori clear why \(J(\widetilde{\Gamma})\) should be divisible by \(J(\Gamma)\). Proof of Theorem 3.10.: We prove this result first for free double covers, then for edge-free double covers, and finally for double covers with edge dilation. **Free double covers.** The proof for a free double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) follows directly from the results of [1]. 
Indeed, according to Equation (5) we have \[\operatorname{Vol}^{2}(\operatorname{Jac}(\widetilde{\Gamma}))=\operatorname{J }(\widetilde{\Gamma}),\quad\operatorname{Vol}^{2}(\operatorname{Jac}(\Gamma))= \operatorname{J}(\Gamma).\] On the other hand, Equation (6) states that \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))= \operatorname{Pr}(\widetilde{\Gamma}/\Gamma),\] and the relationship between the three volumes is given by Proposition 3.6 of [1]: \[\operatorname{Vol}^{2}(\operatorname{Jac}(\widetilde{\Gamma}))=2\operatorname{ Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))\operatorname{ Vol}^{2}(\operatorname{Jac}(\Gamma)). \tag{10}\] It follows that \[\operatorname{J}(\widetilde{\Gamma})=2\operatorname{Pr}(\widetilde{\Gamma}/ \Gamma)\operatorname{J}(\Gamma),\] which is Equation (9) for a free double cover, for which \(\operatorname{m}_{\operatorname{d}}(\widetilde{\Gamma}/\Gamma)=\operatorname{ n}_{\operatorname{d}}(\widetilde{\Gamma}/\Gamma)=\operatorname{d}( \widetilde{\Gamma}/\Gamma)=0\). **Edge-free double covers.** We now prove the theorem for edge-free double covers by induction on the number of dilated vertices. For such a cover \(\pi:\widetilde{\Gamma}\to\Gamma\), we have \(\operatorname{n}_{\operatorname{d}}(\widetilde{\Gamma}/\Gamma)=\operatorname{ d}(\widetilde{\Gamma}/\Gamma)\) and \(\operatorname{m}_{\operatorname{d}}(\widetilde{\Gamma}/\Gamma)=0\). Consider an edge-free cover \(\pi:\widetilde{\Gamma}\to\Gamma\) with a dilated vertex \(v\in V(\Gamma)\), and let \(\widetilde{v}=\pi^{-1}(v)\). We consider the double cover \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma^{\prime}\) obtained by resolving the dilated vertex \(v\) into an undilated vertex by a loop attachment (see Figure 3). Specifically, \(\Gamma^{\prime}\) consists of \(\Gamma\) with a loop \(e\) of length \(\ell(e)\) attached to \(v\), while \(\widetilde{\Gamma}^{\prime}\) consists of \(\widetilde{\Gamma}\) with the vertex \(\widetilde{v}\) replaced by a pair of vertices \(\widetilde{v}^{\pm}\) connected by two edges \(\widetilde{e}^{\pm}\). For each edge \(f\in E(\Gamma)\) rooted at \(v\) there are two edges \(\widetilde{f}^{\pm}\in E(\widetilde{\Gamma})\) rooted at \(\widetilde{v}\), and we root one at each of the \(\widetilde{v}^{\pm}\in V(\widetilde{\Gamma}^{\prime})\) arbitrarily. The map \(\pi^{\prime}\) sends \(\widetilde{v}^{\pm}\) to \(v\), \(\widetilde{e}^{\pm}\) to \(e\), and is equal to \(\pi\) on the rest of \(\widetilde{\Gamma}^{\prime}\). The vertex \(v\) on the resulting harmonic double cover \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma^{\prime}\) is now undilated, hence \[\operatorname{n}_{\operatorname{d}}(\widetilde{\Gamma}/\Gamma)=\operatorname{ d}(\widetilde{\Gamma}/\Gamma)=\operatorname{n}_{\operatorname{d}}( \widetilde{\Gamma}^{\prime}/\Gamma^{\prime})+1=\operatorname{d}(\widetilde{ \Gamma}^{\prime}/\Gamma^{\prime})+1,\quad\operatorname{m}_{\operatorname{d}}( \widetilde{\Gamma}/\Gamma)=\operatorname{m}_{\operatorname{d}}(\widetilde{ \Gamma}^{\prime}/\Gamma^{\prime})=0, \tag{11}\] and we assume by induction that Equation (9) holds for \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma^{\prime}\). Figure 3. Resolving a dilated vertex by adding a loop. We now compare the Jacobian and Prym polynomials of the two covers \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma^{\prime}\) and \(\pi:\widetilde{\Gamma}\to\Gamma\). First, we note that \(g(\Gamma^{\prime})=g(\Gamma)+1\). 
Since \(e\) is a loop, it lies in the complement of each spanning tree, and therefore \[J(\Gamma^{\prime})=\ell(e)J(\Gamma). \tag{12}\] Similarly, we have \(g(\widetilde{\Gamma}^{\prime})=g(\widetilde{\Gamma})+1\), and we evaluate \(J(\widetilde{\Gamma}^{\prime})\) in terms of \(J(\widetilde{\Gamma})\). We classify complements of spanning trees \(C^{\prime}\subset E(\widetilde{\Gamma}^{\prime})\) according to whether they contain the edges \(\widehat{e}^{\pm}\): 1. \(\widehat{e}^{\pm}\in C^{\prime}\). There is a unique path from \(\widehat{\nu}^{+}\) to \(\widehat{\nu}^{-}\) along the spanning tree \(\widetilde{\Gamma}^{\prime}\backslash C^{\prime}\) of \(\widetilde{\Gamma}^{\prime}\), which corresponds to a closed loop in \(\widetilde{\Gamma}\) starting and ending at \(\widehat{\nu}\). Hence it follows that the corresponding subset \(C=C^{\prime}\backslash\{\widehat{e}^{\pm}\}\subset E(\widetilde{\Gamma})\) is not the complement of a spanning tree of \(\widetilde{\Gamma}\). Instead, \(C\) is the complement of a unique spanning tree in the graph \(\widetilde{\Gamma}^{\prime}_{0}\) obtained from \(\widetilde{\Gamma}^{\prime}\) by deleting the edges \(\widehat{e}^{\pm}\), and every such complement is obtained in this way. 2. \(\widehat{e}^{+}\in C^{\prime}\) and \(\widehat{e}^{-}\notin C^{\prime}\). The spanning tree \(T^{\prime}=\widetilde{\Gamma}^{\prime}\backslash C^{\prime}\) contains the edge \(\widehat{e}^{-}\), and \(T=T^{\prime}\backslash\{\widehat{e}^{-}\}\) is a spanning tree of \(\widetilde{\Gamma}\) with complementary edge set \(C=C^{\prime}\backslash\{\widehat{e}^{+}\}\subset E(\widetilde{\Gamma})\). Conversely, if \(C\subset E(\widetilde{\Gamma})\) is the complement of a spanning tree \(T=\widetilde{\Gamma}\backslash C\) of \(\widetilde{\Gamma}\), then \(C^{\prime}=C\cup\widehat{e}^{+}\subset E(\widetilde{\Gamma}^{\prime})\) is the complement of a spanning tree \(T^{\prime}=T\cup\{\widehat{e}^{-}\}\) of \(\widetilde{\Gamma}^{\prime}\). 3. \(\widehat{e}^{-}\in C^{\prime}\) and \(\widehat{e}^{+}\notin C^{\prime}\). This case is symmetric to the one above: \(C=C^{\prime}\backslash\{\widehat{e}^{-}\}\) is the complement of a spanning tree of \(\widetilde{\Gamma}\), and every such complement is obtained in this way. 4. \(\widehat{e}^{\pm}\notin C^{\prime}\). This is not possible, since a spanning tree of \(\widetilde{\Gamma}^{\prime}\) may not contain both edges \(\widehat{e}^{\pm}\). Expressing the sum that defines \(J(\widetilde{\Gamma}^{\prime})\) according to these four types, we see that \[J(\widetilde{\Gamma}^{\prime})=2\ell(e)J(\widetilde{\Gamma})+\ell(e)^{2}J( \widetilde{\Gamma}^{\prime}_{0}). \tag{13}\] Finally, we compare \(\Pr(\widetilde{\Gamma}^{\prime}/\Gamma^{\prime})\) and \(\Pr(\widetilde{\Gamma}/\Gamma)\). Since \(g(\widetilde{\Gamma}^{\prime})-g(\Gamma^{\prime})=g(\widetilde{\Gamma})-g(\Gamma)\), these polynomials have the same degree. Let \(F\subset E(\Gamma)\) be an ogod of the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\). The connected component of \(\Gamma\backslash F\) containing \(\nu\) has connected pre-image in \(\widetilde{\Gamma}\) because \(\nu\) is dilated, and this remains true when we undilate \(\nu\) and replace \(\widehat{\nu}\) with \(\widehat{\nu}^{\pm}\). Hence \(F\subset E(\Gamma^{\prime})\) is also an ogod of the double cover \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma\) of the same rank. 
Conversely, if \(F^{\prime}\subset E(\Gamma^{\prime})\) is an ogod of the double cover \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma\) and \(e\notin F^{\prime}\), then \(F^{\prime}\subset E(\Gamma)\), and the connected component of \(\Gamma^{\prime}\backslash F^{\prime}\) containing \(e\) (and having connected pre-image in \(\widetilde{\Gamma}^{\prime}\)) corresponds to a connected component of \(\Gamma\backslash F^{\prime}\) containing \(\nu\) (which necessarily has connected pre-image). It follows that there is a rank-preserving bijection between the ogods \(F\subset E(\Gamma)\) of the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\) and the ogods \(F^{\prime}\subset E(\Gamma^{\prime})\) of the double cover \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma^{\prime}\) not containing \(e\), and therefore \[\Pr(\widetilde{\Gamma}^{\prime}/\Gamma^{\prime})=\Pr(\widetilde{\Gamma}/\Gamma) +\ell(e)Q, \tag{14}\] where the term \(Q\) is irrelevant to us. We now put everything together. By induction, Equation (9) holds for the double cover \(\pi^{\prime}:\widetilde{\Gamma}^{\prime}\to\Gamma^{\prime}\). Plugging in Equations (12), (13), (14), taking the linear in \(\ell(e)\) term, and using (11), we see that Equation (9) holds for the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\). ### Double covers with edge dilation Finally, we prove the theorem for arbitrary dilated double covers by induction on the number of dilated edges, the base case being that of edge-free double covers. Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a dilated double cover with a dilated edge \(e\in E(\Gamma)\) of length \(\ell(e)\), and let \(\widehat{e}=\pi^{-1}(e)\) be its preimage of length \(\ell(\widehat{e})=\ell(e)/2\). We contract the edges \(e\) and \(\widehat{e}\) to obtain a dilated double cover \(\pi_{e}:\widetilde{\Gamma}_{e}\to\Gamma_{e}\) with \[m_{d}(\widehat{\Gamma}_{\widehat{e}}/\Gamma_{e})=m_{d}(\widetilde{\Gamma}/ \Gamma)-1,\quad d(\widehat{\Gamma}_{\widehat{e}}/\Gamma_{e})=d(\widetilde{ \Gamma}/\Gamma), \tag{15}\] and we assume by induction that Equation (9) holds for \(\pi_{e}:\widetilde{\Gamma}_{\widehat{e}}\to\Gamma_{e}\). It is clear that \(g(\widehat{\Gamma}_{\widetilde{e}})-g(\Gamma_{e})=g(\widehat{\Gamma})-g(\Gamma)\), so \(\Pr(\widehat{\Gamma}_{\widetilde{e}}/\Gamma_{e})\) and \(\Pr(\widehat{\Gamma}/\Gamma)\) have the same degrees. Since ogods do not contain dilated edges, and the dilation subgraphs of \(\Gamma\) and \(\Gamma_{e}\) have the same sets of connected components, we see that there is a rank-preserving bijection between the ogods of the double covers \(\pi_{e}:\widehat{\Gamma}_{\widetilde{e}}\to\Gamma_{e}\) and \(\pi:\widehat{\Gamma}\to\Gamma\). Therefore \[\Pr(\widehat{\Gamma}_{\widetilde{e}}/\Gamma_{e})=\Pr(\widehat{\Gamma}/\Gamma). \tag{16}\] We now consider the edge types of \(e\) and \(\widetilde{e}\) and apply Lemma 3.8. 1. If \(e\in E(\Gamma)\) is a loop, then \(\widetilde{e}\in E(\widetilde{\Gamma})\) is also a loop (of half the length) because the root vertex of \(e\) is dilated and hence has a unique preimage at which both ends of \(\widetilde{e}\) are rooted. Then by Lemma 3.8 we have \[J(\Gamma)=\ell(e)J(\Gamma_{e}),\quad J(\widetilde{\Gamma})=\frac{\ell(e)}{2}J( \widetilde{\Gamma}_{\widetilde{e}}).\] (17) Plugging Equations (15), (16), and (17) into Equation (9) and noting that \(n_{d}(\widehat{\Gamma}_{\widetilde{e}}/\Gamma_{e})=n_{d}(\widehat{\Gamma}/\Gamma)\), we see that Equation (9) holds for the double cover \(\pi:\widehat{\Gamma}\to\Gamma\). 2. 
If \(e\in E(\Gamma)\) is a bridge, then \(\widetilde{e}\in E(\widetilde{\Gamma})\) is also a bridge because \(\pi^{-1}(e)=\widetilde{e}\). By Lemma 3.8 we have \[J(\Gamma)=J(\Gamma_{e}),\quad J(\widetilde{\Gamma})=J(\widetilde{\Gamma}_{ \widetilde{e}}),\] (18) and in this case \(n_{d}(\widehat{\Gamma}_{\widetilde{e}}/\Gamma_{e})=n_{d}(\widehat{\Gamma}/ \Gamma)-1\), since contracting \(e\) joins the dilated end vertices of \(e\) into a single dilated vertex. Plugging this and Equations (15), (16) and (18) into Equation (9), we see that Equation (9) holds for the double cover \(\pi:\widehat{\Gamma}\to\Gamma\). 3. Finally, suppose that \(e\in E(\Gamma)\) is neither a bridge nor a loop. There exists a path in \(\Gamma\) between the (dilated) end vertices of \(\widetilde{e}\) that bypasses \(e\), and this path lifts to a path in \(\widetilde{\Gamma}\) connecting the end vertices of \(\widetilde{e}\) and bypassing \(\widetilde{e}\). Hence \(\widetilde{e}\in E(\widetilde{\Gamma})\) is neither a bridge nor a loop, and by Lemma 3.8 we have \[J(\Gamma)=J(\Gamma_{e})+\ell(e)J(\Gamma^{e}),\quad J(\widetilde{\Gamma})=J( \widetilde{\Gamma}_{\widetilde{e}})+\frac{\ell(e)}{2}J(\widetilde{\Gamma}^{ \widetilde{e}}),\] (19) and \(n_{d}(\widehat{\Gamma}_{\widetilde{e}}/\Gamma_{e})=n_{d}(\widetilde{\Gamma}/ \Gamma)-1\) as in the previous case. We need to additionally consider the harmonic double cover \(\pi^{e}:\widehat{\Gamma}^{\widetilde{e}}\to\Gamma^{e}\) obtained by deleting the edges \(e\) and \(\widetilde{e}\). Since \(\Gamma^{e}\) has one fewer dilated edge than \(\Gamma\), we assume by induction that Equation (9) holds for the double cover \(\pi^{e}:\widehat{\Gamma}^{\widetilde{e}}\to\Gamma^{e}\), and we have \[m_{d}(\widehat{\Gamma}^{\widetilde{e}}/\Gamma^{e})=m_{d}(\widehat{\Gamma}/ \Gamma)-1,\quad n_{d}(\widehat{\Gamma}^{\widetilde{e}}/\Gamma^{e})=n_{d}( \widetilde{\Gamma}/\Gamma).\] (20) Since \(e\) is dilated, there is again a bijection between the ogods of the double covers \(\pi^{e}:\widehat{\Gamma}^{\widetilde{e}}\to\Gamma^{e}\) and \(\pi:\widehat{\Gamma}\to\Gamma\). However, the ranks of the ogods may be different, since removing \(e\) may increase the number of connected components of the dilation subgraph. We consider two subcases: 1. The edge \(e\) is not a bridge edge of the dilation subgraph \(\Gamma_{dil}\), in which case \(d(\widehat{\Gamma}^{\widetilde{e}}/\Gamma^{e})=d(\widehat{\Gamma}/\Gamma)\). Given an ogod \(F\subset E(\Gamma)\) of \(\Gamma\), the connected component \(\Gamma_{i}\) of \(\Gamma\backslash F\) containing \(e\) is not disconnected by removing \(e\). It follows that the ranks of \(F\) as an ogod on \(\Gamma\) and \(\Gamma^{e}\) agree, and hence \(\Pr(\widehat{\Gamma}/\Gamma)=\Pr(\widehat{\Gamma}^{\widetilde{e}}/\Gamma^{e})\). Plugging this and Equations (15), (16), (19), and (20) into Equation (9), we see that Equation (9) holds for the double cover \(\pi:\widehat{\Gamma}\to\Gamma\). 2. The edge \(e\) is a bridge edge of the dilation subgraph \(\Gamma_{dil}\), so that \(d(\widehat{\Gamma}^{\widetilde{e}}/\Gamma^{e})=d(\widehat{\Gamma}/\Gamma)+1\). Let \(F\subset E(\Gamma)\) be an ogod of \(\Gamma\), and let \(\Gamma_{i}\) be the connected component of \(\Gamma\backslash F\) containing \(e\). By Lemma 3.2, the dilation subgraph of \(\Gamma_{i}\) is connected and has the same genus as \(\Gamma_{i}\). It follows that \(e\) is in fact a bridge edge of \(\Gamma_{i}\) itself, not just its dilation subgraph. 
Hence \(\Gamma^{e}\backslash F\) has one more connected component than \(\Gamma\backslash F\), so the rank of \(F\) as an ogod of \(\Gamma^{e}\) is one greater than its rank as an ogod of \(\Gamma\), and therefore \(\Pr(\widetilde{\Gamma}/\Gamma)=\Pr(\widetilde{\Gamma^{e}}/\Gamma^{e})/4\). Plugging this and the remaining formulas into Equation (9), we see that Equation (9) holds for the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\). ### The volumes of the tppavs We now compute the relationship between the volumes of the three tppavs associated to a dilated double cover \(\pi:\widetilde{\Gamma}\to\Gamma\). The main result is the following theorem. **Theorem 3.11**.: _Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a dilated double cover of metric graphs. The volume of the tropical Prym variety of \(\pi\) is given by_ \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))=2^{ \operatorname{m_{d}}(\widetilde{\Gamma}/\Gamma)-\operatorname{n_{d}}( \widetilde{\Gamma}/\Gamma)+\operatorname{d}(\widetilde{\Gamma}/\Gamma)} \frac{\operatorname{Vol}^{2}(\operatorname{Jac}(\widetilde{\Gamma}))}{ \operatorname{Vol}^{2}(\operatorname{Jac}(\Gamma))}, \tag{21}\] _where \(\operatorname{m_{d}}(\widetilde{\Gamma}/\Gamma)\), \(\operatorname{n_{d}}(\widetilde{\Gamma}/\Gamma)\), and \(\operatorname{d}(\widetilde{\Gamma}/\Gamma)\) are respectively the numbers of edges, vertices, and connected components of the dilation subgraph \(\Gamma_{\operatorname{dil}}\)._ Before giving the proof, we consider an elementary example. **Example 3.12**.: Let \(\Gamma\) be a metric graph of genus \(g\), and let \(\pi:\widetilde{\Gamma}\to\Gamma\) be the double cover such that \(\Gamma_{\operatorname{dil}}=\Gamma\), so that \(\pi\) is a factor two isometry. Since \(g(\widetilde{\Gamma})=g(\Gamma)\) the Prym variety \(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)\) is a point, and its (zero-dimensional) volume is formally equal to one. On the other hand, the exponent in the right hand side of Equation (21) is the genus of \(\Gamma\), so we see that \[\operatorname{Vol}^{2}(\operatorname{Jac}(\widetilde{\Gamma}))=2^{-g(\Gamma)} \operatorname{Vol}^{2}(\operatorname{Jac}(\Gamma)).\] This clearly agrees with Theorem 2.3, since each edge has half the length in \(\widetilde{\Gamma}\) as in \(\Gamma\), and thus the Jacobians of \(\widetilde{\Gamma}\) and \(\Gamma\) differ by scaling by a factor of \(2\). The principal technical result required for the proof is Proposition 3.13, which calculates the pushforward, pullback, and involution maps \[\pi_{*}:H_{1}(\widetilde{\Gamma},\mathbb{Z})\to H_{1}(\Gamma,\mathbb{Z}), \quad\pi_{*}:H_{1}(\widetilde{\Gamma},\mathbb{Z})\to H_{1}(\Gamma,\mathbb{Z}),\quad\iota_{*}:H_{1}(\widetilde{\Gamma},\mathbb{Z})\to H_{1}(\widetilde{ \Gamma},\mathbb{Z})\] in terms of explicit bases of \(H_{1}(\widetilde{\Gamma},\mathbb{Z})\) and \(H_{1}(\Gamma,\mathbb{Z})\). This result recently appeared in [20], improving and correcting earlier results in [10], and we restate it here for convenience. We then calculate the volumes of the tppavs using Equation (2). We also note that, unlike Theorem 3.10, the relationship between the volumes in the case of a free double cover is given by a different formula (Equation (10), which differs from Equation (21) by a factor of two). 
Morally, this is due to the fact that the kernel of the norm map \(\operatorname{Nm}:\operatorname{Jac}(\widetilde{\Gamma})\to\operatorname{Jac} (\Gamma)\) has two connected components if \(\pi:\widetilde{\Gamma}\to\Gamma\) is free and one if it is dilated (see Theorem 1.5.7 in [10]). Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a dilated double cover, and introduce the invariants \[A=g(\Gamma)-\operatorname{m_{d}}+\operatorname{n_{d}}-\operatorname{d},\quad B =\operatorname{d}-1,\quad C=\operatorname{m_{d}}-\operatorname{n_{d}}+ \operatorname{d}, \tag{22}\] where \(\operatorname{m_{d}}\), \(\operatorname{n_{d}}\), and \(\operatorname{d}\) denote respectively the number of edges, vertices, and connected components of the dilation subgraph \(\Gamma_{\operatorname{dil}}\). We note that \[A+B=g(\Gamma)-\operatorname{m_{d}}+\operatorname{n_{d}}-1=|E(\Gamma)|-|V( \Gamma)|-\operatorname{m_{d}}+\operatorname{n_{d}}=g(\widetilde{\Gamma})-g(\Gamma)\] is the dimension of the Prym variety of the double cover \(\pi:\widetilde{\Gamma}\to\Gamma\). We explicitly describe the induced maps on the homology groups: **Proposition 3.13** (Proposition 4.20 in [222]).: _Let \(\pi:\widehat{\Gamma}\to\Gamma\) be a dilated double cover of metric graphs. There exists a basis \(\alpha_{1},\dots,\alpha_{A}\), \(\gamma_{1},\dots,\gamma_{C}\) of \(H_{1}(\Gamma,\mathbb{Z})\) and a basis \(\widetilde{\alpha}_{1}^{\pm},\dots,\widetilde{\alpha}_{A}^{\pm}\), \(\widetilde{\beta}_{1},\dots,\widetilde{\beta}_{B}\), \(\widetilde{\gamma}_{1},\dots,\widetilde{\gamma}_{C}\) of \(H_{1}(\widehat{\Gamma},\mathbb{Z})\) such that_ \[\iota_{*}(\widetilde{\alpha}_{i}^{\pm}) =\widetilde{\alpha}_{i}^{\mp}, \pi_{*}(\widetilde{\alpha}_{i}^{\pm}) =\alpha_{i}, \pi^{*}(\alpha_{i}) =\widetilde{\alpha}_{i}^{+}+\widetilde{\alpha}_{i}^{-}, i =1,\dots,A,\] \[\iota_{*}(\widetilde{\beta}_{j}) =-\widetilde{\beta}_{j}, \pi_{*}(\widetilde{\beta}_{j}) =0, \mathrm{j} =1,\dots,B,\] \[\iota_{*}(\widetilde{\gamma}_{k}) =\widetilde{\gamma}_{k}, \pi_{*}(\widetilde{\gamma}_{k}) =\gamma_{k}, \pi^{*}(\gamma_{k}) =2\widetilde{\gamma}_{k}, k =1,\dots,C.\] We now show how to define a principal polarization on the tropical Prym variety \(\mathrm{Prym}(\widehat{\Gamma}/\Gamma)\) associated to a dilated double cover \(\pi:\widehat{\Gamma}\to\Gamma\). Recall that the underlying integral torus of \(\mathrm{Prym}(\widehat{\Gamma}/\Gamma)\) is given by the triple \((K,K^{\prime},[\cdot,\cdot]_{P})\), where \(K=(\mathrm{Coker}\pi^{*})^{\mathrm{tf}}\), \(K^{\prime}=\mathrm{Ker}\,\pi_{*}\), and the intersection pairing \([\cdot,\cdot]_{P}:K^{\prime}\times K\to\mathbb{R}\) is induced from the pairing \(H_{1}(\widetilde{\Gamma},\mathbb{Z})\times H_{1}(\widetilde{\Gamma},\mathbb{Z}) \to\mathbb{R}\) on \(\widehat{\Gamma}\). The principal polarization \(\xi=\mathrm{Id}:H_{1}(\widehat{\Gamma},\mathbb{Z})\to H_{1}(\widehat{\Gamma},\mathbb{Z})\) induces a polarization \(\xi_{|P}:K^{\prime}\to K\), which is not principal in general. The structure of this polarization was computed in [222]. **Proposition 3.14** (Proposition 4.21 in [222]).: _There exists a principal polarization \(\zeta:K^{\prime}\to K\) on \(\mathrm{Prym}(\widehat{\Gamma}/\Gamma)\) with respect to which the induced polarization \(\xi_{|P}\) has type (i.e. Smith normal form) \((1,\dots,1,2,\dots,2)\), where the number of \(1\)'s and \(2\)'s is equal to \(B\) and \(A\), respectively._ We are now ready to prove Theorem 3.11. 
Proof of Theorem 3.11.: Let \(\widetilde{\mathcal{B}}=\{\widetilde{\alpha}_{i}^{\pm},\widetilde{\beta}_{j},\widetilde{\gamma}_{k}\}\) be the \(\mathbb{Z}\)-basis of \(H_{1}(\widetilde{\Gamma},\mathbb{Z})\) constructed in Proposition 3.13. By Equation (2) we have \[\mathrm{Vol}^{2}(\mathrm{Jac}(\widetilde{\Gamma}))=\mathrm{Gram}(\widetilde{\mathcal{B}})_{\widetilde{\Gamma}},\] where the subscript \(\widetilde{\Gamma}\) is there to remind us that the entries in the Gram determinant are computed using the inner product \((\cdot,\cdot)_{\widetilde{\Gamma}}\). Introduce the following alternative \(\mathbb{Q}\)-basis \(\widetilde{\mathcal{B}}^{\prime}\) of \(H_{1}(\widetilde{\Gamma},\mathbb{Z})\): \[\widetilde{\mathcal{B}}^{\prime}=\widetilde{\mathcal{B}}^{\prime}_{1}\cup\widetilde{\mathcal{B}}^{\prime}_{2},\quad\widetilde{\mathcal{B}}^{\prime}_{1}=\{\widetilde{\alpha}_{i}^{+}+\widetilde{\alpha}_{i}^{-},2\widetilde{\gamma}_{k}\},\quad\widetilde{\mathcal{B}}^{\prime}_{2}=\{\widetilde{\alpha}_{i}^{+}-\widetilde{\alpha}_{i}^{-},\widetilde{\beta}_{j}\}.\] The change-of-basis matrix from \(\widetilde{\mathcal{B}}\) to \(\widetilde{\mathcal{B}}^{\prime}\) has determinant \(\pm 2^{-A-C}\) (see Equation (22)), hence \[\mathrm{Gram}(\widetilde{\mathcal{B}})_{\widetilde{\Gamma}}=2^{-2A-2C}\,\mathrm{Gram}(\widetilde{\mathcal{B}}^{\prime})_{\widetilde{\Gamma}}.\] Now let \(\widetilde{\delta}_{1}\in\widetilde{\mathcal{B}}^{\prime}_{1}\) and \(\widetilde{\delta}_{2}\in\widetilde{\mathcal{B}}^{\prime}_{2}\) be two cycles. By Proposition 3.13 we know that \(\iota_{*}(\widetilde{\delta}_{1})=\widetilde{\delta}_{1}\) and \(\iota_{*}(\widetilde{\delta}_{2})=-\widetilde{\delta}_{2}\). Since \(\iota_{*}\) preserves the inner product \((\cdot,\cdot)_{\widetilde{\Gamma}}\), we have \[(\widetilde{\delta}_{1},\widetilde{\delta}_{2})_{\widetilde{\Gamma}}=(\iota_{*}(\widetilde{\delta}_{1}),\iota_{*}(\widetilde{\delta}_{2}))_{\widetilde{\Gamma}}=-(\widetilde{\delta}_{1},\widetilde{\delta}_{2})_{\widetilde{\Gamma}}.\] Hence the elements of \(\widetilde{\mathcal{B}}^{\prime}_{1}\) and \(\widetilde{\mathcal{B}}^{\prime}_{2}\) are orthogonal to one another, and therefore \[\mathrm{Gram}(\widetilde{\mathcal{B}}^{\prime})_{\widetilde{\Gamma}}=\mathrm{Gram}(\widetilde{\mathcal{B}}^{\prime}_{1})_{\widetilde{\Gamma}}\,\mathrm{Gram}(\widetilde{\mathcal{B}}^{\prime}_{2})_{\widetilde{\Gamma}}.\] Now let \(\mathcal{B}=\{\alpha_{i},\gamma_{k}\}\) be the basis of \(H_{1}(\Gamma,\mathbb{Z})\) given in Proposition 3.13; then \(\mathrm{Gram}(\mathcal{B})_{\Gamma}\) computes \(\mathrm{Vol}^{2}(\mathrm{Jac}(\Gamma))\) by Equation (2). On the other hand, \(\pi^{*}(\mathcal{B})=\widetilde{\mathcal{B}}^{\prime}_{1}\), and for any \(\delta_{1},\delta_{2}\in H_{1}(\Gamma,\mathbb{Z})\) by Equation (4) we have \[(\pi^{*}(\delta_{1}),\pi^{*}(\delta_{2}))_{\widetilde{\Gamma}}=2(\delta_{1},\delta_{2})_{\Gamma},\] implying that \[\mathrm{Gram}(\widetilde{\mathcal{B}}^{\prime}_{1})_{\widetilde{\Gamma}}=2^{A+C}\,\mathrm{Gram}(\mathcal{B})_{\Gamma}=2^{A+C}\,\mathrm{Vol}^{2}(\mathrm{Jac}(\Gamma)).\] Finally, we note that \(\widetilde{\mathcal{B}}^{\prime}_{2}\) is a basis for \(K^{\prime}=\mathrm{Ker}\,\pi_{*}\), so the Gram determinant \(\mathrm{Gram}(\widetilde{\mathcal{B}}^{\prime}_{2})_{\widetilde{\Gamma}}\) computes the square of the volume of the tropical Prym variety, but with respect to the polarization \(\xi_{|P}\) induced from \(\operatorname{Jac}(\widetilde{\Gamma})\).
By Proposition 3.14, the volume with respect to the intrinsic principal polarization is obtained by re-scaling as follows: \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))=\operatorname{Gram}(\widetilde{\mathcal{B}}^{\prime}_{2})_{P}=2^{-A}\operatorname{Gram}(\widetilde{\mathcal{B}}^{\prime}_{2})_{\widetilde{\Gamma}}.\] Putting everything together, we see that \[\operatorname{Vol}^{2}(\operatorname{Jac}(\widetilde{\Gamma}))=\operatorname{Gram}(\widetilde{\mathcal{B}})_{\widetilde{\Gamma}}=2^{-2A-2C}\operatorname{Gram}(\widetilde{\mathcal{B}}^{\prime})_{\widetilde{\Gamma}}=2^{-2A-2C}\operatorname{Gram}(\widetilde{\mathcal{B}}^{\prime}_{1})_{\widetilde{\Gamma}}\operatorname{Gram}(\widetilde{\mathcal{B}}^{\prime}_{2})_{\widetilde{\Gamma}}\] \[=2^{-2A-2C}\cdot 2^{A+C}\operatorname{Gram}(\mathcal{B})_{\Gamma}\cdot 2^{A}\operatorname{Gram}(\widetilde{\mathcal{B}}^{\prime}_{2})_{P}=2^{-C}\operatorname{Vol}^{2}(\operatorname{Jac}(\Gamma))\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma)),\] which completes the proof. ### Proof of the main theorem We are now ready to put everything together. Proof of Theorem 3.3.: Let \(\pi:\widetilde{\Gamma}\to\Gamma\) be a dilated double cover of metric graphs, and let \(m_{d}\), \(n_{d}\), and \(d\) denote respectively the number of edges, vertices, and connected components of the dilation subgraph \(\Gamma_{\mathrm{dil}}\). From Theorems 3.11, 2.3, and 3.10, respectively, we have \[\operatorname{Vol}^{2}(\operatorname{Prym}(\widetilde{\Gamma}/\Gamma))=2^{m_{d}-n_{d}+d}\frac{\operatorname{Vol}^{2}(\operatorname{Jac}(\widetilde{\Gamma}))}{\operatorname{Vol}^{2}(\operatorname{Jac}(\Gamma))}=2^{m_{d}-n_{d}+d}\frac{J(\widetilde{\Gamma})}{J(\Gamma)}=2^{1-d}\operatorname{Pr}(\widetilde{\Gamma}/\Gamma),\] which by Definition 3.9 is the right-hand side of (7).
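As a quick sanity check of Theorem 3.3, the following short Python snippet evaluates the coefficient \(2^{1-d}4^{r(F)-1}\) of Equation (7) on the ogods and ranks listed in Example 3.5; the data are copied from the example rather than recomputed from the graph.

```python
# Ogods of Example 3.5 with their ranks r(F); the dilation subgraph of that
# double cover has d = 2 connected components.  Theorem 3.3 assigns each ogod
# the coefficient 2^(1-d) * 4^(r(F)-1), which should reproduce 8, 2, 32 and 8.
ogods = {("e1", "e3", "e4"): 3,
         ("e1", "e3", "e5"): 2,
         ("e2", "e3", "e4"): 4,
         ("e2", "e3", "e5"): 3}
d = 2
for F, r in ogods.items():
    print("{" + ", ".join(F) + "}", "->", 2 ** (1 - d) * 4 ** (r - 1))
# {e1, e3, e4} -> 8.0   {e1, e3, e5} -> 2.0
# {e2, e3, e4} -> 32.0  {e2, e3, e5} -> 8.0
```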
2307.00722
Implications for the Supermassive Black Hole Binaries from the NANOGrav 15-year Data Set
NANOGrav, EPTA, PPTA, and CPTA have announced the evidence for a stochastic signal from their latest data sets. Supermassive black hole binaries (SMBHBs) are supposed to be the most promising gravitational-wave (GW) sources of pulsar timing arrays. Assuming an astro-informed formation model, we use the NANOGrav 15-year data set to constrain the gravitational wave background (GWB) from SMBHBs. Our results prefer a large turn-over eccentricity of the SMBHB orbit when GWs begin to dominate the SMBHBs evolution. Furthermore, the GWB spectrum is extrapolated to the space-borne GW detector frequency band by including inspiral-merge-cutoff phases of SMBHBs and should be detected by LISA, Taiji and TianQin in the near future.
Yan-Chen Bi, Yu-Mei Wu, Zu-Cheng Chen, Qing-Guo Huang
2023-07-03T03:01:11Z
http://arxiv.org/abs/2307.00722v2
# Implications for the Supermassive Black Hole Binaries from the NANOGrav 15-year Data Set ###### Abstract NANOGrav, EPTA, PPTA, and CPTA have announced the evidence for a stochastic signal from their latest data sets. Supermassive black hole binaries (SMBHBs) are supposed to be the most promising gravitational-wave (GW) sources of pulsar timing arrays. Assuming an astro-informed formation model, we use the NANOGrav 15-year data set to constrain the gravitational wave background (GWB) from SMBHBs. Our results prefer a large turn-over eccentricity of the SMBHB orbit when GWs begin to dominate the SMBHBs evolution. Furthermore, the GWB spectrum is extrapolated to the space-borne GW detector frequency band by including inspiral-merge-cutoff phases of SMBHBs and should be detected by LISA, Taiji and TianQin in the near future. _Introduction._ Supermassive black holes (SMBHs), with masses from \(10^{5}M_{\odot}\) to \(10^{11}M_{\odot}\), are thought to reside in the centers of nearly all galaxies [1; 2]. The details of their formation, evolution and interaction with host galaxies need to be clarified. In the scenario of galaxies coalescence, SMBHBs hosted in their nuclei sink to the center of the remnant due to the interaction with the surrounding ambient and eventually form bound supermassive black hole binaries (SMBHBs) [3]. The SMBHBs subsequently harden because of dynamical interaction with the dense background [4] until the gravitational wave (GW) takes over at sub-parsec separation, forming an abundant population GW sources in around nHz frequencies [5]. Moreover, the interaction with the stellar environment and scattering of ambient stars potentially attenuate the GW spectrum at the lower end of PTA ranges and tend to increase the binary eccentricity [6; 7; 8; 9; 10; 11; 12; 13], which leaves imprints on the GW emission. Therefore, detecting or constraining such signals will help to reveal the nature and properties of SMBHBs (for a review see [7]). Since galaxies are observed to merge quite frequently [14; 15] and the observable Universe encompasses several billions of them, a largish cosmological population of SMBHBs is expected to produce the stochastic gravitational-wave background (SGWB) [16]. The SGWB is going to be captured by pulsar timing arrays (PTAs) [17]. SGWBs affect the time of arrivals (TOAs) of radio pulsar from millisecond pulsars (MSPs) in a correlated way [18]. PTAs search for SGWB using the Hellings & Downs curve. Besides detecting the SGWB at nano-Hz frequency, the final goal of PTAs is to extract useful astrophysical information from their data. Recently, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav [19; 20; 21]), the European Pulsar Timing Array (EPTA [22; 23]), the Parkes Pulsar Timing Array (PPTA [24; 25]), and the Chinese Pulsar Timing Array (CPTA [26; 27]) have announced the evidence for a stochastic signal consistent with a SGWB, brings us a great opportunity to prove the imprints of SMBHBs [28; 29; 30]. SMBHBs are also one of the most promising GW sources for space-borne GW detectors, such as LISA [31], Taiji [32] and TianQin [33], in the frequency band around \(10^{-4}\sim 10^{-1}\) Hz. However, the expected event rate of SMBHB sources in space-borne detectors is still unknown. During the SMBHBs evolution, the inspiral phase falls into the range of PTA while the merger phase in the sensitive interval of space-borne GW observatories [5; 31]. 
This insight indicates that the observation of PTA and space-borne GW detector are complementary [34]. With the joint observation, the complete description of SMBHBs will be well constrained. In this letter, we use the NANOGrav 15-year data set to constrain the GWB from SMBHBs, and find that the SMBHBs orbits have a large eccentricity when GWs begin to dominate the SMBHBs evolution. In addition, we extrapolate the GWB spectrum from \(10^{-9}\) Hz to \(10^{-1}\) Hz (see Fig. 1), providing a promising GW source for the space-borne GW detectors. _SGWB from SMBHB._ SGWB from SMBHBs is the most promising target of PTA. In general SMBHB is supposed to form following the merge of two galaxies. After galaxies merging, their central SMBHs sink into the center of the merger remnant and form a bound binary. Initially the binary orbit shrinks due to the energy and angular momentum exchanging with surrounding stars and cold gas. Then GW radiation will dominate the evolution at turn-over frequency \(f_{t}\) (corresponding to initial eccentricity \(e_{0}\)), bringing the binary to final coalescence [35]. The spectrum of SGWB is composed from the sum of all SMBHBs emitting at a given observed frequency \(f\). The present-day energy density of SGWB \(\Omega_{\rm GW}(f)\) are given by [36] \[\Omega_{\rm GW}(f)=\frac{8\pi Gf}{3H_{0}^{2}c^{2}}\int dzd\mathcal{M}\frac{d^{2 }n}{dzd\mathcal{M}}\frac{dE_{\rm GW}}{df_{r}}\, \tag{1}\] where \(H_{0}=67.4\ {\rm km\ sec^{-1}\ Mpc^{-1}}\)[37] is the Hubble constant, \(f_{r}=(1+z)f\) is the source-frame GW frequency. \(\mathcal{M}=Mq^{3/5}/(1+q)^{1/5}\) is the chirp mass of SMBHB, where \(M\) is the primary SMBH mass and \(q\) is the SMBHB mass ratio. Here, \(d^{2}n/dzd\mathcal{M}\) is the SMBHB population and \(dE_{\rm GW}/df_{r}\) is the energy spectrum of single SMBHB. The relevant ranges in the integrals used here are \(0\leq z\leq 5\) and \(10^{5}M_{\odot}\leq\mathcal{M}\leq 10^{11}M_{\odot}\). The SMBHB population \(d^{2}n/dzd\mathcal{M}\) in Eq. (1) are estimated from astrophysical observation. It is widely acknowledged that nearly every galaxy has a SMBH at their centers. This relation can be used as an astrophysical model to describe the merger rate. In this astro-informed formation model, the astronomical surveys serve relatively strong constrains naturally and SMBHB population is allowed to interact with the environment. The population can be expressed as [38] \[\frac{d^{2}n}{dz^{\prime}d\mathcal{M}}=\int\frac{d^{3}n_{G}}{dz^{\prime}dM_{G} dq_{G}}\frac{dM_{G}}{dM}\frac{dq_{G}}{dq}\frac{dM}{d\mathcal{M}}dq\, \tag{2}\] where \(M_{G}\) is the primary galaxy mass and \(q_{G}\) is the mass ratio between the two paired galaxy, \(q=q_{G}^{\alpha_{*}}\) is the SMBHB mass ratio mentioned before. The relations used for transforming the galaxy mass \(M_{G}\) into the primary black hole mass \(M\) are given as [7; 39] \[\frac{M_{\rm bulge}}{M_{G}}=\left\{\begin{array}{ll}\frac{\sqrt{6.9}}{(\log M _{G}-10)^{1.5}}\exp\left(\frac{-3.45}{\log M_{G}-10}\right)-0.615&\quad{\rm if }\log M_{G}\geq 10\\ 0.615&\quad{\rm if}\log M_{G}\leq 10\end{array}\right. \tag{3}\] Figure 1: The expected SGWB spectrum \(\Omega_{\rm GW}(f)\) from the astrophysical SMBHBs in the frequency range \([10^{-10},10^{-1}]\)Hz. The light-blue violin plots show the posterior of the first 14-frequency bins for the NANOGrav 15-yr data set. 
The green line shows the medium spectrum obtained from the NANOGrav analysis based on the astro-informed model, and the light-green shaded region indicates the 90% credible interval. We also show the expected power-law integrated (PI) curves of LISA, TianQin and Taiji which represent their detection abilities with solid, dashed, dash-dotted gray lines, respectively. \[M=\mathcal{N}\left\{M_{*}\left(\frac{M_{\rm bulge}}{10^{11}M_{\odot}}\right), \epsilon\right\}\, \tag{4}\] where \(\mathcal{N}\{x,y\}\) is a log normal distribution with mean \(x\) and standard deviation \(y\). \(\{M_{*},\alpha_{*},\epsilon\}\) are the model parameters for translation. The galaxy differential merger rate per unit redshift, galaxy mass and mass ratio \(d^{3}n_{G}/dz^{\prime}dM_{G}dq_{G}\) can be written as [38] \[\frac{d^{3}n_{G}}{dz^{\prime}dM_{G}dq_{G}}=\frac{\Phi(M_{G},z)}{M_{G}\ln 10} \frac{\mathcal{F}(M_{G},z,q_{G})}{\tau(M_{G},z,q_{G})}\frac{dt_{r}}{dz}. \tag{5}\] where \(z\) stands for the redshift of galaxy pair and \(z^{\prime}\) is the redshift at galaxy pair merging. The quantitative relation between \(z\) and \(z^{\prime}\) is addressed in [5]. \(dt_{r}/dz\) is the relationship between time and redshift assuming a \(\Lambda\)CDM flat Universe as follow \[\frac{dt_{r}}{dz}=\frac{1}{H_{0}(1+z)\sqrt{(\Omega_{M}(1+z)^{3}+ \Omega_{k}(1+z)^{2}+\Omega_{\Lambda})}}. \tag{6}\] \(\Phi(M_{G},z)=dn_{G}/d\log_{10}M_{G}\) is the galaxy mass function (GSMF) measured at redshift of galaxy pair \(z\), the explicit expression is [40; 41] \[\Phi(M_{G},z)=10^{\Phi(z)}\ln 10\left(\frac{M_{G}}{M_{G0}}\right)^{1+ \alpha(z)}\exp{\left(-\frac{M_{G}}{M_{\rm G0}}\right)}, \tag{7}\] where the parameters are \(\Phi(z)=\Phi_{0}+z\Phi_{I},\alpha(z)=\alpha_{0}+z\alpha_{I}\). \(\{\Phi_{0},\Phi_{I},\alpha_{0},\alpha_{I},M_{G0}\}\) are the five model parameters for GSMF. \(\mathcal{F}(M_{G},z,q_{G})\) is the differential pair fraction with respect to the mass ratio of galaxy pair \(q_{G}\) and is written as [5] \[\mathcal{F}(M_{G},z,q_{G})=\frac{df_{\rm pair}}{dq_{G}}=f_{0}^{ \prime}\left(\frac{M_{G}}{aM_{G0}}\right)^{\alpha_{f}}(1+z)^{\beta_{f}}\,q_{G }^{\gamma_{f}}. \tag{8}\] where \(aM_{G0}=10^{11}M_{\odot}\) is an arbitrary reference mass [5]. \(\{f_{0}^{\prime},\alpha_{f},\beta_{f},\gamma_{f}\}\) are the four model parameters for pair function. \(\tau(M_{G},z,q_{G})\) is the merger timescale of the pair galaxy and can be expressed as [42] \[\tau(M_{G},z,q_{G})=\tau_{0}\left(\frac{M_{G}}{bM_{G0}}\right)^{ \alpha_{\tau}}(1+z)^{\beta_{\tau}}\,q_{G}^{\gamma_{\tau}}. \tag{9}\] where \(bM_{G0}=0.4/h_{0}\times 10^{11}M_{\odot}\) is an arbitrary reference mass [5]. \(\{\tau_{0},\alpha_{\tau},\beta_{\tau},\gamma_{\tau}\}\) are the four model parameters for merger timescale. To sum up, the present-day energy density \(\Omega_{\rm GW}(f)\) of SGWB from SMBHB in galaxy model is therefore fully specified by a set of eighteen model parameters: \(\{\Phi_{0},\Phi_{I},M_{G0},\alpha_{0},\alpha_{I}\}\) for the GSMF, \(\{f_{0}^{\prime},\alpha_{f},\beta_{f},\gamma_{f}\}\) for the pair fraction, \(\{\tau_{0},\alpha_{\tau},\beta_{\tau},\gamma_{\tau}\}\) for the merger timescale, \(\{M_{*},\alpha_{*},\epsilon\}\) for galaxy-SMBH transforming relation and \(\{e_{0},\zeta_{0}\}\) mentioned below for the SMBHB energy spectrum. The detailed descriptions of parameters are addressed in Table.3. The energy spectrum of single SMBHB \(dE_{\rm GW}/df_{r}\) are calculated using its self-similarity. 
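Before describing that construction in detail, the following Python sketch shows how the ingredients of Eqs. (5)-(9) assemble into the differential galaxy merger rate. The numerical parameter values below are placeholders chosen only for illustration (they are not the fitted posteriors of Table 3), and the unit bookkeeping (for example the conversion of \(H_{0}\) to \(\mathrm{Gyr}^{-1}\)) is schematic.

```python
import numpy as np

# Placeholder model parameters (illustrative only, not the fitted values).
Phi0, PhiI = -2.6, -0.45                  # GSMF normalisation, Eq. (7)
alpha0, alphaI = -1.25, -0.1
logMG0 = 11.25                            # log10 of M_G0 in solar masses
f0p, alpha_f, beta_f, gamma_f = 0.02, 0.1, 0.8, 0.0           # pair fraction, Eq. (8)
tau0, alpha_tau, beta_tau, gamma_tau = 1.0, 0.0, -0.5, 0.0    # merger time [Gyr], Eq. (9)

H0 = 67.4 * 1.02e-3                       # km/s/Mpc -> 1/Gyr (approximate conversion)
Om_M, Om_L, Om_k = 0.315, 0.685, 0.0

def gsmf(MG, z):
    """Galaxy stellar mass function Phi(M_G, z) of Eq. (7)."""
    Phi_z = 10 ** (Phi0 + z * PhiI)
    alpha_z = alpha0 + z * alphaI
    MG0 = 10 ** logMG0
    return Phi_z * np.log(10) * (MG / MG0) ** (1 + alpha_z) * np.exp(-MG / MG0)

def pair_fraction(MG, z, qG):
    """Differential pair fraction dF/dq_G of Eq. (8), with a*M_G0 = 1e11 Msun."""
    return f0p * (MG / 1e11) ** alpha_f * (1 + z) ** beta_f * qG ** gamma_f

def merger_timescale(MG, z, qG, h0=0.674):
    """Merger timescale of Eq. (9) in Gyr, with b*M_G0 = 0.4/h0 * 1e11 Msun."""
    return tau0 * (MG / (0.4 / h0 * 1e11)) ** alpha_tau * (1 + z) ** beta_tau * qG ** gamma_tau

def dt_dz(z):
    """dt_r/dz of Eq. (6) in Gyr for a flat LambdaCDM background."""
    E = np.sqrt(Om_M * (1 + z) ** 3 + Om_k * (1 + z) ** 2 + Om_L)
    return 1.0 / (H0 * (1 + z) * E)

def galaxy_merger_rate(MG, z, qG):
    """Assembled integrand of Eq. (5): d^3 n_G / dz' dM_G dq_G (schematic units)."""
    return gsmf(MG, z) / (MG * np.log(10)) * pair_fraction(MG, z, qG) \
           / merger_timescale(MG, z, qG) * dt_dz(z)
```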
Thanks to this self-similarity, the spectrum in any configuration can be obtained from a reference spectrum by shift and re-scaling. The fiducial redshift and chirp mass of the reference spectrum are set to \(z_{0}=0.02\) and \(\mathcal{M}_{0}=4.16\times 10^{8}M_{\odot}\), respectively. The basic idea of the shift and re-scaling can be found in [10]. Past calculations of the SGWB spectrum from SMBHBs do not consider the full spectrum; in most scenarios, only the circular inspiral of SMBHBs is taken into account. In this letter, the inspiral-merger-cutoff phases are joined smoothly following the methods proposed by [44; 45; 46; 10]. We express the complete description of the energy spectrum \(dE_{\rm GW}/df_{r}\) as follows \[\frac{dE_{\rm GW}}{df_{r}}(f_{r}<\nu_{1})=\frac{\pi c^{2}f}{4G}h_{\rm c,fit}^{2}\left(f\frac{f_{\rm p,0}}{f_{\rm p,t}}\right)\left(\frac{f_{\rm p,0}}{f_{\rm p,t}}\right)^{-\frac{4}{3}}\left(\frac{\mathcal{M}}{\mathcal{M}_{0}}\right)^{\frac{5}{3}}\left(\frac{1+z}{1+z_{0}}\right)^{-\frac{1}{3}} \tag{10}\] \[\frac{dE_{\rm GW}}{df_{r}}(f_{r}\in[\nu_{1},\nu_{2}))=\frac{(G\pi)^{2/3}\mathcal{M}^{5/3}}{3}\omega_{1}f_{r}^{2/3} \tag{11}\] \[\frac{dE_{\rm GW}}{df_{r}}(f_{r}\in[\nu_{2},\nu_{3}))=\frac{(G\pi)^{2/3}\mathcal{M}^{5/3}}{3}\omega_{2}\left[\frac{f_{r}}{1+\left(\frac{f_{r}-\nu_{2}}{\sigma/2}\right)^{2}}\right]^{2} \tag{12}\] where \(\omega_{1}=\nu_{1}^{-1}\), \(\omega_{2}=\nu_{1}^{-1}\nu_{2}^{-4/3}\). Each of the parameters \((\nu_{1},\nu_{2},\nu_{3},\sigma)\) is determined by two physical parameters, the total mass \(M_{\rm total}=M(1+q)\) and the symmetric mass ratio \(\eta=qM^{2}/M_{\rm total}^{2}\), through the expression \((a_{k}\eta^{2}+b_{k}\eta+c_{k})/(\pi GM_{\rm total}/c^{3})\), with the coefficients \(a_{k},b_{k},c_{k}\) listed in Table 1. \begin{table} \begin{tabular}{c c c c} Parameter & \(a_{k}\) & \(b_{k}\) & \(c_{k}\) \\ \hline \(\nu_{1}\) & \(2.9740\times 10^{-1}\) & \(4.4810\times 10^{-2}\) & \(9.5560\times 10^{-2}\) \\ \(\nu_{2}\) & \(5.9411\times 10^{-1}\) & \(8.9794\times 10^{-2}\) & \(1.9111\times 10^{-1}\) \\ \(\nu_{3}\) & \(8.4845\times 10^{-1}\) & \(1.2848\times 10^{-1}\) & \(2.7299\times 10^{-1}\) \\ \(\sigma_{\nu}\) & \(5.0801\times 10^{-1}\) & \(7.7515\times 10^{-2}\) & \(2.2369\times 10^{-2}\) \\ \end{tabular} \end{table} Table 1: Parameters used in Eq. (10), Eq. (11) and Eq. (12) for a complete description of the energy spectrum [43]. The ratio \(f_{\rm p,0}/f_{\rm p,t}\) entering the shift is given by [10] \[\frac{f_{\rm p,0}}{f_{\rm p,t}}=\frac{f_{0}}{f_{t}}\left[\left(\frac{e_{\rm ref}}{e_{0}}\right)^{\frac{12}{19}}\frac{1-e_{0}^{2}}{1-e_{\rm ref}^{2}}\left(\frac{304+121e_{\rm ref}^{2}}{304+121e_{0}^{2}}\right)^{\frac{32}{2299}}\right]^{\frac{3}{2}}\, \tag{13}\] where \(f_{0}=10^{-10}\,\)Hz and \(e_{\rm ref}=0.9\) are the reference frequency and eccentricity, respectively, and \(e_{0}\) is the initial eccentricity we choose. Once \(e_{0}\) is selected, the turn-over frequency \(f_{t}\) is obtained from \[f_{t}=0.356\,\text{nHz}\left(\frac{\rho_{i,100}}{F(e)\sigma_{200}}\zeta_{0}\right)^{\frac{3}{10}}\mathcal{M}_{9}^{-\frac{2}{6}}\, \tag{14}\] where \[F(e)=\frac{1+(73/24)e^{2}+(37/96)e^{4}}{(1-e^{2})^{7/2}}\, \tag{15}\] \(\mathcal{M}_{9}=\mathcal{M}/(10^{9}M_{\odot})\) is the rescaled chirp mass, \(\rho_{i,100}=\rho_{i}/(100M_{\odot}\ \text{pc}^{-3})\) is the rescaled density of stars in the galaxy, \(\sigma_{200}\) is the stellar velocity dispersion in units of \(200\ \mathrm{km\,s^{-1}}\), and \(\zeta_{0}\) describes the density of the stellar environment.
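As a small illustration before introducing the analytic fit below, Eqs. (14)-(15) translate directly into code; the environment parameters are placeholders and the exponents are kept exactly as printed above.

```python
def F_ecc(e):
    """Eccentricity enhancement factor F(e) of Eq. (15)."""
    return (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5

def f_turnover_nHz(chirp_mass_Msun, e0, rho_i_100=1.0, sigma_200=1.0, zeta0=1.0):
    """Turn-over frequency of Eq. (14) in nHz, transcribed as printed.
    rho_i_100, sigma_200 and zeta0 are the rescaled environment parameters."""
    M9 = chirp_mass_Msun / 1e9
    return 0.356 * (zeta0 * rho_i_100 / (F_ecc(e0) * sigma_200)) ** 0.3 * M9 ** (-2 / 6)

# Example: a 1e9 Msun chirp-mass binary entering the GW-driven phase at e0 = 0.9.
# print(f_turnover_nHz(1e9, 0.9))
```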
\(h_{\rm c,fit}^{2}\) is the analytical fitting function of reference spectrum and takes the form as [10] \[h_{\rm c,fit}=a_{0}\bar{f}^{a_{1}}e^{-a_{2}\bar{f}}+b_{0}\bar{f}^{b_{1}}e^{-b_{ 2}\bar{f}}+c_{0}\bar{f}^{-c_{1}}e^{-c_{2}/\bar{f}}\, \tag{16}\] where \(a_{i},b_{i},c_{i}\), listed in Table.2, are constants determined by fitting and \(\bar{f}=f/(10^{-8})\)Hz. _Data and results._ The NANOGrav collaboration has performed an analysis on the 15-yr data set by employing a free spectrum that enables independent variations in the amplitude of the GW spectrum across different frequency bins. In the analyses, we use the posterior data from NANOGrav [20; 21], and the PTMCMCSampler[47] package to perform the Markov Chain Monte Carlo sampling. All of the parameters and their prior distributions are listed in Table 3. We note that these constrained prior distributions are based on those presented in [5] and are derived from observational and theoretical works on the measurement of the GSMF, galaxy pair fraction, merger timescale and SMBH-host galaxy scaling relations. The resulting posterior distribution is illustrated in Fig. 2. Our analysis reveals that the detected stochastic signal provides some new insights into the differential pair fraction \(\mathcal{F}(M_{G},z,q_{G})\), merger timescale \(\tau(M_{G},z,q_{G})\), galaxy-SMBH mass scaling \(M_{G}-M_{\rm bulge}-M\) relation, and the initial states of the SMBHBs when GW emission takes over, such as the eccentricity and the transition frequency \(f_{t}\), compared to other astrophysical observations. Specifically, the preference for a higher value of the parameter \(f_{0}\) and the positive-skewed parameters \(\alpha_{f}\) and \(\beta_{f}\) suggest larger differential pair fractions in more massive galaxies, while the preference for a lower value of the parameter \(\tau_{0}\) and the negative-skewed parameters \(\alpha_{\tau}\) and \(\beta_{\tau}\) indicate shorter merger timescales in more massive galaxies, and the preference for a higher value of \(M_{*}\) corresponds to a higher normalization between the galaxy bulge mass \(M_{\rm bulge}\) and SMBH mass \(M\). The above parameters entirely contribute to the observed relatively high amplitude of the SGWB spectrum (\(\Omega_{\rm GW}=0.93^{+1.17}_{-0.41}\times 10^{-8}\) at yr\({}^{-1}\)). Note that the posterior distributions of these parameters are very similar to those reported in [48], where the NANOGrav 12.5-yr data set was used. This is because the spectrum amplitudes in both data sets are statistically consistent. On the other hand, the parameters \(e_{0}\) and \(\zeta_{0}\), which determine the shape of the SGWB spectrum, display sharp contrasts in the posteriors obtained from the two data sets. For the NANOGrav 15-yr data set, the distribution of \(e_{0}\) indicates that SMBHs exhibit a large initial eccentricity when transitioning into the GW-emission dominated process, while the larger value of the parameter \(\zeta_{0}\) implies that massive galaxies have, on average, higher densities than what is suggested by a standard Dehnen profile [49]. _Implication for GWs in the space-borne detector frequency band._ In the PTA frequency band, SMBHBs are in the inspiral phase, and their radiation power can be calculated using Eq. (10). After a prolonged period of mutual inspiral, these binaries gradually transition to more circular orbits and enter the merge and ring-down phase, characterized by GW radiation described by Eq. (11) and Eq. (12). 
Some of these black holes undergo merger and final coalescence at higher frequencies, entering the space-borne GW detector frequency band. Now we can deduce the properties of the supermassive black hole binary population from the PTA results, and further combine Eq. (1) with Eqs. (10)-(12) to obtain the complete SGWB spectrum generated by SMBHBs spanning both the PTA and LISA/Taiji/TianQin frequency bands, as depicted in Fig. 1. We need to emphasize that the general consensus for the GW detection of the cosmic history of SMBHBs is that PTAs primarily detect the SGWB from the ensemble of the SMBHB population, while LISA/Taiji/TianQin is expected to detect the final coalescence stage of individual systems. However, during the initial stages of detector operation, we cannot directly resolve individual sources, and it is reasonable to consider these sources as constituting a SGWB. In fact, as depicted in Fig. 1, the spectrum of the SGWB is sufficiently strong that it is very likely to be detected very soon once the detectors are in operation. The relation between the merger rate and the SMBHB chirp mass is depicted in Fig. 3. The merger rate of SMBHBs with chirp masses in the range \(10^{5}-10^{11}M_{\odot}\) is \(0.049\,\text{yr}^{-1}\lesssim\mathcal{R}\lesssim 30.58\,\text{yr}^{-1}\). \begin{table} \begin{tabular}{c c c c} \hline \hline parameter & \(i=0\) & \(i=1\) & \(i=2\) \\ \hline \(a_{i}\) & \(7.27\times 10^{-14}\) & \(0.254\) & \(0.807\) \\ \(b_{i}\) & \(1.853\times 10^{-12}\) & \(1.77\) & \(3.7\) \\ \(c_{i}\) & \(1.12\times 10^{-13}\) & \(0.676\) & \(0.6\) \\ \hline \hline \end{tabular} \end{table} Table 2: Constant parameters in the analytic functions of Eq. (16). These parameters are obtained from fitting functions to the reference spectrum [10]. _Summary._ In this letter, we use the NANOGrav 15-year data set to constrain the SGWB from SMBHBs, implying that the SMBHBs tend to have a large initial eccentricity in the transition phase between interaction domination and GW domination. The SGWB spectrum from SMBHBs is extrapolated from the PTA frequency band to the space-borne GW detector frequency band. Our results indicate that such a SGWB from SMBHBs should also be detected by LISA/Taiji/TianQin in the near future. _Acknowledgements._ We acknowledge the use of HPC Cluster of ITP-CAS. QGH is supported by the grants from NSFC (Grant No. 12250010, 11975019, 11991052, 12047503), Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7009, CAS Project for Young Scientists in Basic Research YSBR-006, the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB15). ZCC is supported by the National Natural Science Foundation of China (Grant No. 12247176 and No. 12247112) and the China Postdoctoral Science Foundation Fellowship No. 2022M710429.
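As a closing illustration, the piecewise single-binary energy spectrum of Eqs. (10)-(12), together with the fitted strain of Eq. (16) and the constants of Tables 1 and 2, can be sketched in code as follows. The SI unit handling, the glossing over of the observed versus rest-frame frequency in the inspiral branch, and the externally supplied shift ratio \(f_{\rm p,0}/f_{\rm p,t}\) are simplifying assumptions of this sketch, not part of the analysis above.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8            # SI units
MSUN = 1.989e30                      # kg

# Table 1: nu_k = (a_k eta^2 + b_k eta + c_k) / (pi G M_total / c^3)
NU_COEFFS = {"nu1": (2.9740e-1, 4.4810e-2, 9.5560e-2),
             "nu2": (5.9411e-1, 8.9794e-2, 1.9111e-1),
             "nu3": (8.4845e-1, 1.2848e-1, 2.7299e-1),
             "sigma": (5.0801e-1, 7.7515e-2, 2.2369e-2)}

# Table 2: constants of the fitted characteristic strain of Eq. (16)
A0, A1, A2 = 7.27e-14, 0.254, 0.807
B0, B1, B2 = 1.853e-12, 1.77, 3.7
C0, C1, C2 = 1.12e-13, 0.676, 0.6

def h_c_fit(f):
    """Analytic fit of Eq. (16); fbar = f / 1e-8 Hz."""
    fb = f / 1e-8
    return A0 * fb**A1 * np.exp(-A2 * fb) + B0 * fb**B1 * np.exp(-B2 * fb) \
           + C0 * fb**(-C1) * np.exp(-C2 / fb)

def boundaries(M_total_kg, eta):
    """Transition frequencies nu_1, nu_2, nu_3 and width sigma (Hz) from Table 1."""
    tM = np.pi * G * M_total_kg / c**3
    return {k: (a * eta**2 + b * eta + cc) / tM for k, (a, b, cc) in NU_COEFFS.items()}

def dE_df(fr, M_chirp_kg, M_total_kg, eta, shift_ratio, M0_kg, z, z0=0.02):
    """Piecewise dE_GW/df_r of Eqs. (10)-(12); shift_ratio stands for f_p0/f_pt."""
    nu = boundaries(M_total_kg, eta)
    if fr < nu["nu1"]:                                   # eccentric inspiral, Eq. (10)
        return (np.pi * c**2 / (4 * G)) * fr * h_c_fit(fr * shift_ratio)**2 \
               * shift_ratio**(-4 / 3) * (M_chirp_kg / M0_kg)**(5 / 3) \
               * ((1 + z) / (1 + z0))**(-1 / 3)
    pref = (G * np.pi)**(2 / 3) * M_chirp_kg**(5 / 3) / 3
    if fr < nu["nu2"]:                                   # merger, Eq. (11)
        return pref * fr**(2 / 3) / nu["nu1"]
    if fr < nu["nu3"]:                                   # ring-down, Eq. (12)
        w2 = 1 / (nu["nu1"] * nu["nu2"]**(4 / 3))
        lorentz = fr / (1 + ((fr - nu["nu2"]) / (nu["sigma"] / 2))**2)
        return pref * w2 * lorentz**2
    return 0.0                                           # above the cutoff nu_3

# Example call (all values illustrative):
# dE_df(1e-8, 0.87e9 * MSUN, 2e9 * MSUN, 0.25, shift_ratio=1.0,
#       M0_kg=4.16e8 * MSUN, z=0.5)
```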
2306.17177
Leveraging ChatGPT As Text Annotation Tool For Sentiment Analysis
Sentiment analysis is a well-known natural language processing task that involves identifying the emotional tone or polarity of a given piece of text. With the growth of social media and other online platforms, sentiment analysis has become increasingly crucial for businesses and organizations seeking to monitor and comprehend customer feedback as well as opinions. Supervised learning algorithms have been popularly employed for this task, but they require human-annotated text to create the classifier. To overcome this challenge, lexicon-based tools have been used. A drawback of lexicon-based algorithms is their reliance on pre-defined sentiment lexicons, which may not capture the full range of sentiments in natural language. ChatGPT is a new product of OpenAI and has emerged as the most popular AI product. It can answer questions on various topics and tasks. This study explores the use of ChatGPT as a tool for data labeling for different sentiment analysis tasks. It is evaluated on two distinct sentiment analysis datasets with varying purposes. The results demonstrate that ChatGPT outperforms other lexicon-based unsupervised methods with significant improvements in overall accuracy. Specifically, compared to the best-performing lexical-based algorithms, ChatGPT achieves a remarkable increase in accuracy of 20% for the tweets dataset and approximately 25% for the Amazon reviews dataset. These findings highlight the exceptional performance of ChatGPT in sentiment analysis tasks, surpassing existing lexicon-based approaches by a significant margin. The evidence suggests it can be used for annotation on different sentiment analysis events and tasks.
Mohammad Belal, James She, Simon Wong
2023-06-18T12:20:42Z
http://arxiv.org/abs/2306.17177v1
# Leveraging ChatGPT As Text Annotation Tool For Sentiment Analysis ###### Abstract Sentiment analysis is a well-known natural language processing task that involves identifying the emotional tone or polarity of a given piece of text. With the growth of social media and other online platforms, sentiment analysis has become increasingly crucial for businesses and organizations seeking to monitor and comprehend customer feedback as well as opinions. Supervised learning algorithms have been popularly employed for this task, but they require human-annotated text to create the classifier. To overcome this challenge, lexicon-based tools have been used. A drawback of lexicon-based algorithms is their reliance on pre-defined sentiment lexicons, which may not capture the full range of sentiments in natural language. ChatGPT is a new product of OpenAI and has emerged as the most popular AI product. It can answer questions on various topics and tasks. This study explores the use of ChatGPT as a tool for data labeling for different sentiment analysis tasks. It is evaluated on two distinct sentiment analysis datasets with varying purposes. The results demonstrate that ChatGPT outperforms other lexicon-based unsupervised methods with significant improvements in overall accuracy. Specifically, compared to the best-performing lexical-based algorithms, ChatGPT achieves a remarkable increase in accuracy of 20% for the tweets dataset and approximately 25% for the Amazon reviews dataset. These findings highlight the exceptional performance of ChatGPT in sentiment analysis tasks, surpassing existing lexicon-based approaches by a significant margin. The evidence suggests it can be used for annotation on different sentiment analysis events and tasks. Sentiment Analysis, Social Media, Deep Learning, Zero-shot learning ## 1 Introduction Social media platforms such as Twitter, Facebook, Youtube, and Tiktok have become essential tools for users to share their opinions and interact with others online. Twitter, in particular, is a popular micro-blogging platform where users post short but informative content such as text, emojis, and hashtags. These components provide valuable information for sentiment analysis and opinion mining to evaluate users' polarity towards specific topics or events. Sentiment analysis requires data labeling for the training of the classifiers. These data labels should be of high quality so that the classifier can be closer to the ground truth. One popular method for obtaining a gold-standard labeled dataset is to use crowd workers for annotation, for example through Amazon Mechanical Turk. Annotation is a time-consuming and costly process that can be a hindrance for researchers. Moreover, the quality of crowd-worker annotations has also been decreasing [1]. Recently, large language models have become increasingly popular for different NLP tasks. These language models have the zero-shot ability to perform different tasks [2][3]. ChatGPT is a new chatbot from OpenAI, built on GPT-3.5 and aligned with human preferences using the RLHF technique [4]. It is a powerful tool that can write code and poetry and hold meaningful conversations with people. It has shown its ability on various NLP tasks, including arithmetic reasoning, text classification, and question answering [5]. ChatGPT can also be used as a text annotator and has been shown to outperform crowd workers for data labeling [6]. 
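As a rough illustration of how such zero-shot annotation can be scripted against the gpt-3.5-turbo chat API evaluated later in this study, a hedged sketch follows; the prompt wording, the -1 to 1 scoring scheme, the label thresholds, and the placeholder API key are assumptions for illustration, not the exact prompt used in this paper.

```python
# Minimal sketch of zero-shot sentiment annotation via the gpt-3.5-turbo chat API.
# Prompt wording, scoring scheme, and label thresholds are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PROMPT = ("Rate the sentiment of the following text on a scale from -1 (negative) "
          "to 1 (positive). Reply with a single number only.\n\nText: {text}")

def chatgpt_label(text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        temperature=0,
    )
    # Extract the numerical score from the API response and map it to a label.
    score = float(response["choices"][0]["message"]["content"].strip())
    if score > 0.05:
        return "positive"
    if score < -0.05:
        return "negative"
    return "neutral"
```

Setting the temperature to zero reduces, but does not necessarily eliminate, the run-to-run variation in scores for the same prompt and text that is discussed later in this paper.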
Lexicon-based methods are commonly used to determine sentiment in texts due to their simplicity and speed compared to supervised learning approaches. VADER, SentiStrength, and TextBlob are popular lexicon-based algorithms. Among these, VADER, a rule-based algorithm, is efficient and outperforms other algorithms on social media data, especially Twitter. It also considers emojis when calculating the sentiment score. However, TextBlob's algorithm does not consider emojis when determining sentiment. \begin{table} \begin{tabular}{|p{14.2pt}|p{14.2pt}|p{14.2pt}|p{14.2pt}|p{14.2pt}|} \hline Text & VADER & TextBlob & ChatGPT & Ground Truth \\ \hline \#MUNLIV Fred!!!! & & & & \\ \hline Get in there!!!! \#GGMU & & & & \\ \hline Gini’s turned up, now if the rest of the team could, that’d be great \#MUNLIV & & & & \\ \hline This Apex seems to be a fine unit, except the sound quality on the DVD side is quite poor. Quality otherwise has been good, unlike some of the previous reviews. Have they improved these, or did I just get a good one? & & & & \\ \hline \end{tabular} \end{table} TABLE I: ChatGPT vs different lexicon-based algorithms The limitation of lexicon-based algorithms for sentiment analysis is their inability to handle sarcasm, irony, and other forms of figurative language. These algorithms rely on pre-defined lists of words and their associated sentiment scores, which can lead to inaccuracies when processing text that contains words with multiple meanings or words used in a non-literal sense. In addition, these algorithms often struggle with understanding the context and tone of the text, leading to misclassification of the sentiment. For instance, example 3 in Table I uses sarcastic language to criticize the rest of the team. Neither lexicon-based tool could identify the sarcasm, and both output the sentiment as positive, whereas ChatGPT understands the sarcasm and explains why it is negative. Our experiments show that ChatGPT can be used as a tool for sentiment analysis and that it has the ability to understand emojis, sarcasm, and irony. The main findings of this evaluation are as follows: 1. ChatGPT shows impressive performance on zero-shot sentiment analysis; 2. it incorporates emojis and sarcasm into its sentiment calculations; 3. it is more than 94% accurate on long-form sentiment reviews. Sections 1-6 cover the introduction, related works, problem formulation, experimental setup and results, discussion, and conclusion. ## 2 Related Works This section reviews prior work related to large language models and sentiment analysis techniques. **Large Language Models:** There has been an increase in the development of large language models, especially after the release of GPT-3 [7]. GPT-3 has been trained autoregressively on a large amount of internet data and has 175 billion parameters. These large language models [7][3][2] can perform a variety of tasks without much training. These models can perform much better after fine-tuning on a smaller dataset, and they are few-shot learners. OpenAI has recently launched ChatGPT, a conversational artificial intelligence system that has been fine-tuned from GPT-3.5 using reinforcement learning from human feedback (RLHF) [4]. Large language models can capture context, sarcasm, and other subtleties in sentiment expression when finetuned for that specific task. **Sentiment Analysis:** The objective of sentiment analysis is to recognize the viewpoints, feelings, and emotions conveyed in textual data, including customer feedback, social media updates, and news stories [8]. 
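For reference, a minimal sketch of how the lexicon-based baselines discussed above produce labels is given below; the +/-0.05 thresholds for mapping polarity scores to labels follow a common convention and are an assumption here rather than a documented setting of this study.

```python
# Minimal sketch of the lexicon-based baselines (VADER and TextBlob).
# The +/-0.05 label thresholds are a common convention, assumed here for illustration.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from textblob import TextBlob

vader = SentimentIntensityAnalyzer()

def label_from_score(score: float) -> str:
    if score > 0.05:
        return "positive"
    if score < -0.05:
        return "negative"
    return "neutral"

def vader_label(text: str) -> str:
    # VADER's compound score lies in [-1, 1] and accounts for emojis and punctuation.
    return label_from_score(vader.polarity_scores(text)["compound"])

def textblob_label(text: str) -> str:
    # TextBlob's polarity also lies in [-1, 1] but does not account for emojis.
    return label_from_score(TextBlob(text).sentiment.polarity)
```

Because both scores are derived from fixed word lists, sarcastic examples such as example 3 in Table I tend to be scored by their surface words and end up labeled positive.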
There have been multiple proposals for evaluating the sentiment of text, including the utilization of machine learning methods that can classify sentiment automatically. It has been demonstrated that both supervised and unsupervised learning approaches can be successful in this regard. Unlike traditional lexicon-based approaches, which rely on predefined sentiment lexicons, large language models can capture context, sarcasm, and other subtleties in sentiment expression. Yue et al. [9] conducted a study comparing the performance of supervised and unsupervised machine learning techniques for sentiment analysis. Their findings revealed that supervised methods generally exhibit superior accuracy compared to unsupervised approaches like lexicon-based algorithms. Nonetheless, acquiring sufficient labeled training data for supervised methods can be costly and time-consuming. ## 3 Problem Formulation Sentiment analysis is vital in natural language processing to identify the text's emotional tone or polarity. As social media and online platforms gain prominence, businesses and organizations encounter the need to monitor and comprehend customer feedback and opinions efficiently. Although traditional supervised learning algorithms have been commonly employed for sentiment analysis, their dependence on labeled training data poses cost and time constraints. Recently, there has been a notable increase in interest in language models such as ChatGPT, developed by OpenAI, for various natural language processing (NLP) tasks. ChatGPT, built on the powerful GPT-3.5 architecture and trained on extensive text data, has demonstrated exceptional proficiency in generating human-like responses and comprehending diverse subjects. With such impressive capabilities, it becomes imperative to explore the potential utility of ChatGPT as a valuable instrument for text annotation. This research aims to investigate the efficacy of ChatGPT as a text annotation tool for sentiment analysis tasks. Our primary objective is to assess the ability of ChatGPT to accurately detect and classify sentiment in various domains and text sources, including customer reviews, social media posts, and news articles. By harnessing the capabilities of ChatGPT's pre-trained language model, we seek to overcome the limitations associated with conventional supervised learning methods that heavily rely on extensive labeled data. To tackle this issue, a comprehensive set of experiments and evaluations will be conducted using established benchmark datasets tailored for sentiment analysis tasks. The performance of ChatGPT will be compared against other cutting-edge sentiment analysis techniques, encompassing lexicon-based approaches and human labeling. By employing robust evaluation metrics such as accuracy, precision, recall, and F1-score, the effectiveness of ChatGPT in capturing the intricate aspects of sentiment across diverse domains will be thoroughly assessed. ## 4 Experiments and Results This section describes the datasets, the experimental setup, including the benchmarks, and the techniques involved. ### Dataset For our experimentation, two datasets are used. The first one is from a soccer match between Manchester United and Liverpool. The dataset is extensive and comes from a previous study where two independent annotators manually labeled tweets [10]. This dataset contains 6,201 rows, of which 1,214 tweets contain emojis. 
Among the tweets, 2551 are positive, 2710 are negative, and 943 are neutral. The sports tweets dataset usually contains fewer words, and the use of emojis and abbreviations is common. The second dataset used for experimentation is from Amazon reviews. It contains long text describing Amazon products [11]. A random subset of 2000 examples is taken from the dataset, with 1002 belonging to the positive category and 998 to the negative category. The two datasets differ from each other in both the event covered and the length of an individual text. ### _Setup_ For comparison, two lexicon-based tools, VADER and TextBlob, are used. The evaluated ChatGPT variant is derived from GPT-3.5, specifically the gpt-3.5-turbo version, which is considered to be both highly capable and cost-effective. The prompt used has the same wording for both datasets. The numerical score has been extracted from the API response and hence used to label the sentiment. ### _Result_ #### 4.3.1 Results: Soccer Tweets Dataset. The accuracy of the VADER algorithm for the dataset was found to be 47%, and the tweets containing emojis have an accuracy of 46%. The accuracy of TextBlob's algorithm for the whole dataset is 40%, and the tweets containing emojis are 33% accurate. In contrast, ChatGPT is 67% accurate both for the overall tweets and for the tweets containing emojis. #### 4.3.2 Results: Amazon Reviews Dataset. The accuracy of the VADER algorithm for the Amazon reviews dataset was found to be 69%. The accuracy of TextBlob's algorithm for this dataset is 66%, whereas ChatGPT is 94% accurate for the text it classified. ChatGPT could not classify 58 reviews, as it termed them neutral or requested more information. ## 5 Discussion One of the main advantages of ChatGPT for text annotation is its adaptability to various annotation tasks and domains. In terms of limitations, our study revealed that the sentiment value of a text depends on the prompt used for analysis. Furthermore, we observed instances where different sentiment scores were produced for the same prompt and tweet. One of the challenges is the potential bias in the model's responses, as it learns from the vast amount of data available on the internet, which can contain biased or controversial content. Careful consideration and monitoring are necessary to ensure that the annotations produced by ChatGPT are fair, unbiased, and aligned with the intended annotation guidelines. Additionally, the computational speed of ChatGPT is relatively slow compared to some other sentiment analysis tools. The processing time and cost associated with running the model may pose challenges, particularly for organizations with limited computational resources or budget constraints. ## 6 Conclusion In this paper, we have examined the use of ChatGPT as a sentiment analysis tool. Sentiment analysis has become increasingly important in understanding customer feedback and opinions in today's digital world. While supervised learning algorithms have been popularly used for this task, they require human-annotated text to train the classifier, which can be time-consuming and expensive. To address this challenge, lexicon-based tools have been used, and in recent times, zero-shot models have gained popularity. This study evaluated ChatGPT as a tool for sentiment analysis labeling on two distinct datasets: sports tweets and Amazon reviews. It outperforms the lexicon-based tools by 20% and 25%, respectively, on these two datasets. ChatGPT shows promise as a tool for text labeling for various sentiment analysis events and tasks. 
As for future directions, there is a need to optimize and parallelize the computational tasks of ChatGPT to improve its speed and efficiency. Furthermore, ChatGPT's ability to detect sarcasm and irony could be examined in greater depth. ## Acknowledgments This work was initiated by the College of Science and Engineering at HBKU.
2304.06603
Accelerating WRF I/O Performance With ADIOS2 and Network-Based Streaming
With the approach of Exascale computing power for large-scale High Performance Computing (HPC) clusters, the gap between compute capabilities and storage systems is growing larger. This is particularly problematic for the Weather Research and Forecasting Model (WRF), a widely-used HPC application for high-resolution forecasting and research that produces sizable datasets, especially when analyzing transient weather phenomena. Despite this issue, the I/O modules within WRF have not been updated in the past ten years, resulting in subpar parallel I/O performance. This research paper demonstrates the positive impact of integrating ADIOS2, a next-generation parallel I/O framework, as a new I/O backend option in WRF. It goes into detail about the challenges encountered during the integration process and how they were addressed. The resulting I/O times show an over tenfold improvement when using ADIOS2 compared to traditional MPI-I/O based solutions. Furthermore, the study highlights the new features available to WRF users worldwide, such as the Sustainable Staging Transport (SST) enabling Unified Communication X (UCX) DataTransport, the node-local burst buffer write capabilities and in-line lossless compression capabilities of ADIOS2. Additionally, the research shows how ADIOS2's in-situ analysis capabilities can be smoothly integrated with a simple WRF forecasting pipeline, resulting in a significant improvement in overall time to solution. This study serves as a reminder to legacy HPC applications that incorporating modern libraries and tools can lead to considerable performance enhancements with minimal changes to the core application.
Erick Fredj, Yann Delorme, Sameeh Jubran, Mark Wasserman, Zhaohui Ding, Michael Laufer
2023-04-13T15:13:11Z
http://arxiv.org/abs/2304.06603v1
# Accelerating WRF I/O Performance With ADIOS2 and Network-Based Streaming ###### Abstract With the approach of Exascale computing power for large-scale High Performance Computing (HPC) clusters, the gap between compute capabilities and storage systems is growing larger. This is particularly problematic for the Weather Research and Forecasting Model (WRF), a widely-used HPC application for high-resolution forecasting and research that produces sizable datasets, especially when analyzing transient weather phenomena. Despite this issue, the I/O modules within WRF have not been updated in the past ten years, resulting in subpar parallel I/O performance. This research paper demonstrates the positive impact of integrating ADIOS2, a next-generation parallel I/O framework, as a new I/O backend option in WRF. It goes into detail about the challenges encountered during the integration process and how they were addressed. The resulting I/O times show an over tenfold improvement when using ADIOS2 compared to traditional MPI-I/O based solutions. Furthermore, the study highlights the new features available to WRF users worldwide, such as the Sustainable Staging Transport (SST) enabling Unified Communication X (UCX) DataTransport, the node-local burst buffer write capabilities and in-line lossless compression capabilities of ADIOS2. Additionally, the research shows how ADIOS2's in-situ analysis capabilities can be smoothly integrated with a simple WRF forecasting pipeline, resulting in a significant improvement in overall time to solution. This study serves as a reminder to legacy HPC applications that incorporating modern libraries and tools can lead to considerable performance enhancements with minimal changes to the core application. Data Storage, Sustainable Staging Transport (SST), High-Performance Computing (HPC), Message Passing Interface (MPI), Unified Communication X (UCX), Parallel I/O, RDMA Weather Research and Forecasting (WRF) ## I Introduction The use of High Performance Computing (HPC) in scientific applications is facing an increasing I/O challenge, leading to performance bottlenecks in simulation pipelines, such as weather forecasting models. To address this issue, this paper investigates the integration and application of ADIOS2, a high-performance I/O and data management library, with the widely used HPC application, WRF (Weather Research and Forecasting Model). ## II Related Work In the past, various studies have investigated I/O scaling and bottlenecks in WRF. For example, Kyle [1] and NCAR found that I/O time exceeded compute time at scale when running the Hurricane Maria 1km test case on the Cheyenne and Yellowstone supercomputers with more than 2000 compute cores. Balle et al. [2] demonstrated that write times increased as more nodes were added, with I/O time reaching 50% of the total run time when node counts reached about 500. However, the WRF Quilt Server was successfully used to reduce I/O time at the cost of computational resources. Finkenrath et al. [3] found that using the PnetCDF option resulted in a ten-fold speedup compared to the serial-based NetCDF, and running WRF in hybrid MPI+OpenMP mode greatly reduced I/O time. Another study applied the first version of ADIOS to the GRAPES mesoscale Numerical Weather Prediction application [4], achieving a ten-fold increase in I/O times compared to their MPI-I/O approach, but they did not examine in-situ pipelining capabilities. 
Singhal and Sussman [5] integrated a version of ADIOS into WRF, but it was not upstreamed to the WRF community and did not utilize ADIOS2's code coupling and transport capabilities. Several I/O middleware libraries, such as MPI-I/O [6], NetCDF [7], Parallel-NetCDF [8], and HDF5 [9], have been introduced, aiming to optimize I/O performance using parallel I/O for remote parallel file systems. However, these libraries have limitations and cannot directly take advantage of emerging high-speed node-local storage. The present study investigates the integration of the ADIOS2 library, which offers advanced features and can take advantage of emerging high-speed node-local storage. The paper is structured as follows: related works are discussed in Section II; the background of WRF and ADIOS2 is presented in Section III; implementation details and challenges of integrating a new I/O library into WRF are detailed in Section IV; results of a set of I/O performance evaluations comparing ADIOS2 to other available WRF I/O methods, as well as an example in-situ analysis pipeline, are discussed in Section V; and conclusions and next steps are presented in Section VI. ## III Background The following section details the background of both the WRF model and the ADIOS2 library. ### _Weather Research Forecasting Model_ #### Iii-A1 WRF Background WRF is a cutting-edge mesoscale Numerical Weather Prediction system that is designed for both atmospheric research and forecasting. It is an open-source project that has gained popularity worldwide and is officially supported by the National Center for Atmospheric Research (NCAR). The WRF software framework enables efficient and massively parallel computation across a wide range of computing platforms, making it a true community model. The model is based on the compressible, non-hydrostatic atmospheric motion equations that include multiple physics processes such as cloud and precipitation, boundary layer turbulence, land-ocean air interaction, radiative transfer, and energy transfer at the surface. Finite difference method is used to discretize these equations, and the resulting time-dependent atmospheric motion and physical states are computed through integration. Due to the large number of prognostic variables in the three dimensions, which is a result of the multiple physical processes involved in atmospheric motion, computational and storage resources must be high performance. WRF operates in two phases, the first of which involves configuring the model domain, preparing initial conditions, and ingesting input data, while the second runs the forecast model and outputs solution and checkpoint files. WRF is predominantly written in Fortran and can be built with a variety of compilers. It runs on platforms with UNIX-like operating systems, from laptops to supercomputers, and its software framework handles I/O and parallel-computing communications. #### Iii-A2 WRF I/O Backends WRF's well-defined I/O API provides several different implementations of its I/O layer, the ones relevant for the present work: * Serial NetCDF [7] (_io_form=2_): The default I/O option in the WRF model. When this I/O option is selected, all data is funneled through the first MPI rank, where this rank alone writes out a NetCDF4 based file using the NetCDF library (HDF5 based). While Rank 0 is writing to disk, all other ranks wait until the write has fully concluded before continuing computation. 
This method performs well at low process counts but at higher counts, the write can quickly dominate the computation time. One of the main advantages of this method is the ability to use lossless compression that is integrated within HDF5. This results in much smaller file sizes, achieving compression ratios close to 4. Still, due to the massive communication overhead, and single write thread, this option achieves poor I/O performance. * Split NetCDF (_io_form=102_): This option also uses the NetCDF library for I/O but instead of sending all data to the first MPI rank, each rank writes its own distinct file. As will be seen later in this work, this method is able to achieve very high throughput at moderate MPI rank counts due to the absence of communication costs, but this file-per-process method (_N-N_) does not scale to high counts due to the immense pressure to the underlying file system and metadata servers. Additionally, as this output method outputs multiple distinct files, the post processing is not trivial, especially when the rank of readers does not match the amount of files. To counter this, a community provided routine can stitch the output files together back into a single file, but this also incurs a non negligible time and resource cost, as well as additional complexity in post processing pipelines. * Parallel NetCDF [8] (_io_form=11_): WRF's primary parallel I/O option that utilizes MPI-I/O. When this method is employed, all MPI ranks cooperate to write a single output file in parallel using PnetCDF, which directly accesses MPI-I/O. As opposed to NetCDF4 based methods, this method does not allow for data compression. Even without compression capabilities, this option has been shown to offer an order of magnitude increase in write bandwidth compared to the Serial NetCDF method at scale, due to coordinated MPI-IO two-phase method [10]. As this is the primary parallel I/O option that allows for operation without requiring additional overhead from stitching multiple files together, or file format conversions, it is treated as the benchmark method, when comparing against the new ADIOS2 approach. * Quilt Servers: The quilt server technique uses dedicated I/O processes ("servers") that deal exclusively with I/O, enabling the compute processes to continue with their work without waiting for data to be written to disk before proceeding. Data from multiple compute ranks are merged ("quilted") together by a dedicated I/O rank by means of MPI communication calls and kept in system memory until they are written to PFS. This was previously found to be high performing I/O option available in WRF, even though compute resources are sacrificed and memory usage can be exceedingly high. This option is not investigated in this work, but should be investigated in future works. ### _Adios2_ ADIOS2, developed as the successor to the Adaptable Input Output System (ADIOS) by Lofstead et al. [11], is a highly flexible library that allows users to configure different I/O techniques, file formats, and transports through a simple XML file. This adaptability makes it a suitable option for use across different scales, ranging from laptops to supercomputers. Although ADIOS2 is coded in C++, it provides support for various programming languages such as C, Fortran, Python, and Matlab. ADIOS2 is primarily focused on high-performance file-based I/O, despite offering in-situ analysis and Wide-Area Network (WAN) and UCX transport capabilities. 
It uses its proprietary file format, BP5 [12], to assign specific MPI ranks as aggregators that write sub-files to a BP5 output directory while collecting streaming data from the sub-group ranks. The aggregator writes the received data to disk continuously, without encountering file locking issues that MPI-I/O-based approaches like Parallel NetCDF (PnetCDF) [8] and HDF5 [9] often face. The metadata algorithm tracks the location of data buffers within the sub-files to reassemble the data for reading. ADIOS2 provides tunable aggregators and placement at runtime, and the current version defaults to a single aggregator per node for optimal shared memory communication while minimizing the number of processes accessing the underlying filesystem. The combination of sub-files, streaming data, and the absence of global data sharing results in significant enhancements in write bandwidth. Moreover, the BP5 file format offers node-local burst buffer support, where each process writes sub-files to its high-speed local file system, and the burst buffer data is drained back to the Parallel File System via a separate thread. ADIOS2 is continuously evolving and has already been incorporated into numerous essential HPC applications, with more support in the pipeline. * OpenFOAM [13] * LAMMPS [14] * XGC [15] * E3SM: [16] * Trilinos [17] * PETSc [18] At runtime, ADIOS2 provides multiple engines, including SST, which allows direct connection between data producers and consumers, bypassing the filesystem and using the new UCX communication. SST supports a variable number of readers and writers, buffering data in the producer's memory until the consumer is ready to receive it. This feature allows for seamless in-situ postprocessing. ADIOS2 supports in-line data manipulation, including lossless compression through various compressors and codecs such as the Blosc meta-compressor [19]. The library is under continuous development and has already been integrated into several essential HPC applications, with more support expected to come. ### _Ucx_ UCX, which stands for Unified Communication X, is a high-performance communication library for distributed computing systems that enables efficient data transfer between computing nodes in a parallel application. It is designed to provide a unified API for a variety of interconnect technologies, such as InfiniBand, RoCE, Ethernet, and others, and supports both point-to-point and collective communication. UCX is designed to provide low-latency, high-bandwidth communication, and supports a range of communication operations, including send and receive, scatter and gather, reduce, allreduce, broadcast, and others. It also supports asynchronous communication and provides a range of features to optimize communication performance, such as zero-copy data transfers, message batching, and hardware offload. UCX achieves high performance by using a number of techniques, including multi-threading, cache-aware algorithms, and optimized memory management. It also takes advantage of hardware features such as remote direct memory access (RDMA), which allows data to be transferred directly between the memories of two nodes without involving the CPU. Overall, UCX provides a high-performance, portable, and scalable communication framework that can be used to develop parallel applications for a variety of distributed computing systems. ## IV Design and Implementation The ADIOS2 I/O backend implementation in WRF is similar to other external I/O options, with some tiny differences. 
It substitutes the _ncmpi_put_var_type_ calls of PnetCDF with _adios2_put_ calls of ADIOS2, which are easier to use. However, ADIOS2 and NetCDF differ in how they handle the time dimension. NetCDF treats time as a separate dimension, while ADIOS2 is step-based and places data at each new time step. Thus, the main WRF I/O logic loop was modified to provide ADIOS2 with the start and end step information. To avoid editing the XML file for each output variable, the compressor option was added to the namelist.input file, allowing compression to be applied to all variables. To increase user adoption, a converter program was developed to convert the ADIOS2 output file back into a NetCDF file. This incurs a time penalty of about 10 seconds per time step, but allows the use of existing NetCDF pipelines. However, this tool does not allow for the use of ADIOS2's in-situ analysis pipeline capabilities. NCAR has merged the ADIOS2 I/O module and converter script for use by WRF users worldwide as of February 2023. ## V Results In this section, we present an evaluation of the ADIOS2 backend in the WRF model, which includes a comparison with other I/O options available in the model. The study focuses on investigating the SST engine with UCX data transport, the ADIOS2 burst buffer write capabilities, and the impact of in-line compression on write times and output data sizes. Additionally, we showcase a basic ADIOS2-based in-situ post-processing pipeline and compare its end-to-end run times with traditional post-processing methods. To conduct the tests, we utilized a compute cluster consisting of 8 nodes, each equipped with two 18-core Intel Xeon Gold 6240 CPUs, 384 GB DDR4 memory, and a Mellanox ConnectX-6 interconnect. The storage setup involved a dedicated storage node with a BeeGFS file system striped over eight 10K RPM spinning hard disk drives connected to the compute nodes via Mellanox ConnectX-5 NICs. Additionally, each compute node had an Intel DC P4510 1TB NVMe SSD drive. For testing, we used WRF v4.4 in distributed memory mode (i.e., _dmpar_), along with NetCDF v4.8.1, PnetCDF v1.12.1, and ADIOS2 (GitHub, master branch) compiled using GCC v10.2. The CONUS 2.5km case, a well-known benchmark for WRF, was selected for I/O testing, and we increased the WRF history file output frequency to one file every 30 simulation minutes to reflect a relevant data analysis time scale [20]. We performed the tests on up to 8 compute nodes (288 MPI ranks) five times for each configuration and computed the average I/O times. It is worth noting that the official WRF benchmarks were incompatible with WRF v4.4, and hence we replaced them with the _New_ CONUS 2.5km benchmark developed by Kyle [1]. Our evaluation presents a comprehensive analysis of the ADIOS2 backend within WRF, highlighting its strengths and limitations compared to other I/O options available in the model. ### _ADIOS2 File Write_ To evaluate the performance of three different I/O configurations using the BeeGFS file system as the storage target, the new CONUS 2.5km model was employed. The first configuration was the baseline parallel I/O option, which used PnetCDF. The second configuration utilized the Split NetCDF method, and the third configuration incorporated the new ADIOS2 method. Fig. 1 demonstrates the average write times for the WRF history file for each I/O option at different node counts, highlighting the scalability of each method. 
The aim of the evaluation was to compare the performance of the three I/O options and determine the most efficient one for the new CONUS 2.5km model. The results of the evaluation could help optimize the I/O performance of the model, which is crucial for timely data analysis and decision making. The write times of PnetCDF increase as more nodes are used due to the additional inter-node communication required by the two-phase MPI-I/O based method used in its implementation. On the other hand, while the Split NetCDF method shows impressive results at low node counts, its write time increases by a factor of 3 between 4 and 8 nodes, demonstrating scaling issues inherent in the file-per-process approach. In contrast, the ADIOS2 method yields the most consistent results across the range of process counts tested, outperforming the PnetCDF results by an order of magnitude and halving the write time of the Split NetCDF method when 8 compute nodes are used. ### _ADIOS2 Burst Buffer_ To evaluate the performance of ADIOS2's burst buffer feature, adjustments were made to the _adios2.xml_ file, targeting the node-local NVMe SSDs on each compute node. This allowed the ADIOS2 aggregators to write data locally, while a background thread continued to drain the burst buffer contents back to the parallel file system (PFS). However, the drain feature was disabled for this set of tests. Fig. 2 illustrates the scaling performance of ADIOS2's burst buffer compared to normal PFS write. At low node counts, the burst buffer results exhibit similar times to the PFS write configuration, but as more nodes are added, there is a dramatic decrease in average write time. This is due to the additional potential write bandwidth of the node-local NVMe SSDs on each additional node. To further demonstrate the performance benefits of the ADIOS2 burst buffer feature, the speedup compared to a single node was plotted in Fig. 3. The results showed ideal write time scaling up to 4 nodes, with only a small deviation from ideal at 8 nodes. This is in stark contrast to the inverse speedup trend observed in the MPI-I/O based results of PnetCDF as more nodes are added. Overall, the ADIOS2 burst buffer functionality greatly accelerates I/O write performance. In this case, it showed a significant _two_ order of magnitude speedup compared to the benchmark WRF parallel I/O option, PnetCDF, on the system used in this work. ### _ADIOS2 Aggregator Count_ The ADIOS2 file based I/O is mainly controlled by the number of sub-files written to the file system, which is called the aggregator ratio. By default, ADIOS2 writes a single sub-file per node, with one MPI rank serving as the aggregator for the remaining ranks on the node. However, depending on the capabilities of the underlying file system, a different number of sub-files/aggregators may be more efficient. ADIOS2 offers the flexibility to adjust the number of aggregators at runtime using the _adios2.xml_ file or programmatically within the application. To find the optimal aggregator ratio for the CONUS 2.5km model, a set of tests were conducted where the aggregator ratio was varied. The results, shown in Fig. 4, indicate that for this system, the ideal aggregator ratio is 36, which corresponds to one aggregator per node (each node has 36 cores). When the aggregator ratio was set to 1, where each rank writes its own sub-file, the results were not favorable, similar to the file-per-process approach observed in the previous Split NetCDF tests. 
This finding is consistent with the scaling results seen in the XGC application in [12], where the sub-file-per-process approach was not efficient. For all subsequent tests, the number of aggregators/sub-files was set to one per node. ### _ADIOS2 in-line Compression_ ADIOS2 offers the Operator abstraction, which allows for in-line data manipulation. One of the primary uses of this abstraction is the ability to compress data in-line. This can be accomplished using various available lossy and lossless compression backends and codecs. In this study, the Blosc [19] "meta-compressor" was chosen as it is lossless and supports multiple state-of-the-art compression codecs. We tested the following Blosc compression codecs: * BloscLZ [19] * LZ4 [21] * Zlib [22] * Zstandard [23] The findings in Fig. 5 are particularly significant because they demonstrate that using compression with ADIOS2 does not result in any significant performance degradation, even when dealing with large-scale parallel I/O. This is important because, in many scientific domains, data sets can be massive, and writing them to disk can be a significant bottleneck in the computational workflow. Fig. 1: The graph illustrates a comparison between the average write times of the WRF history file using ADIOS2 and legacy parallel I/O options for different node and rank counts for the CONUS 2.5km model. With 8 nodes and 288 ranks, ADIOS2 demonstrates a more than tenfold improvement over PnetCDF. Fig. 2: The figure illustrates the average write times for the WRF history file using ADIOS2 with and without the burst buffer feature on the CONUS 2.5km model. The comparison shows the performance of writing to the node-local burst buffer versus writing to the PFS. Fig. 3: The plot shows the average speedup of history write time when using the ADIOS2 burst buffer feature compared to a single node, as a function of the number of compute nodes. The results demonstrate that the scaling closely approaches ideal values, with an almost perfect speedup up to 4 nodes and a slight deviation from ideal at 8 nodes. By using compression, it is possible to reduce the amount of data that needs to be written, which in turn can lead to significant performance improvements. Furthermore, it is worth noting that the choice of compression codec can have a significant impact on performance. In this study, the Zstandard codec was found to be the most effective, which suggests that it may be a good choice for other scientific applications that require high-performance parallel I/O. However, it is also possible that other codecs may be more effective for specific types of data, and further research is needed to fully explore this issue. Overall, the results presented in Fig. 5 suggest that using ADIOS2 compression can be an effective way to improve the performance of parallel I/O, particularly when dealing with large data sets. By choosing the right compression codec, it may be possible to achieve even greater performance improvements, which could have significant implications for a wide range of scientific applications. The results from Fig. 6 demonstrate that both ADIOS2 (Blosc) compression and the NetCDF4-based compression methods achieve compression ratios of about 4, leading to significant reductions in I/O server contention and required storage volume. Furthermore, the Zstandard codec outperforms other Blosc codecs, with the smallest file size and maximal throughput, except for Zlib. 
These findings indicate that Zstandard is an excellent choice for WRF as a premier compressor codec option. Therefore, it has been selected as the default codec once compression is enabled in the WRF-ADIOS2 I/O backend. ### _Ideal ADIOS2 File Write Configuration for WRF_ The test run conducted on 8 nodes using the optimized ADIOS2 configuration demonstrated significant improvements in I/O time. The use of node-local NVMe SSDs as the ADIOS2 storage target, along with the Blosc compressor and Zstandard codec, led to a perceived I/O time of around half a second. This is a significant improvement compared to the I/O bottleneck observed when using the PnetCDF I/O method. The results of the optimization are summarized in Table I. As can be seen, the I/O bottleneck observed at the beginning while using the PnetCDF I/O method is virtually eliminated when using the ideal ADIOS2 configuration, as the perceived I/O time within the application falls to approximately half a second. ### _ADIOS2 In-situ Post-Processing_ ADIOS2 is not just an I/O library, but also a data management library, and its in-situ analysis capabilities were tested in a simple weather forecasting pipeline. The pipeline was set up for a 2-hour forecasting run, with a history file output every 30 simulation minutes. The aim of the test was to show the significant decrease in total time-to-solution when using an in-situ pipeline with ADIOS2, as compared to the standard process-after-run method with the benchmark PnetCDF. For this test, the ADIOS2 SST engine was selected using the _adios2.xml_ file, which buffers and transfers requested data to a consumer over the network, rather than writing it out to a file on the file system. To perform post-processing, a Python-based analysis script was written, which plotted a slice of the temperature field over the continental United States and generated an image similar to the one in Fig. 7. Two versions of the script were created, one that used netcdf4-python to read data, and one that used the ADIOS2 high-level Python API. It's worth noting that the ADIOS2-based script didn't need to be modified to support in-situ processing, as the support is inherent when using the stepping mode with the Pythonic _for fstep in adios2_fh_ directive. Fig. 8 shows the runtime progression of the ADIOS2-based end-to-end pipeline, compared to the PnetCDF sequential pipeline. The I/O, compute, and initialization times were extracted and analyzed from the WRF _rsl.out_ output files. The ADIOS2-based pipeline results showed an almost constant block of compute, as the perceived write time by the application was below one second for each of the outputs. The SST engine internally buffered the data and sent it to be processed in parallel while the computation continued. On the other hand, with the PnetCDF pipeline, the computation was stopped for long periods during the writing process. After the computation was completed, the PnetCDF post-processing script was run, which further increased the time-to-solution. In total, the in-situ ADIOS2 approach using the SST engine was able to almost halve the time-to-solution compared to the legacy PnetCDF-based approach, providing significant value for time-sensitive applications. \begin{table} \begin{tabular}{|l|c|c|} \hline Configuration & Write Time [s] & Speedup \\ \hline PnetCDF & 93 & 1X \\ ADIOS2 & 8.2 & 11X \\ ADIOS2+BB & 1.1 & 84X \\ ADIOS2+BB+Zstandard & 0.52 & 179X \\ \hline \end{tabular} \end{table} TABLE I: Progression of Optimizations. Fig. 4: The impact of varying the ADIOS2 aggregator ratio parameter on the average history write time of the CONUS 2.5km model using 8 compute nodes is shown. The optimal performance is achieved with a single aggregator per node, corresponding to the default ADIOS2 behavior (aggregator ratio of 36 for this cluster).
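A minimal sketch of the kind of stepping-mode consumer script described in the in-situ post-processing experiment above is shown below, using the ADIOS2 high-level Python API; the output name "wrfout_d01.bp" and the variable "T2" (2-m temperature) are illustrative assumptions, and whether the data arrives from the BP5 file engine or the SST stream is decided by the external ADIOS2 configuration rather than by this loop.

```python
# Minimal sketch of a stepping-mode ADIOS2 reader (high-level Python API).
# Stream/file name and variable name are illustrative assumptions; engine
# selection (BP5 file vs. SST stream) is handled by the ADIOS2 configuration.
import adios2
import numpy as np

with adios2.open("wrfout_d01.bp", "r") as fh:
    for fstep in fh:                       # blocks until the next step is available
        step = fstep.current_step()
        t2 = np.asarray(fstep.read("T2"))  # read the full field for this step
        print(f"step {step}: mean T2 = {t2.mean():.2f} K")
        # ...plotting of the temperature slice would go here...
```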
### _UCX network-based streaming over BP file-based_ Network-based streaming and file-based streaming are two different methods of delivering content over the internet. Network-based streaming delivers content in real-time as shown in Fig. 9, where the content is continuously delivered as a stream of data, rather than as a complete file that needs to be downloaded before rerun can begin. File-based streaming, on the other hand, requires the entire file to be downloaded before rerun can begin. Here are some benefits of network-based streaming over file-based streaming: * Faster rerun: Network-based streaming allows for faster run as the content is delivered in real-time. Users can begin streaming the content almost instantly, without having to wait for the entire file to download. * Less storage space: Network-based streaming doesn't require users to download and store large files on their devices, which can be especially beneficial for users with limited storage space. * Smooth run: Network-based streaming can adjust the quality of the stream based on the user's internet connection, ensuring that the playback is smooth and uninterrupted. * Live streaming: Network-based streaming can deliver live content, such as live weather forecast events, in real-time. This allows users to watch events as they happen, rather than having to wait for the file to be made available for download. * Security: Network-based streaming can provide better security, as content is not stored locally on the user's device, making it less vulnerable to theft or loss. Additionally, network-based streaming can implement secure communication protocols such as HTTPS or encryption to ensure that the content is delivered securely. Overall, network-based streaming provides a faster, more seamless, and secure experience for users, making it a preferred method of content delivery over file-based streaming. Fig. 5: This figure compares the average history file write times of ADIOS2 compressed data using various Blosc compression codecs to the write times of uncompressed ADIOS2 data. The results indicate that using ADIOS2 compression leads to a nearly 50% reduction in average write time compared to the uncompressed configuration. Fig. 6: Comparison of the compressed vs uncompressed output data size using different ADIOS2 compression codecs. ### _ADIOS2 In-Situ Enabling UCX data transport_ The development of sustainable and efficient transport engines is crucial for the advancement of high-performance computing and data management. One such engine, the Sustainable Staging Transport (SST), has recently undergone a significant upgrade to enhance its capabilities. In the first step of this upgrade, the SST engine was fortified by incorporating the UCX communication framework as a Remote Direct Memory Access (RDMA)-capable data-plane. This incorporation has expanded the capabilities of the ADIOS2 (Adaptable Input/Output System) and has improved hardware compatibility. The UCX transport, which is a low-level communication library, has provided a more efficient and flexible interface to manage data communication between different nodes. 
The incorporation of the UCX communication framework as an RDMA-capable dataplane in the Sustainable Staging Transport engine has led to significant performance improvements (see Fig. 9). Specifically, the UCX transport has outperformed the traditional transport middleware layer of EVPATH, resulting in faster data transfer rates and a more seamless user experience. Fig. 7: Run time comparison of a WRF run with postprocessing. The ADIOS2 configuration processes the output data in-situ, using data streamed from WRF, while the PnetCDF configuration uses the traditional process-after-job-completion approach. Fig. 8: Run time comparison of a WRF run with postprocessing. The ADIOS2 configuration processes the output data in-situ, using data streamed from WRF, while the PnetCDF configuration uses the traditional process-after-job-completion approach. Fig. 9: A WRF run with postprocessing is compared in terms of run time. The graph shows that the ADIOS streaming engine outperforms the file-based ADIOS2 engines BP4 and BP5 by a factor of 2X-6X on both the Reader and Writer sides. This improvement in performance can be seen in Fig. 10, which demonstrates the superiority of the UCX transport over EVPATH in terms of both throughput and latency. The graph clearly shows that the UCX transport is capable of handling a much higher volume of data and is also able to transmit data with much lower latency than EVPATH. Overall, the integration of the UCX transport into the SST engine has greatly enhanced its capabilities and has made it more efficient and adaptable for data management and high-performance computing. With this upgrade, users can now experience faster data transfer rates and a more seamless experience, ensuring that the SST engine remains a valuable tool for researchers and scientists. As a result of this upgrade, the new UCX SST dataplane has been successfully integrated into ADIOS2 2.9, enabling users to take advantage of the advanced features and improved performance. The incorporation of the UCX transport has made the ADIOS2 engine more adaptable, efficient, and future-proof, ensuring that it remains a valuable asset for data management and high-performance computing. In ADIOS2, the QueueLimit parameter is used with the SST transport method to control the maximum number of items that can be queued for asynchronous data transfers between application and engine. When using the SST transport method, the QueueLimit parameter is set using the \(adios2::transport::SSTParams\) class. The parameter QueueLimit sets the maximum number of items that can be queued for asynchronous data transfers. This value can be changed according to the available resources and the size of the data being transferred. According to Fig. 11, setting the queue limit parameter to 0 _"asynchronous"_ improved UCX optimization, while setting it to 1 _"synchronous"_ improved EVPATH data transfer optimization. As a result, the queue limit parameter significantly influenced the optimization of both UCX and EVPATH data transfer. ## VI Conclusions The ADIOS2 data management library has been integrated into WRF, and this work has demonstrated the implementation and notable performance improvements. In addition to accelerating typical PFS writes, the new library also offers a wide range of capabilities, like node-local burst buffer write support, high-performance lossless and lossy compression, potential two-way data coupling, and in-situ analysis. 
When using the new I/O backend at scale, test findings in this work demonstrate a one to two order of magnitude improvement in perceived write time within WRF. Also, it was demonstrated that, when compared to the old PnetCDF technique at scale, a sample weather forecasting pipeline using the ADIOS2 SST engine cut the overall time to solution in half. Using the ADIOS2 SST engine with UCX data transport, the time to solution was reduced by roughly half again. The benchmark CONUS 2.5km example was used to test the new features and performance improvements of the ADIOS2 library within WRF. Future work will make use of these new data streaming capabilities to address large and research-relevant WRF simulation scenarios that are currently stymied by slow, sequential analytical pipelines and poor I/O performance. It is also necessary to investigate the impact of using lossy compression methods for numerical weather prediction. It is critical to carefully balance the increase in effective I/O throughput that can be attributed to the lossy compression codecs included in ADIOS2 against the loss in numerical precision. The results of this work demonstrate the enormous benefits gained by combining cutting-edge open source libraries such as ADIOS2 with legacy HPC applications such as WRF, demonstrating how bottlenecks that emerge over time (such as the I/O bottleneck in WRF) can be squashed by using the right set of tools. Fig. 11: Run time comparison of a WRF run with postprocessing with 8 nodes. The ADIOS2 SST engine asynchronous data transfers queuelimit parameter (QueueLimit=0 _asynchronous_, QueueLimit=1 _synchronous_) improved both middleware layer UCX and EVPATH by an order of magnitude. Fig. 10: A WRF run with postprocessing is compared in terms of run time with 4 nodes. The ADIOS2 SST engine with UCX data transport (black) outperformed the default EVPATH data transport (red) by an order of magnitude, with QueueLimit=1. ## VII Acknowledgments The authors thank Dr. Swati Singhal from the Department of Computer Science, University of Maryland for her assistance in this study.
2305.11056
PETAL: Physics Emulation Through Averaged Linearizations for Solving Inverse Problems
Inverse problems describe the task of recovering an underlying signal of interest given observables. Typically, the observables are related via some non-linear forward model applied to the underlying unknown signal. Inverting the non-linear forward model can be computationally expensive, as it often involves computing and inverting a linearization at a series of estimates. Rather than inverting the physics-based model, we instead train a surrogate forward model (emulator) and leverage modern auto-grad libraries to solve for the input within a classical optimization framework. Current methods to train emulators are done in a black box supervised machine learning fashion and fail to take advantage of any existing knowledge of the forward model. In this article, we propose a simple learned weighted average model that embeds linearizations of the forward model around various reference points into the model itself, explicitly incorporating known physics. Grounding the learned model with physics based linearizations improves the forward modeling accuracy and provides richer physics based gradient information during the inversion process leading to more accurate signal recovery. We demonstrate the efficacy on an ocean acoustic tomography (OAT) example that aims to recover ocean sound speed profile (SSP) variations from acoustic observations (e.g. eigenray arrival times) within simulation of ocean dynamics in the Gulf of Mexico.
Jihui Jin, Etienne Ollivier, Richard Touret, Matthew McKinley, Karim G. Sabra, Justin K. Romberg
2023-05-18T15:50:54Z
http://arxiv.org/abs/2305.11056v1
# PETAL: Physics Emulation Through Averaged Linearizations for Solving Inverse Problems ###### Abstract Inverse problems describe the task of recovering an underlying signal of interest given observables. Typically, the observables are related via some non-linear forward model applied to the underlying unknown signal. Inverting the non-linear forward model can be computationally expensive, as it often involves computing and inverting a linearization at a series of estimates. Rather than inverting the physics-based model, we instead train a surrogate forward model (emulator) and leverage modern auto-grad libraries to solve for the input within a classical optimization framework. Current methods to train emulators are done in a black box supervised machine learning fashion and fail to take advantage of any existing knowledge of the forward model. In this article, we propose a simple learned weighted average model that embeds linearizations of the forward model around various reference points into the model itself, explicitly incorporating known physics. Grounding the learned model with physics based linearizations improves the forward modeling accuracy and provides richer physics based gradient information during the inversion process leading to more accurate signal recovery. We demonstrate the efficacy on an ocean acoustic tomography (OAT) example that aims to recover ocean sound speed profile (SSP) variations from acoustic observations (e.g. eigenray arrival times) within simulation of ocean dynamics in the Gulf of Mexico. ## 1 Introduction Inverse problems arise in many scientific applications where the goal is to reconstruct some unknown signal, image or volume of interest from indirect observations. The forward process, or the mapping from the data to observations, is typically well known, usually through modeling of the physical process. However, inverting the model is often ill-posed or even non-invertible. More formally, let us consider the task of recovering some signal \(\mathbf{x}\) from observations \(\mathbf{y}\) that are related by some potentially non-linear forward model \(F\) via \[\mathbf{y}=F(\mathbf{x})+\mathbf{\eta}, \tag{1}\] where \(\mathbf{\eta}\) encapsulates noise or other perturbations. Our forward model \(F\) represents a computational model of the underlying physics of the measurement process. Classical solutions involve modeling the forward process with extremely high accuracy and then attempting to invert a stabilized or linearized variant, which often requires heavy domain knowledge. Another approach to handle the ill-posed nature of the inversion task is to cast the problem as an optimization task and incorporate regularization. A regularizer is a measure of how well the proposed solution fits some known, and often hand-crafted, prior. This term makes the inversion well-posed by biasing towards certain solutions. The inversion is typically solved for in an iterative fashion. However, determining a descent direction on the physics-based forward model is often computationally expensive, due to the need to calculate the Jacobian (or more commonly, a Jacobian-vector product). In this paper, we propose a novel architecture trained to emulate the physics-based forward model. This work departs from previous works that also aim to emulate the forward model [4; 10; 30] by explicitly incorporating physics in the form of linearizations of the forward model at a set of reference points in the construction of the emulator rather than treating it as a black box. 
We then use this trained model in a classical optimization framework to recover the signal given a set of observations. By leveraging existing auto-grad libraries [27], the gradient with respect to the input can be efficiently calculated, making iterative solvers feasible for recovering a solution. Concretely, our paper makes the following contributions: * We propose a novel architecture that learns to emulate the forward model. The model directly embeds physics via linearizations around a subset of reference points * We introduce a learned encoder/decoder scheme to the neural adjoint method to mitigate artifacts from directly optimizing in the input space. * We demonstrate its efficacy for recovering solutions in a classical optimization framework on an ocean acoustic tomography (OAT) example that aims to recover ocean sound speed profile (SSP) variations from acoustic observations (e.g. eigenray arrival times) within simulation of ocean dynamics in the Gulf of Mexico. ## 2 Related Works ### Iterative Inversion The task of directly inverting some potentially non-linear forward model \(F\) is often non-trivial or mathematically impossible. Instead, a more stable alternative is to iteratively solve for \(\mathbf{x}\) given some observations \(\mathbf{y}\). Classically, this is done by formulating the reconstruction as solving a non-linear least squares problem that aims to minimize \[\hat{x}=\underset{x}{\text{arg min}}\;\frac{1}{2}\left\lVert\mathbf{y}-F(\mathbf{x}) \right\rVert^{2}. \tag{2}\] The Gauss-Newton method solves Equation (2) by iteratively computing the Jacobian \(\mathbf{J}_{F}\) of \(F\) at the current best estimate \(\mathbf{x}^{(k)}\) and solving a set of linear equations to generate an update of the form \[\mathbf{x}^{(k+1)}=\mathbf{x}^{(k)}+(\mathbf{J}_{F}^{\top}\mathbf{J}_{F})^{-1}\mathbf{J}_{F}^{\top }(\mathbf{y}-F(\mathbf{x}^{(k)})). \tag{3}\] The Levenberg-Marquardt algorithm [15; 20] presents a more robust alternative to Gauss-Newton by introducing a (non-negative) damping factor \(\lambda\) leading to an update of the form \[\mathbf{x}^{(k+1)}=\mathbf{x}^{(k)}+(\mathbf{J}_{F}^{\top}\mathbf{J}_{F}+\lambda\mathbf{I})^{-1} \mathbf{J}_{F}^{\top}(\mathbf{y}-F(\mathbf{x}^{(k)})). \tag{4}\] Another approach to address the ill-posed nature of \(F\) is to explicitly introduce a regularization function \(R(\mathbf{x})\) as an additional penalty term to Equation (2). The regularizer \(R\) addresses the ill-posed nature by stabilizing the inversion process and biasing the solution towards those with expected or desired structure. Some common examples include the \(\ell_{2}\) norm to encourage smaller norm solutions (similar to Equation 4), \(\ell_{1}\) to promote sparsity, and TV to induce homogeneous regions separated by sharp boundaries. For differentiable \(R\), the augmented optimization problem can be solved via steepest descent, computing updates of the form \[\mathbf{x}^{(k+1)}=\mathbf{x}^{(k)}+\gamma\mathbf{J}_{F}^{\top}(\mathbf{y}-F(\mathbf{x}^{(k)}))- \lambda\nabla R(\mathbf{x}^{(k)}), \tag{5}\] where \(\gamma\) is a tuned step size, \(\lambda\) is a non-negative factor controlling the strength of regularization, and \(\mathbf{J}_{F}\) is the Jacobian of \(F\) evaluated at \(\mathbf{x}^{(k)}\). However all these approaches can be undesirable in practice due to the need to re-compute the Jacobian (or a Jacobian-vector product) at each iteration. 
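For concreteness, a minimal NumPy sketch of the damped update in Equation (4); the forward model `F` and its Jacobian `jac_F` are assumed to be available as callables, and the fixed damping and iteration budget are illustrative:

```
import numpy as np

def levenberg_marquardt(F, jac_F, y, x0, lam=1e-2, n_iters=50):
    """Sketch of Equation (4): x <- x + (J^T J + lam I)^{-1} J^T (y - F(x))."""
    x = x0.copy()
    for _ in range(n_iters):
        J = jac_F(x)                              # n x m Jacobian at the current estimate
        r = y - F(x)                              # data residual
        A = J.T @ J + lam * np.eye(x.size)        # damped normal equations
        x = x + np.linalg.solve(A, J.T @ r)
    return x
```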
Computing a Jacobian-vector product to determine a descent direction often involves solving an auxiliary set of PDEs, which can be computationally expensive if many iterations are required to achieve an acceptable level of accuracy. ### Learned Inversion An increasingly popular approach for solving inverse problems takes advantage of machine learning. Deep learning has achieved tremendous success in areas of natural language processing, computer vision and other tasks, in part due to the availability of large labelled datasets. Recent works attempt to tackle inverse problems using these data-driven methods [7; 11; 22; 26; 34; 35]. Unlike typical supervised learning tasks that attempt to learn a mapping purely from examples, deep learning for inverse problems can leverage our understanding of the physics in the forward model. One common approach to embed the physics is the so-called "Loop Unrolled" architecture heavily inspired by existing iterative algorithms [1; 2; 8; 9; 32; 36; 37]. The learned model alternates between taking a gradient step computed directly from existing forward models and applying a learned regularizer step. However, such approaches have to apply the forward model multiple times for a single forward pass through the architecture, making the method infeasible for more complex non-linear simulators. Our approach bypasses this obstacle by using more computationally tractable approximations of the forward model. An alternative approach to incorporating physics is termed "Physics Informed Neural Networks (PINN)" [14; 29]. These methods incorporate the governing partial differential equations directly in the loss, guiding a parameterized model towards physics obeying solutions. However, an important distinction between PINNs and more generalized machine learning for inverse problems is that each model is trained for a _single instance_ of a PDE solve given some boundary/initial conditions and must be re-trained for any new conditions or observables (in the case of inverse problems). Every training iteration involves a PDE misfit calculation, which can be expensive and ill-posed, making scaling to larger dimensions difficult. Unlike PINNs, the linearizations we use only need to be computed once before training rather than at each iteration of the training process. ### Neural Adjoint The neural adjoint method [30] was proposed to tackle inverse problems with more computationally intensive forward models. A parameterized model \(G_{\theta}\) is trained to emulate the forward model [4; 10]. This is done in a supervised learning fashion, often with a simple mean-squared error loss. Not only does this provide a cheaper/quicker alternative to the physics based forward model, if trained with existing auto-grad libraries such as Pytorch [27], this also allows for efficient computation of the gradient with respect to the input, bypassing the need to explicitly solve for the adjoint when calculating the gradient. Once trained, the parameters are fixed and the model \(G_{\theta}\) is substituted for \(F\) in Equation (2) or other similar optimization frameworks. The auto-grad libraries are then used to efficiently compute a gradient with respect to the input \(\mathbf{x}\), making it possible to iteratively solve for the best estimate \(\hat{\mathbf{x}}\) that fits some observations \(\mathbf{y}\). 
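A minimal PyTorch sketch of this inference stage, in which the trained emulator is frozen and only the input is optimized by plain gradient descent (the step size and iteration budget are illustrative):

```
import torch

def neural_adjoint_invert(G_theta, y, x_init, lr=1e-2, n_iters=1000):
    """Sketch: minimize 0.5 * ||G_theta(x) - y||^2 over the input x via autograd."""
    for p in G_theta.parameters():                # keep the trained emulator fixed
        p.requires_grad_(False)
    x = x_init.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = 0.5 * torch.sum((G_theta(x) - y) ** 2)
        loss.backward()                           # gradient with respect to the input, not the weights
        opt.step()
    return x.detach()
```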
Existing works primarily focused on lower dimensional examples (on the order of 5-15) where the test set was drawn from the same distribution as the training set [5; 19; 25; 28; 30], thus a simple "boundary-loss" regularizer was often sufficient to produce accurate results. Direct learned inversion methods are much faster than this iterative based method, but yield only a single estimate and are susceptible to overfitting to the training set. The neural adjoint method allows for exploration of the solution space with different initializations and the incorporation of various regularizers such as \(\ell_{1}\), \(\ell_{2}\), or even gradient based [18], to guide the optimization towards specific solutions. In addition, one can also restrict the optimization to some pre-determined basis that better represents the data while reducing the dimensionality [13]. Our proposed architecture extends the neural adjoint method in two notable ways. It incorporates knowledge of the physics-based forward model into the learned emulator and while also jointly learning a subspace to improve the accuracy of the optimization stage. ## 3 Method The neural adjoint (NA) method is typically decomposed into two stages: training an emulator of the forward model and then using it for inference. We motivate and describe the proposed architecture for the forward model in Section 3.1. Next, we formulate the optimization problem that uses the trained model for inference in Section 3.2. Finally, we propose an augmentation to the optimization formulation to incorporate a learned subspace in Section 3.3. ### Embedding Physics in the Learned Forward Model The neural adjoint (NA) method aims to learn an accurate emulator of the forward model to replace existing physics simulators. More formally, assume that we are given a forward model \(F:\mathbf{x}\rightarrow\mathbf{y}\) that maps our input \(\mathbf{x}\in\mathbb{R}^{m}\), where \(m\) denotes the size of the discretization, to some observations \(\mathbf{y}\in\mathbb{R}^{n}\), where \(n\) denotes the total number. In our computational ocean tomography examples in Section 4, \(F(\mathbf{x})\) is computed using a ray tracer; in other applications, a PDE might have to be solved. We then train a neural net \(G_{\theta}\), whose architecture is illustrated in Figure 1 and described in detail below, to approximate the mapping \(F\). We first motivate the design of the architecture by discussing the more classical approach of linearizing the forward model \(F\). Given a reference point \(\mathbf{x}_{\text{ref}}^{i}\), we perform a first order Taylor series expansion \[\begin{split}\hat{\mathbf{y}}&\approx F(\mathbf{x}_{\text{ ref}}^{i})+J_{F}(\mathbf{x}_{\text{ref}}^{i})^{\top}(\mathbf{x}-\mathbf{x}_{\text{ref}}^{i})\\ &=\mathbf{y}_{\text{ref}}^{i}+\mathbf{A}_{\text{ref}}^{i}(\mathbf{x}-\mathbf{x}_ {\text{ref}}^{i}).\end{split} \tag{6}\] Figure 1: Architecture of PETAL for (a) training the forward model and (b) inversion. Learned modules are denoted in green. This linearization approximates the forward model with varying levels of accuracy depending on the proximity of the input \(\mathbf{x}\) to the reference point \(\mathbf{x}_{\text{ref}}^{i}\). Rather than learning the mapping from \(\mathbf{x}\) to \(\mathbf{y}\) in a pure data-driven fashion, we propose to leverage a set of these linearizations that already perform the mapping. We mitigate the modeling inaccuracies by using an ensemble of reference models rather than a single linearization. 
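A minimal sketch of precomputing one such reference pair \((\mathbf{y}_{\text{ref}}^{i},\mathbf{A}_{\text{ref}}^{i})\) from Equation (6), here with a finite-difference Jacobian standing in for whatever derivative the physics code provides (the step size and names are illustrative):

```
import numpy as np

def linearize(F, x_ref, eps=1e-4):
    """Return (y_ref, A_ref) such that F(x) is approximated by y_ref + A_ref @ (x - x_ref)."""
    y_ref = F(x_ref)
    A_ref = np.zeros((y_ref.size, x_ref.size))
    for j in range(x_ref.size):                   # column-by-column finite differences
        dx = np.zeros(x_ref.size)
        dx[j] = eps
        A_ref[:, j] = (F(x_ref + dx) - y_ref) / eps
    return y_ref, A_ref

# refs = [linearize(F, x_ref) for x_ref in reference_points]   # done once, before training
```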
Only a small subset of linearizations need to be performed once in order to construct the proposed emulator \(G_{\theta}\), so the additional computational cost is minimal relative to attempting to invert the actual physics based forward model \(F\) for arbitrary measurements \(\mathbf{y}\). The operation of the architecture acts as follows: Given some input \(\mathbf{x}\), we first pass \(\mathbf{x}\) through the embedded physics ensemble \(\mathbf{A}_{\text{ref}}^{1},...,\mathbf{A}_{\text{ref}}^{N}\) to produce \(N\) predicted observations \(\hat{\mathbf{y}}^{1},...,\hat{\mathbf{y}}^{N}\) through the application of Equation (6). The predicted observations are then combined through a learned weighted average module to produce the final predicted observation. A natural way to compute the weights \(w^{i}\) is by calculating the dot product similarity between the input point \(\mathbf{x}\) and the reference points \(\mathbf{x}_{\text{ref}}^{1},...,\mathbf{x}_{\text{ref}}^{N}\) used to generate the linearizations; higher similarity implies that the linearization is a better approximation and thus the prediction is more "trustworthy". We follow an approach similar to attention-based models [33] by learning an embedding space to perform the dot-product and applying a softmax to normalize the weights. Thus for each \(\hat{\mathbf{y}}^{i}\), we compute the corresponding weight \(w^{i}\) as \[w^{i}=\frac{\exp{\langle\mathbf{x},\mathbf{x}_{\text{ref}}^{i}\rangle}\mathbf{P}_{\mathbf{x}} }{\sum_{j}\exp{\langle\mathbf{x},\mathbf{x}_{\text{ref}}^{j}\rangle}\mathbf{P}_{\mathbf{x}}}, \tag{7}\] where \(\mathbf{P}_{\mathbf{x}}\) is the learned projection for the dot product space. We also simultaneously learn a transformation \(\mathbf{P}_{y}\) on the predicted \(\hat{\mathbf{y}}^{i}\). Thus the output of the proposed model is \[G_{\theta}(\mathbf{x})=\mathbf{W}\sum_{i}w^{i}\mathbf{P}_{y}(\mathbf{y}_{\text{ref}}^{i}+\mathbf{ A}_{\text{ref}}^{i}(\mathbf{x}-\mathbf{x}_{\text{ref}}^{i})), \tag{8}\] where we distinguish components related to the embedded physics module with superscripts and denote the final linear layer as \(\mathbf{W}\). The encoder-decoder layers \(\mathbf{E}_{\mathbf{x}}\) and \(\mathbf{D}_{\mathbf{x}}\) are treated as the identity mapping for this section. They are described in further detail in Section 3.3. Note that the learned weights \(w^{i}\) depend on the input \(\mathbf{x}\). To prevent saturation of the learned softmax distribution, we apply spectral norm to all \(\mathbf{x}\) projection layers (\(\mathbf{E}_{\mathbf{x}}\) and \(\mathbf{P}_{\mathbf{x}}\)). The full architecture is outlined in Figure 0(a). The model is trained using the mean squared error loss on a dataset of paired example \((\mathbf{x}_{k},\mathbf{y}_{k})\) \[\min_{\theta}\sum_{k}\left\|G_{\theta}(\mathbf{x}_{k})-\mathbf{y}_{k}\right\|^{2}. \tag{9}\] ### Performing Inference In order to perform inference on a set of observations \(\mathbf{y}\), we solve the following optimization problem that incorporates our trained network \[\hat{\mathbf{x}}=\underset{\mathbf{x}}{\text{arg min}}\frac{1}{2}\left\|G_{\theta}(\mathbf{x})- \mathbf{y}\right\|^{2}+R(\mathbf{x}). \tag{10}\] We solve this iteratively by fixing the weights of the network and computing the gradient with respect to its input \(\mathbf{x}\). Note that this same optimization problem can be set up with the original forward model \(F\), but computing a gradient is often non-trivial and computationally expensive. 
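For reference, a compact PyTorch sketch of the emulator \(G_{\theta}\) described by Equations (7) and (8); the encoder/decoder of Section 3.3 and the spectral-norm constraint are omitted, and dimensions such as the embedding size are illustrative assumptions:

```
import torch
import torch.nn as nn

class PETALForward(nn.Module):
    """Sketch of Eqs. (7)-(8): a learned weighted average of linearized predictions."""

    def __init__(self, x_refs, y_refs, A_refs, d_embed=128):
        super().__init__()
        self.register_buffer("x_refs", x_refs)        # (N, m) reference inputs
        self.register_buffer("y_refs", y_refs)        # (N, n) reference outputs
        self.register_buffer("A_refs", A_refs)        # (N, n, m) reference linearizations
        m, n = x_refs.shape[1], y_refs.shape[1]
        self.P_x = nn.Linear(m, d_embed, bias=False)  # embedding for the dot-product weights
        self.P_y = nn.Linear(n, n)                    # learned transform of each prediction
        self.W = nn.Linear(n, n)                      # final linear layer

    def forward(self, x):                             # x: (batch, m)
        # Eq. (6): per-reference linearized predictions.
        dx = x.unsqueeze(1) - self.x_refs.unsqueeze(0)                      # (batch, N, m)
        y_hat = self.y_refs.unsqueeze(0) + torch.einsum("iom,bim->bio", self.A_refs, dx)
        # Eq. (7): softmax weights from dot products in the learned embedding space.
        w = torch.softmax(self.P_x(x) @ self.P_x(self.x_refs).t(), dim=1)   # (batch, N)
        # Eq. (8): weighted average of transformed predictions.
        return self.W((w.unsqueeze(-1) * self.P_y(y_hat)).sum(dim=1))
```

Training this module against paired examples with the mean-squared error of Equation (9) is then standard supervised learning.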
By training a forward model approximation, we can leverage existing auto-grad libraries [27] to efficiently compute the gradient. Having an accurate descent direction is critical for solving Equation 10. However, these black box models are only trained to match outputs, and thus performing gradient descent can lead to many undesireable local minimas. Due to the construction of our emulator, a convex combination of the gradients from using the individual linear forward model approximations (slightly modulated by the learned weights) arises in the calculation, providing some physics-based descent directions which may help alleviate these issues (See Appendix for derivations). Equation (10) can be solved with a variety of optimization algorithms to converge on some locally optimal \(\mathbf{x}\). Since we are substituting the forward model with an approximation, we can account for any inaccuracies by introducing a tolerance level. Once the observation loss drops below a pre-determined level, the optimization terminates early. Note that we incorporated a regularizer \(R(\cdot)\) as an additional cost term. The regularizer encourages certain properties (e.g. smaller values) and helps guide the optimization towards particular solutions. In our experiments, we used \(\ell_{2}\) as well as a Sobolev norm (\(\ell_{2}\) norm performed on the discrete x and y gradient of our input \(\mathbf{x}\)). Finally, it should be noted that the iterative nature of this method requires that we initialize our guess with some estimate. When optimizing from scratch, a reasonable candidate would be the average from the training set. Alternatively, we can leverage an estimated \(\hat{\mathbf{x}}\) from other inverse methods by first initializing with that estimate and then refining it with the NA procedure. We note that there is an increase in computation time compared to directly learning the inverse due to the iterative nature of the algorithm, but the NA method offers certain trade offs outlined above that might be more beneficial in practice than the single estimate provided by learned direct inverse methods. ### Learning a Subspace for Reconstruction Directly inverting on the input space of neural networks often generate artifacts in the final result or gets trapped in local minima due to the highly non-convex model. One way to address this issue is by optimizing in some lower dimensional subspace, such as one determined by principle component analysis (PCA) [13]. The PCA basis acts as a de-noiser, removing any artifacts due to the optimization and helps reduce the dimensionality, simplifying the optimization process. Rather than pre-computing a basis, we instead propose to jointly learn a linear projection and reconstruction layer along with the forward model. The model as described in Subsection 3.1 can be augmented with a linear encoder \(\mathbf{E}_{\mathbf{x}}\) and decoder \(\mathbf{D}_{\mathbf{x}}\) layer. The encoder layer projects our input \(\mathbf{x}\) onto a learned subspace, producing the latent code \(\mathbf{z}\). This code is then passed through the decoder layer to reconstruct \(\hat{\mathbf{x}}\). An additional input reconstruction loss in the form of a mean squared error loss between \(\mathbf{x}\) and \(\hat{\mathbf{x}}\) is included during training time, forcing the model to not only learn to approximate the forward model but also to learn a linear subspace of the input. We leave learning more complex non-linear encoder-decoder schemes for future work. 
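As a concrete illustration, a minimal sketch of one training step with the added reconstruction loss; the weighting `alpha` and the separate `E_x`/`D_x` arguments are illustrative assumptions (in Figure 1 the encoder and decoder are part of \(G_{\theta}\) itself):

```
import torch.nn.functional as F_nn

def petal_training_step(model, E_x, D_x, x, y, optimizer, alpha=1.0):
    """One step of the joint objective: forward-model MSE plus input reconstruction MSE."""
    optimizer.zero_grad()
    z = E_x(x)                                    # project the input onto the learned subspace
    x_rec = D_x(z)                                # linear reconstruction of the input
    loss = F_nn.mse_loss(model(x_rec), y) + alpha * F_nn.mse_loss(x_rec, x)
    loss.backward()
    optimizer.step()
    return loss.item()
```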
During inference, we then optimize in this subspace. More concretely, we rearrange the proposed architecture so that the \(\mathbf{x}\) Decoder layer becomes the first input layer as shown in Figure 0(b). The optimization variable \(\mathbf{z}\) is passed through this layer to produce an estimated \(\hat{\mathbf{x}}\), that is then passed through the rest of the model as described in Section 3.1. Our optimization framework thus becomes \[\hat{\mathbf{z}}=\underset{\mathbf{z}}{\text{arg min}}\;\frac{1}{2}\left\|G_{\theta}( \mathbf{D}_{\mathbf{x}}\mathbf{z})-\mathbf{y}\right\|^{2}+R(\mathbf{D}_{\mathbf{x}}\mathbf{z}), \tag{11}\] where we recover our final estimate with \(\hat{\mathbf{x}}=\mathbf{D}_{\mathbf{x}}\hat{\mathbf{z}}\). ## 4 Experimental Set Up Although the method described in this article should be generalizable to any forward model, we demonstrate on an ocean acoustic tomography problem. Sound speed variations in the ocean is essential for accurate predictions of sound propagation in the ocean and the various acoustic applications that rely on these predictions [6; 12; 21]. Typically, the ocean sound speed is estimated using empirical formulas based on the temperature, salinity and density. However, this would require a dense sampling both spatially and temporally throughout the whole volume of interest. Alternatively, the fundamental relationship between acoustic observations (e.g. arrival time measurements) can be leveraged to indirectly estimate the volumetric spatio-temporal variability of the ocean sound speed profiles (SSPs), bypassing the need to densely sample. Ocean acoustic tomography (OAT) aims to reconstruct the SSP variations (with respect to a known reference environment) within an ocean slice given the changes in acoustic measurements from the propagating acoustic waves between multiple pairs of sources and receivers [23]. The "forward model" that computes arrival time measurements between source-receiver pairs given a SSP is fundamentally non-linear and would require solving the wave equation. However, SSP fluctuations are typically small compared to baseline values (typically \(<1\%\)). Modeling assumptions can be made to simplify (e.g. ray-tracing methods). Classical OAT methods also further simplify by linearizing the relationship between SSP perturbations and the measured variations in arrival times of stable "paths" propagating between source and receiver pairs, providing a more numerically tractable solution (inverting a linearized model) [3; 23; 24; 31]. We perform our experiments on a high fidelity month long simulation of the Gulf of Mexico as seen in Figure 2[16; 17]. We restrict our experiments to 10 2D range dependent slices within the data cube (Figure 1(b)). The first 1000 time samples are used for training, the next 200 for validation and the remaining 239 for testing. This particular train/test/split in time was selected to mimic existing conditions (i.e. collect data for a set amount of time to train models and then deploy on future samples). Note that this creates a slightly more difficult problem due to the temporally changing dynamics in the test/validation set not present in the training set. We hope that mixing examples from different physical locations will help mitigate these issues. We construct a forward model consisting of 20 sources and receivers placed approximately every 50 m in depth and 5 km apart in range as shown in Figure 3. 
To counter the differing bathymetry between the 10 slices, we restrict ourselves to the upper 1000 m portion of the ocean (where a majority of the SSP variation lies as shown in Figure 1(c)) and consider only direct or surface bounce arrival time paths. The 2D ssp is discretized into a range by depth grid of \(11\times 231\). Thus, the linearized forward models (LFM) are of dimension \(800\times 2541\), yielding an ill-posed matrix for inversion. We construct our proposed model with 10 reference SSPs: the last available SSP in the trainset (time 1000) for each of the 10 slices. Once trained, the weights are fixed and the ssp \(\mathbf{x}\) is solved for iteratively given some observations \(\mathbf{y}\). We perform the optim Figure 3: An example OAT forward model set up with 20 sources and 20 receivers. The direct and surface bounce paths are denoted in blue and red respectively. The average SSP influencing the arrival times at the receivers can be seen on the right. Figure 2: An overview of data from a month long simulation of the (a) Gulf of Mexico. (b) Example 2D slices of the sound speed profile (top) and its deviation from the mean (bottom). The 10 slices used for experiments are separated by white lines. (c) The average SSP as a function of depth. Note that most of the fluctuations occur near the surface. subspace and with \(\ell_{2}\) as well as Sobolev regularization. All optimized models are given a budget of 1000 gradient descent iterations with a learning rate of 50. ## 5 Results We compare our proposed model against three baselines. First we compare ourselves against the pseudo-inverse performed in the PCA space and using Tikhonov Regularization as proposed in [13], hereby referred to as "Tik". We select the last available SSP in the train set (time 1000) for each respective slice as the reference point for linearization when evaluating on the test set. We perform PCA on the first 1000 SSPs for each respective slice to construct the basis. Next, we compare against using the linearized forward model (LFM) in an iterative method using the same regularization as the proposed model, but with the linearization around a single reference as the forward model. Similar to the Tik method, we linearize around the last available SSP in the training set for each respective slice. Finally, we compare ourselves with a simple multi-layer perceptron (MLP) trained to emulate the forward model. The MLP does not incorporate any physical information and is simply trained to map SSPs to arrival times in a black box manner. All iterative methods are initialized with: the average SSP, the Tik solution and the LFM solution. The full results are summarized in Table 1. When provided no initialization, the proposed method performs the best at 0.343 m/s RMSE. MLP achieves the second best average performance at 0.398 m/s RMSE, where the loss in performance is likely due to its inability to emulate the forward Figure 4: Example visualizations of predicted sound speed profiles for each method. Tik manages to recover general structure, but often gets the magnitude wrong. LFM introduces many artifacts during the optimization process due to the ill-posed matrix. MLP only captures coarse features but fails to capture subtler details like PETAL (ours). model as accurately as the proposed model. Despite using the same forward model set up, LFM (optimized with different regularization) is able to outperform Tik. 
We hypothesize that this is due to the basis computed by applying PCA to the training set failing to generalize to the dynamics of the test set. The nature of this method allows us to provide initializations for refinement. Note that LFM is convex and thus a globally optimal least square solution exists. Rather than computing and applying the direct pseudo-inverse, we choose to use it within the same framework for optimizing our trained surrogate forward models for a fixed budget. Thus LFM initialized with LFM would be the equivalent of allowing the model to further optimize. In this instance, doing so actually caused the predictions to perform worse on average, increasing from 0.608 to 0.625, suggesting that the linear forward model approximation begins to break down if optimized for too long. On the other hand, MLP manages to refine the solution for both the LFM as well as the Tik initialization, suggesting that the learned non-linear forward model provides some advantages. However, it gets easily trapped in local minimas and is heavily reliant on having a good initialization, as shown by the LFM initialization where it fails to out-perform average initialization. Our proposed model is more robust to initialization, though was still able to achieve better results when provided with a slightly better initialization, dropping the RMSE to 0.342 and 0.339 for LFM and Tik initializations respectively. Visualizations of the recovered SSPs can be seen in Figure 4. All models are able to recover the background levels and other coarse features. Tik is able to recover some finer details, but often fails to achieve the correct magnitudes. LFM introduces many artifacts due to the ill-posed forward matrix, an issue mitigated by using a subspace in Tik and PETAL. MLP is able to capture some of the finer details, but also suffers from artifacts introduced during the non-convex optimization. ## 6 Ablations In this section, we explore which design decisions contribute to the overall success of PETAL. More specifically, we try to determine whether the learned weighted averaging, learned transformation of the predicted arrival times or the learned encoding/decoding of the SSP are all necessary components. The baseline will be the proposed model which incorporates all three design choices. We compare this against (1) the average of all reference linearizations (A-LFM), (2) weighted average of linearizations (WA-LFM), (3) weighted average combined optimized in the learned subspace (WA-LFM + Dec), and (4) a learned weighted average network (WAN). A full summary as well as the average RMSE on a test set can be found in Table 2. This study shows that each component is essential in the success of the proposed model. A-LFM performs the worst overall, though still noticeably better than the single LFM presented in Table 1, suggesting that incorporating more references is crucial for representing the varied dynamics present in the test set. 
Simply adjusting the weights of the average (WA-LFM) as learned by a fully trained \begin{table} \begin{tabular}{c c c c} \hline \hline Model & Weighted Avg & AT Transform & SSP Subspace & RMSE (m/s) \\ \hline PETAL (ours) & & & & **0.343** \\ A-LFM & & & & 0.585 \\ WA-LFM & & & & 0.577 \\ WA-LFM + Dec & & & & 0.508 \\ WAN & & & & 0.372 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study of PETAL \begin{table} \begin{tabular}{c c c c} \hline \hline Model & Avg Init & LFM Init & Tik Init \\ \hline Tik & 0.760 & — & — \\ LFM & 0.608 & 0.625 & 0.602 \\ MLP & 0.398 & 0.402 & 0.398 \\ PETAL (ours) & **0.343** & **0.342** & **0.339** \\ \hline \hline \end{tabular} \end{table} Table 1: RMSE (m/s) of inversion with various initializations. PETAL model already leads to an improvement in performance, dropping the RMSE from 0.585 to 0.577. Incorporating the learned SSP subspace improves even further, dropping the RMSE to 0.508. Learning an AT transform allows the surrogate model to better approximate the true model, leading to a more dramatic improvement in RMSE at 0.372 for WAN. And finally, incorporating all three components leads to the best performance overall at an RMSE of 0.343 ## 7 Conclusions In this study, we propose a novel architecture to embed physics in learned surrogate models by incorporating linearizations around a set of reference points. We also include an encoder-decoder structure to learn a subspace to optimize in when solving inverse problems, mitigating issues arising from the non-convex optimization. We demonstrate the efficacy of our approach on an Ocean Acoustic Tomography example, out-performing classical methods as well as learned forward models that do not incorporate any known physics. We validate the necessity of each component in an ablation study, confirming that each contribute to the success of the proposed model. ## Acknowledgments and Disclosure of Funding This project was supported by the Office of Naval Research Task Force Ocean under Grant No. N00014-19-1-2639. We would also like to thank Dr. Guangpeng Liu and Dr. Annalisa Bracco (EAS, GaTech) for sharing Gulf of Mexico simulations of the SSPs.
2306.08057
Symbolic Regression via Control Variable Genetic Programming
Learning symbolic expressions directly from experiment data is a vital step in AI-driven scientific discovery. Nevertheless, state-of-the-art approaches are limited to learning simple expressions. Regressing expressions involving many independent variables still remains out of reach. Motivated by the control variable experiments widely utilized in science, we propose Control Variable Genetic Programming (CVGP) for symbolic regression over many independent variables. CVGP expedites symbolic expression discovery via customized experiment design, rather than learning from a fixed dataset collected a priori. CVGP starts by fitting simple expressions involving a small set of independent variables using genetic programming, under controlled experiments where other variables are held as constants. It then extends expressions learned in previous generations by adding new independent variables, using new control variable experiments in which these variables are allowed to vary. Theoretically, we show that CVGP, as an incremental building approach, can yield an exponential reduction in the search space when learning a class of expressions. Experimentally, CVGP outperforms several baselines in learning symbolic expressions involving multiple independent variables.
Nan Jiang, Yexiang Xue
2023-05-25T04:11:14Z
http://arxiv.org/abs/2306.08057v1
# Symbolic Regression via Control Variable Genetic Programming ###### Abstract Learning symbolic expressions directly from experiment data is a vital step in AI-driven scientific discovery. Nevertheless, state-of-the-art approaches are limited to learning simple expressions. Regressing expressions involving many independent variables still remain out of reach. Motivated by the control variable experiments widely utilized in science, we propose **C**ontrol **V**ariable **G**enetic **P**rogramming (CVGP) for symbolic regression over many independent variables. CVGP expedites symbolic expression discovery via customized experiment design, rather than learning from a fixed dataset collected a priori. CVGP starts by fitting simple expressions involving a small set of independent variables using genetic programming, under controlled experiments where other variables are held as constants. It then extends expressions learned in previous generations by adding new independent variables, using new control variable experiments in which these variables are allowed to vary. Theoretically, we show CVGP as an incremental building approach can yield an exponential reduction in the search space when learning a class of expressions. Experimentally, CVGP outperforms several baselines in learning symbolic expressions involving multiple independent variables. ## 1 Introduction Discovering scientific laws automatically from experiment data has been a grand goal of Artificial Intelligence (AI). Its success will greatly accelerate the pace of scientific discovery. Symbolic regression, _i.e._, learning symbolic expressions from data, consists of a vital step in realizing this grand goal. Recently, exciting progress [1, 2, 3, 4, 5, 6, 7, 8] has been made in this domain, especially with the aid of deep neural networks. Despite great achievements, state-of-the-art approaches are limited to learning relatively simple expressions, often involving a small set of variables. Regressing symbolic expressions involving many independent variables still remains out of reach of current approaches. The difficulty mainly lies in the exponentially large search space of symbolic expressions. Our work attacks this major gap of symbolic regression, leveraging control variable experimentation - a classic procedure widely implemented in the science community [9, 10]. In the analysis of complex scientific phenomena involving many contributing factors, control variable experiments are conducted where a set of factors are held constant (_i.e._, controlled variables), and the dependence between the output variable and the remaining input variables is studied [11, 12]. The result is a reduced-form expression that models the dependence only among the output and the non-controlled variables. Once the reduced-form equation is validated, scientists introduce more variables into play by freeing a few controlled variables in previous experiments. The new goal is to extend the previous equation to a general one including the newly introduced variables. This process continues until all independent variables are introduced into the model. Our proposed **C**ontrol **V**ariable **G**enetic **P**rogramming (CVGP) approach implements the aforementioned scientific discovery process using Genetic Programming (GP) for symbolic regression over many independent variables. The key insight of CVGP is to learn from _a customized set of control variable experiments_; in other words, the experiment data collection adapts to the learning process. 
This is in contrast to the current learning paradigm of most symbolic regression approaches, where they learn from a fixed dataset collected a priori. In CVGP, first, we hold all independent variables except for one as constants and learn a symbolic expression that maps the single variable to the dependent variable using GP. GP maintains a pool of candidate equations and improves the fitness of these equations via mating, mutating, and selection over several generations. Mapping the dependence of one independent variable is easy. Hence GP can usually recover the ground-truth reduced-form equation. Then, CVGP frees one independent variable at a time. In each iteration, GP is used to modify the equations learned in previous generations to incorporate the new independent variable. This step is again conducted via mating, mutating, and selection. Such a procedure repeats until all the independent variables have been incorporated into the symbolic expression. After discovering CVGP independently, the authors learned in private communications a line of research work [13, 14, 15, 16, 17, 18] that also implemented the human scientific discovery process using AI, pioneered by the BACON systems developed by Langley, P. in 1978-1981 [13, 14, 15]. While BACON's discovery was driven by rule-based engines and our CVGP uses modern machine learning approaches such as genetic programming, indeed both approaches share a common vision - the _integration of experiment design and model learning_ can further expedite scientific discovery. Theoretically, we show CVGP as an incremental builder can reduce the exponential-sized search space for candidate expressions into a polynomial one when fitting a class of symbolic expressions. Experimentally, we show CVGP outperforms a number of state-of-the-art approaches on symbolic regression over multiple independent variables. Our contributions can be summarized as: * We propose CVGP, an incremental builder for symbolic regression over many independent variables. CVGP fits increasingly more complex equations via conducting control variable experiments with fewer and fewer controlled variables. * Theoretically, we show such an incremental builder as CVGP can reduce exponential-sized search spaces for symbolic regression to polynomial ones when searching for a class of symbolic expressions. * Empirically, we demonstrate CVGP outperforms state-of-the-art symbolic regression approaches in discovering multi-variable equations from data. ## 2 Preliminaries **Symbolic Expression.** A symbolic expression \(\phi\) is expressed as variables and constants connected by a set of operators. Variables are allowed to change across different contexts, while constants remain the same. Each operator has a predetermined arity, _i.e._, the number of operands taken by the operator. Each operand of an operator is either a variable, a constant, or a self-contained symbolic expression. A symbolic expression can also be drawn as a tree, where variables and constants reside in leaves, and operators reside in inner nodes. See Figure 1(a) for an example. In this paper, we deal with symbolic expressions involving real numbers. The semantic meaning of a symbolic expression follows its standard definition in arithmetics and thus is omitted. 
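For concreteness, a minimal Python sketch of this tree representation and its evaluation; the operator set and class names are illustrative:

```
import math

class Node:
    """Expression-tree node: an operator with children, or a leaf holding a constant or a variable index."""
    OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b, "sin": lambda a: math.sin(a)}

    def __init__(self, op=None, children=(), const=None, var=None):
        self.op, self.children, self.const, self.var = op, list(children), const, var

    def evaluate(self, x):
        if self.const is not None:
            return self.const
        if self.var is not None:
            return x[self.var]
        return Node.OPS[self.op](*[c.evaluate(x) for c in self.children])

# The tree of Figure 1(a), phi = x1*x3 - x2*x4 (variables are 0-indexed here):
phi = Node("sub", [Node("mul", [Node(var=0), Node(var=2)]),
                   Node("mul", [Node(var=1), Node(var=3)])])
print(phi.evaluate([0.5, 0.5, 0.1, 0.7]))         # 0.5*0.1 - 0.5*0.7 = -0.3
```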
**Symbolic Regression.** Given a dataset \(\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) and a loss function \(\ell(\cdot,\cdot)\), where \(\mathbf{x}_{i}\in\mathbb{R}^{m}\) and \(y_{i}\in\mathbb{R}\), the objective of symbolic regression (SR) is to search for the optimal symbolic expression \(\phi^{*}\) within the space of all candidate expressions \(\Pi\) that minimizes the average loss: \[\phi^{*}=\arg\min_{\phi\in\Pi}\ \frac{1}{n}\sum_{i=1}^{n}\ell(\phi(\mathbf{x}_ {i}),y_{i}), \tag{1}\] in addition to regularization terms. Symbolic regression is challenging and is shown to be NP-hard [19], due to the exponentially large space of candidate symbolic expressions. **Genetic Programming for Symbolic Regression.** Genetic Programming (GP) has been a popular method to solve symbolic regression. Recently, a few other approaches based on neural networks surpassed the performance of GP in symbolic regression. We leave the discussions of these methods to the related work section. The high-level idea of GP is to maintain a pool of candidate symbolic expressions. In each generation, candidate expressions are _mutated_ with probability \(P_{mu}\) and _mated_ with probability \(P_{ma}\). Then in the _selection_ step, those with the highest fitness scores, measured by how each expression predicts the output from the input, are selected as the candidates for the next generation, together with a few randomly chosen ones to maintain diversity. After several generations, expressions with high fitness scores, _i.e._, those fit data well survive in the pool of candidate solutions. The best expressions found in all generations are recorded as hall-of-fame solutions. ## 3 Control Variable Genetic Programming In this section, we present our control variable genetic programming algorithm. Before we dive into the algorithm description, we first need to study what are the outcomes of a control variable experiment and what conclusions we can draw on the symbolic regression expression by observing such outcomes. ### Control Variable Experiment A control variable experiment \(\texttt{CVExp}(\phi,\mathbf{v}_{c},\mathbf{v}_{f},\{T_{k}\}_{k=1}^{K})\) consists of the trial symbolic expression \(\phi\), a set of controlled variables \(\mathbf{v}_{c}\), a set of free variables \(\mathbf{v}_{f}\), and \(K\) trial experiments \(T_{1},\ldots,T_{K}\). The expression \(\phi\) may have zero or multiple _open constants_. The value of an open constant is determined by fitting the equation to the training data. **One Trial in a Control Variable Experiment.** A single trial of a control variable experiment \(T_{k}\) fits the symbolic expression \(\phi\) with a batch of data. To avoid abusing notations, we also use \(T_{k}\) to denote the batch of data. In the generated data \(T_{k}\), every controlled variable is fixed to the same value while the free variables are set randomly. We assume that the values of the dependent variables in a batch are (noisy observations) of the ground-truth expressions with the values of independent variables set in the batch. In science, this step is achieved by conducting real-world experiments, _i.e._, controlling independent variables, and performing measurements on the dependent variable. For example, Fig. 1(c,d) demonstrates two trials of a control Figure 1: An example of two trials of a control variable experiment. **(a)** The data of the experiment is generated by the ground-truth expression \(\phi=x_{1}x_{3}-x_{2}x_{4}\). 
**(b)** If we control \(\mathbf{v}_{c}=\{x_{2},x_{3},x_{4}\}\) and only allow \(\mathbf{v}_{f}=\{x_{1}\}\) to vary, it _looks like_ the data are generated from the reduced-form equation \(\phi^{\prime}=C_{1}x_{1}-C_{2}\). **(c, d)** The generated data in two trials of the control variable experiments. The controlled variables are fixed within each trial but vary across trials. variable experiment in which variable \(x_{2},x_{3},x_{4}\) are controlled, _i.e._, \(\mathbf{v}_{c}=\{x_{2},x_{3},x_{4}\}\). They are fixed to one value in trial \(T_{1}\) (in Fig. 1(c)) and another value in trial \(T_{2}\) (in Fig. 1(d)). \(x_{1}\) is the only free variable, _i.e._, \(\mathbf{v}_{f}=\{x_{1}\}\). The value of \(x_{1}\) is varied in trials \(T_{1},T_{2}\). **Reduced-form Expression in a Control Variable Setting.** We assume there is a ground-truth symbolic expression that produces the experiment data. In other words, the observed output is the execution of the ground-truth expression from the input, possibly in addition to some noise. In control variable experiments, because the values of controlled variables are fixed in each trial, what we observe is the ground-truth expression in its _reduced form_, where sub-expressions involving only controlled variables are replaced with constants. Fig. 1(b) provides an example of the reduced form expression. Assume the data is generated from the ground-truth expression in (a): \(\phi=x_{1}x_{3}-x_{2}x_{4}\). When we control the values of variable in \(\mathbf{v}_{c}=\{x_{2},x_{3},x_{4}\}\), the data _looks like_ they are generated from the _reduced_ expression: \(\phi^{\prime}=C_{1}x_{1}-C_{2}\). Here \(C_{1}\) replaces the controlled variable \(x_{3}\), and \(C_{2}\) replaces a sub-expression \(x_{2}x_{4}\) in the ground-truth expression. We can see both \(C_{1}\) and \(C_{2}\) hold constant values in each trial. However, their values vary across trials because the values of controlled variables change. Back to the example, in trial \(T_{1}\), when \(x_{2}\), \(x_{3}\), and \(x_{4}\) are fixed to 0.5, 0.1, 0.7, \(C_{1}\) takes the value of \(x_{3}\), _i.e._, 0.1, while \(C_{2}\) takes the value of \(x_{2}x_{4}\), _i.e._, 0.35. In trial \(T_{2}\), \(C_{1}=0.8\) and \(C_{2}=0.06\). We call constants which represent sub-expressions involving controlled variables in the ground-truth expression _summary constants_, and refer to constants in the ground-truth expression _stand-alone constants_. For example, \(C_{1}\) and \(C_{2}\) in Fig. 1(b) are both summary constants. Notice that the types of constants are _unknown_ in the process of fitting an expression to control variable experiment data. However, the best-fitted values of these constants across several trials reveal important information: a constant is probably a summary constant if its fitted values vary across trials, while a constant that remains the same value across trials is probably stand-alone. **Outcome of a Single Trial.** The outcomes of the \(k\)-th trial are two-fold: (1) the values of the open constants which best fit the given batch of data. We denote these values as vector \(\mathbf{c}_{k}\). (2) the fitness score measuring the goodness-of-fit, denoted as \(o_{k}\). One typical fitness score is the Negative normalized root mean squared error (NRMSE). In the example in Fig. 1, if we fit the reduced expression in (b) to data in trial \(T_{1}\), the best-fitted values are \(\mathbf{c}_{1}=(C_{1}=0.1,C_{2}=0.35)\). 
For trial \(T_{2}\), the best-fitted values are \(\mathbf{c}_{2}=(C_{1}=0.8,C_{2}=0.06)\). In both trials, the fitness scores (_i.e._, the NRMSE value) are \(0\), indicating no errors. **Outcome of Multiple trials.** We let the values of control variables vary across different trials. This corresponds to changing experimental conditions in real science experiments. The outcomes of an experiment with \(K\) trials are: (1) \(\phi.\mathbf{o}=(o_{1},\ldots,o_{K})\), where each \(o_{k}\) is the fitness score of trial \(k\) and (2) \(\phi.\mathbf{c}=(\mathbf{c}_{1},\ldots,\mathbf{c}_{K})\), the best-fitted values to open constants across trials. Critical information is obtained by examining the outcomes of a multi-trial control variable experiment. First, consistent close-to-zero fitness scores \(o_{1},\ldots,o_{K}\) suggest the fitted expression is close to the ground-truth equation in the reduced form. Second, given the equation is close to the ground truth, an open constant having similar best-fitted values across \(K\) trials \(\mathbf{c}_{1},\ldots,\mathbf{c}_{K}\) suggests the constant is stand-alone. Otherwise, it is probably a summary constant. ### Control Variable Genetic Programming The high-level idea of the CVGP algorithm is to build more complex symbolic expressions involving more and more variables based on control variable experiments with fewer and fewer controlled variables. To fit an expression of \(m\) variables, initially, we control the values of all \(m-1\) variables and allow only one variable to vary. Using Genetic Programming (GP), we find a pool of expressions \(\{\phi_{1,1},\ldots,\phi_{1,M}\}\) which best fit the data from this controlled experiment. Notice \(\phi_{1,1},\ldots,\phi_{1,M}\) are restricted to contain the only one free variable. This fact renders fitting them a lot easier than fitting the expressions involving all \(m\) variables. Next, for each \(\phi_{1,l}\), we examine: 1. if the errors of the fitting are consistently small across all trials. A small error implies \(\phi_{1,l}\) is close to the ground-truth formula reduced to the one free variable. We hence freeze all operands of \(\phi_{1,l}\) in this case. Freezing means GP in later steps cannot change these operands. 2. In the case of a small fitting error, we also inspect the best-fitted values of each open constant in \(\phi_{1,l}\) across different trials. The constant probably is a summary constant if its values vary across trials. In other words, these constants represent sub-expressions involving the controlled variables. We thus mark these constants as expandable for later steps. The remaining constants are probably stand-alone. Therefore we also freeze them. After the first step, CVGP adds a second free variable and starts fitting \(\{\phi_{2,1},\ldots,\phi_{2,M}\}\) using the data from control variable experiments involving the two free variables. Similar to the previous step, all \(\phi_{2,l}\) are restricted to only contain the two free variables. Moreover, they can only be mated or mutated by GP from the first generation \(\{\phi_{1,1},\ldots,\phi_{1,M}\}\). The mutation can only happen on non-frozen nodes. After GP, a similar inspection is conducted for every equation in the GP pool, and corresponding variables and/or operands are frozen. This process continues to involve more and more variables. Eventually, the expressions in the GP pool consider all \(m\) variables. The whole procedure of CVGP is shown in Algorithm 1. 
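Before the pseudo-code, a small self-contained sketch of the freezing decision in steps (1)-(2) above; the error and variance thresholds are illustrative assumptions:

```
import numpy as np

def freeze_decision(err_per_trial, const_values_per_trial, err_tol=1e-6, var_tol=1e-3):
    """Given one candidate expression's per-trial fitness errors and best-fitted constant
    values (shape: K trials x #constants), decide whether to freeze its structure and
    which constants remain expandable (i.e. are probably summary constants)."""
    consistent_fit = max(err_per_trial) < err_tol
    expandable = []
    if consistent_fit:
        values = np.asarray(const_values_per_trial)
        # Constants whose fitted values vary across trials summarize sub-expressions of
        # the controlled variables; keep them open so later stages can expand them.
        expandable = [j for j in range(values.shape[1]) if values[:, j].std() > var_tol]
    return consistent_fit, expandable

# Reduced form C1*x1 - C2 of Figure 1(b): both trials fit exactly, and both fitted
# constants change across trials, so the structure is frozen and C1, C2 stay expandable.
print(freeze_decision([0.0, 0.0], [[0.1, 0.35], [0.8, 0.06]]))   # (True, [0, 1])
```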
Here, \(x_{1},\ldots,x_{m}\) are moved from the controlled to free variables in numerical order. We agree other orders may boost its performance even further. However, we leave the exploration of this direction as future work. When a new variable becomes free, the control variable experiment CVExp needs to be repeated for every equation \(\phi\) in the GP pool \(\mathcal{P}_{gp}\) (Line 5-9 in Algorithm 1). This is because the fitness scores and the fitted variable values will both change when the set of controlled variables is updated. Then function GP is called. GP is a minimally modified genetic programming algorithm for symbolic regression whose pseudo-code is in Algorithm 2. The only differences are that it uses data from control variable experiments and the mutation operation at step \(i\) only allows to use all the operands, the constant node, and variable \(x_{i}\) at non-frozen nodes. Finally, in Lines 12-14 of Algorithm 1, FreezeEquation is called for every equation in the GP pool. The high-level idea of freezing is discussed above. \(\mathcal{H}\) is returned as the set of "hall of fame" expressions. ``` 0: Initial GP Pool \(\mathcal{P}_{gp}\); Data oracle \(\mathcal{D}^{o}\); #trials \(K\); GP pool size \(M\); #generations #Gen; #expressions in hall-of-fame set #Hof; mutate probability \(P_{mu}\); mate probability \(P_{ma}\); mutation node library \(O_{p}\). 1:\(\mathcal{H}\leftarrow\texttt{TopK}(\mathcal{P}_{gp},K=\texttt{\#Hof})\); 2:for\(j\gets 1\)to #Gendo 3:\(\mathcal{P}_{new}\leftarrow\emptyset\); 4:for\(\phi\in\mathcal{P}_{gp}\)do 5:if with probability \(P_{mu}\)then\(\triangleright\) Mutation 6:\(\phi\leftarrow\texttt{Mutate}(\phi,O_{p})\); 7:\(\{T_{k}\}_{k=1}^{K}\leftarrow\texttt{GenData}(\mathcal{D}^{o})\); 8:\(\phi.\mathbf{o},\phi.\mathbf{c}\leftarrow\texttt{CVExp}(\phi,\mathbf{v}_{c}, \mathbf{v}_{f},\{T_{k}\}_{k=1}^{K})\); 9:\(\mathcal{P}_{new}\leftarrow\mathcal{P}_{new}\cup\{\phi\}\); 10:\(\mathcal{P}_{gp}\leftarrow\mathcal{P}_{new}\); \(\mathcal{P}_{new}\leftarrow\emptyset\); 11:for\(\phi_{l},\phi_{l+1}\in\mathcal{P}_{gp}\)do 12:if with probability \(P_{ma}\)then\(\triangleright\) Mating 13:\(\phi_{l},\phi_{l+1}\leftarrow\texttt{Mate}(\phi_{l},\phi_{l+1})\); 14:\(\{T_{k}\}_{k=1}^{K}\leftarrow\texttt{genData}(\mathcal{D}^{o})\); 15:\(\phi_{l}.\mathbf{o},\phi_{l}.\mathbf{c}\leftarrow\texttt{CVExp}(\phi_{l}, \mathbf{v}_{c},\mathbf{v}_{f},\{T_{k}\}_{k=1}^{K})\). 16:\(\phi_{l+1}.\mathbf{o},\phi_{l+1}.\mathbf{c}\leftarrow\texttt{CVExp}(\phi_{l+1}, \mathbf{v}_{c},\mathbf{v}_{f},\{T_{k}\}_{k=1}^{K})\). 17:\(\mathcal{P}_{new}\leftarrow\mathcal{P}_{new}\cup\{\phi_{l},\phi_{l+1}\}\); 18:\(\mathcal{H}\leftarrow\texttt{TopK}(\mathcal{P}_{new}\cup\mathcal{H},K=\texttt{ \#Hof})\); \(\triangleright\) Update the hall of fame set. 19:\(\mathcal{P}_{gp}\leftarrow\texttt{selection}(\mathcal{P}_{new},M)\); 20:return GP pool and hall-of-fame \(\mathcal{P}_{gp},\mathcal{H}\). ``` **Algorithm 2**\(\texttt{GP}(\mathcal{P}_{gp},\mathcal{D}^{o},K,M,\texttt{\#Gen},\texttt{\#Hof},P_{ mu},P_{ma},O_{p})\) Figure 2 shows the high-level idea of fitting an equation using CVGP. Here the process has four stages, each stage with a decreased number of controlled variables. The trial data in each stage is shown at the bottom Figure 2: Running example of Algorithm 1. **(a)** Initially, a reduced-form equation \(\phi^{\prime}=C_{1}x_{1}-C_{2}\) is found via fitting control variable data in which \(x_{2},x_{3},x_{4}\) are held as constants and only \(x_{1}\) is allowed to vary. 
Two leaves nodes \(C_{1},C_{2}\) are as summary constants (colored blue). **(b)** This equation is expanded to \(C_{3}x_{1}-C_{4}x_{2}\) in the second stage via fitting the data in which only \(x_{3},x_{4}\) are held as constants. **(c,d)** This process continues until the ground-truth equation \(\phi=x_{1}x_{3}-x_{2}x_{4}\) is found. The data generated for control variable experiment trials in each stage are shown at the bottom. and the best expression found is shown at the top. The expandable constants are bold and blue. The readers can see how the fitted equations grow into the final ground-truth equation, with one variable added at a time. **The Availability of a Data Oracle.** A crucial assumption behind the success of CVGP is the availability of a data oracle \(\mathcal{D}^{o}\) that returns a (noisy) observation of the dependent variable with input variables in \(\mathbf{v}_{c}\) controlled and \(\mathbf{v}_{f}\) free. This differs from the classical setting of symbolic regression, where a dataset is obtained before learning [20, 21]. Such a data oracle represents conducting control variable experiments in the real world, which can be expensive. However, we argue that the integration of experiment design in the discovery of scientific knowledge is indeed the main driver of the successes of CVGP. This idea has received tremendous success in early works [13, 14, 15] but unfortunately has been largely forgotten in today's symbolic regression community. Our work does not intend to show the superiority of one approach. Instead, we would like to point out that carefully designed experiments can improve any method, and GP is used as an example. We acknowledge that fully controlled experiments may be difficult in some scenarios. In cases where it is difficult to obtain such a data oracle, we propose to leverage counterfactual reasoning to construct datasets corresponding to control variable trials by sampling from existing training data. We leave such effort as future work. ### Theoretical Analysis We demonstrate in this section that the idea of control variable experiments may bring an exponential reduction in the search space for particular classes of symbolic expressions. To see this, we assume that the learning algorithm follows a search order from simple to complex symbolic expressions. **Definition 3.1**.: _The search space of symbolic expression trees of \(l\) nodes \(S(l)\) is made up of all symbolic expression trees involving at most \(l\) nodes._ **Lemma 3.2**.: _Assuming that all operands are binary, \(o\) is the number of operands, and \(m\) is the number of input variables. The size of the search space of symbolic expression trees of \(l\) nodes scales exponentially; more precisely at \(\mathcal{O}((4(m+1)o)^{\frac{l-1}{2}})\) and \(\Omega((4(m+1)o)^{\frac{l-1}{4}})\)._ Proof.: Because all operands are binary, a symbolic expression tree of \(l\) nodes has \(\frac{l+1}{2}\) leaves and \(\frac{l-1}{2}\) internal nodes. The number of binary trees of \(\frac{l-1}{2}\) internal nodes is given by the Catalan number \(C_{(l-1)/2}=\frac{l-1}{l+1}\binom{\binom{l-1}{2}}{\frac{(l-1)}{2}}\), which asymptotically scales at \(\frac{2^{l-1}}{\frac{(l-1)}{2}^{3/2}\sqrt{\pi}}\). A symbolic expression replaces each internal node of a binary tree with an operand and replaces each leaf with either a constant or one of the input variables. 
Because there are \(o\) operands and \(m\) input variables, the total number of different symbolic expression trees involving \(l\) nodes is given by: \[A(l)=C_{(l-1)/2}(m+1)^{\frac{l+1}{2}}o^{\frac{l-1}{2}}\sim\frac{(4(m+1)o)^{ \frac{l-1}{2}}}{\left(\frac{l-1}{2}\right)^{3/2}}. \tag{2}\] Hence, the total number of trees up to \(l\) nodes is: \[S(l)=\sum_{i=0}^{(l-1)/2}A(2i+1)\sim\sum_{i=0}^{(l-1)/2}\frac{(4(m+1)o)^{i}}{ i^{3/2}}. \tag{3}\] When \(i\) is sufficiently large, \[(4(m+1)o)^{i/2}\leq\frac{(4(m+1)o)^{i}}{i^{3/2}}\leq(4(m+1)o)^{i}. \tag{4}\] Therefore, \(S(l)\leq\sum_{i=0}^{(l-1)/2}(4(m+1)o)^{i}\in\mathcal{O}((4(m+1)o)^{(l-1)/2})\), and \(S(l)\geq(4(m+1)o)^{(l-1)/2}/((l-1)/2)^{3/2}\geq(4(m+1)o)^{(l-1)/4}\), which implies \(S(l)\in\Omega((4(m+1)o)^{\frac{l-1}{4}})\). The proof of Lemma 3.2 mainly involves counting binary trees. For our purposes, it is sufficient to know that the size is exponential in \(l\). **Definition 3.3** (Simple to complex search order).: A symbolic regression algorithm follows a simple to complex search order if it expands its search space from short to long symbolic expressions; _i.e._, first search for the best symbolic expressions in \(S(1)\), then in \(S(2)\setminus S(1)\), etc. It is difficult to quantify the search order of any symbolic regression algorithms. However, we believe the simple to complex order reflects the search procedures of a large class of symbolic regression algorithms, including our CVGP. In fact, [22] explicitly use regularizers to promote the search of simple and short expressions. Our CVGP follows the simple to complex search order approximately. Indeed, it is possible that genetic programming encounters more complex equations before their simpler counterparts. However, in general, the expressions are built from simple to complex equations by mating and mutating operations in genetic programming algorithms. **Proposition 3.4** (Exponential Reduction in the Search Space).: _There exists a symbolic expression \(\phi\) of \((4m-1)\) nodes, a normal symbolic regression algorithm following the simple to complex search order has to explore a search space whose size is exponential in \(m\) to find the expression, while CVGP following the simple to complex order only expands \(\mathcal{O}(m)\) constant-sized search spaces._ Proof.: Consider a dataset generated by the ground-truth symbolic expression made up of 2 operands (\(+,\times\)), \(2m\) input variables and \((4m-1)\) nodes: \[(x_{1}+x_{2})(x_{3}+x_{4})\ldots(x_{2m-1}+x_{2m}). \tag{5}\] To search for this symbolic regression, a normal algorithm following the simple to complex order needs to consider all expressions up to \((4m-1)\) nodes. According to Lemma 3.2, the normal algorithm has a search space of at least \(\Omega((16m+8)^{m-1/2})\), which is exponential in \(m\). On the other hand, in the first step of CVGP, \(x_{2},\ldots,x_{2m}\) are controlled and only \(x_{1}\) is free. In this case, the ground-truth equation in the reduced form is \[(x_{1}+C_{1})D_{1}, \tag{6}\] in which both \(C_{1}\) and \(D_{1}\) are summary constants. Here \(C_{1}\) represents \(x_{2}\) and \(D_{1}\) represents \((x_{3}+x_{4})\ldots(x_{2m-1}+x_{2m})\) in the control variable experiments. The reduced equation is quite simple under the controlled environment. CVGP should be able to find the ground-truth expression exploring search space \(S(5)\). Proving using induction. 
In step \(2i\)\((1\leq i\leq m)\), variables \(x_{2i+1},x_{2i+2},\ldots,x_{2m}\) are held as constants, and \(x_{1},\ldots,x_{2i}\) are allowed to vary. The ground-truth expression in the reduced form found in the previous \((2i-1)\)-th step is: \[(x_{1}+x_{2})\ldots(x_{2i-1}+C_{2i-1})D_{2i-1}. \tag{7}\] CVGP needs to extend this equation to be the ground-truth expression in the reduced form for the \(2i\)-th step, which is: \[(x_{1}+x_{2})\ldots(x_{2i-1}+x_{2i})D_{2i}. \tag{8}\] We can see the change is to replace the summary constant \(C_{2i-1}\) to \(x_{2i}\). Assume the data is noiseless and CVGP can confirm expression (7) is the ground-truth reduced-form expression for the previous step. This means all the operands and variables will be frozen by CVGP, and only \(C_{2i-1}\) and \(D_{2i-1}\) are allowed to be replaced by new expressions. Assume CVGP follows the simple to complex search order, it should find the ground-truth expression (8) by searching replacement expressions of lengths up to 1. Similarly, in step \(2i+1\), assume CVGP confirms the ground-truth expression in the reduced form in step \(2i\), CVGP also only needs to search in constant-sized spaces to find the new ground-truth expression. Overall, we can see only \(\mathcal{O}(m)\) searches in constant-sized spaces are required for CVGP to find the final ground-truth expression. ## 4 Related Work **Symbolic Regression.** Symbolic Regression is proven to be NP-hard [19], due to the search space of all possible symbolic expressions being exponential in the number of input variables. Early works in this domain are based on heuristic search [23, 24]. Generic programming turns out to be effective in searching for good candidates of symbolic expressions [25, 2, 7, 8]. Reinforcement learning-based methods propose a risk-seeking policy gradient to find the expressions [4, 6, 5]. Other works reduced the combinatorial search space by considering the composition of base functions, _e.g._ Fast function extraction [26] and elite bases regression [27]. In terms of the families of expressions, research efforts have been devoted to searching for polynomials with single or two variables [28], time series equations [29], and also equations in physics [25]. Multi-variable symbolic regression is more challenging because the search space increases exponentially with respect to the number of independent variables. Our CVGP is a tailored algorithm to solve multi-variable symbolic regression problems. **AI-driven Scientific Discovery.** Recently AI has been highlighted to enable scientific discoveries in diverse domains [30, 31]. Early work in this domain focuses on learning logic (symbolic) representations [32, 33]. Recently, learning Partial Differential Equations (PDEs) from data has also been studied extensively [34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. In this domain, a line of works develops robots that automatically refine the hypothesis space, some with human interactions [45, 16, 17]. These works are quite related to ours because they also actively probe the hypothesis spaces, albeit they are in biology and chemistry. **Active Learning and Reasoning.** Active learning considers querying data points actively to maximize the learning performance [46, 47]. Our approach is related to active learning because control variable experiments can be viewed as a way to actively collect data. However, besides active data collection, our CVGP builds simple to complex models, which is not in active learning. 
**Meta-reasoning - Thinking Fast and Slow**. The co-existence of fast and slow cognition systems marks an interesting side of human intelligence [48, 49, 50]. Our CVGP is motivated by this dual cognition process. In essence, we argue instead of entirely relying on the brute-force way of learning using big data and heavy computation (fast thinking), incrementally expanding from reduced-form equations to the full equation may result in better outcomes (slow thinking). **Causality.** Control variable experiments are closely related to the idea of intervention, which is commonly used to discover causal relationships [51, 52, 53, 54, 55]. However, we mainly use control variable experiments to accelerate symbolic regression, which still identifies correlations. ## 5 Experiments In this section, we demonstrate CVGP finds the symbolic expressions with the smallest Normalized Mean-Square Errors (NMSEs) among all 7 competing approaches on 21 noiseless benchmark datasets (in Table 1) and 20 noisy benchmark datasets (in Table 2). In the ablation studies, we show our CVGP is consistently better than the baselines when evaluated in different evaluation metrics, evaluating different quantiles of the NMSE metric, with different amount of Gaussian noise added to the data (Figure 3, more complete results in Figure 4 and 5 in the appendix). For simple datasets, our approach can recover the ground-truth symbolic expressions. Table 3 shows our CVGP has a higher rate of recovering the ground-truth expressions than baselines. or a unary operand on a variable, such as \(\sin(x_{1})\). \(c\) is the number of cross terms. They look like \(C_{1}x_{3}x_{4}\) or \(C_{2}\sin(x_{1})\texttt{inv}(x_{5})\), etc. Here \(C_{1},C_{2}\) are randomly generated constants. The tuples and operands listed in different tables and charts indicate how the ground-truth expressions are generated. For each dataset configuration, we repeat our experiments 10 times, each time with a randomly generated symbolic expression of the given configuration. For noiseless datasets, the output is exactly the evaluation of the ground-truth expression. For noisy datasets, the output is further perturbed by Gaussian noise of zero means and a given standard deviation. **Remarks on Public Available Datasets.** Most public datasets are black-box [56], containing randomly generated input and output pairs of an unknown symbolic equation. The point of our paper is to show customized collected control variable experiment data improves symbolic regression, and hence we cannot use these randomly generated data. In addition, most datasets are on equations of a small number of independent variables. We intentionally test on benchmark sets involving many variables to highlight our approach. **Evaluation.** In terms of the evaluation metric, the median (50%) and 75%-percentile of the NMSE across these 10 experiments are reported. We choose to report median values instead of mean due to outliers (see box plots). This is a common practice for combinatorial optimization problems. **Baselines.** We consider the following baselines based on evolutionary algorithms: 1) Genetic Programming (GP) [57]. 2) Eureqa [58]. We also consider a series of baselines using reinforcement learning: 3) Priority queue training (PQT) [59]. 4) Vanilla Policy Gradient (VPG) that uses the REINFORCE algorithm [60] to train the model. 5) Deep Symbolic Regression (DSR) [4]. 
6) Neural-Guided Genetic Programming \begin{table} \end{table} Table 2: Median (50%) and 75%-quantile NMSE values of the symbolic expressions found by all the algorithms on several _noisy_ benchmark datasets (Gaussian noise with zero mean and standard deviation 0.1 is added). Our CVGP finds symbolic expressions with the smallest NMSEs. Population Seeding (GPMeld) [5]. We leave the detailed descriptions of the configurations of our CVGP and baseline algorithms to the supplementary materials and an anonymized website detailing our implementation1 and only mention a few implementation notes here. We implemented GP and CVGP. They use a data oracle, which returns (noisy) observations of the ground-truth equation when queried with inputs. We cannot implement the same Oracle for other baselines because of code complexity and/or no available code. To ensure fairness, the sizes of the training datasets we use for those baselines are larger than the total number of data points accessed in the full execution of those algorithms. In other words, their access to data would have no difference if the same oracle has been implemented for them because it does not affect the executions whether the data is generated ahead of the execution or on the fly. The reported NMSE scores in all charts and tables are based on separately generated data that have never been used in training. The threshold to freeze operands in CVGP is if the MSE to fit a data batch is below 0.01. The threshold to freeze the value of a constant in CVGP is if the variance of best-fitted values of the constant across trials drops below 0.001. Footnote 1: [https://github.com/jiangnanhugo/cvgp](https://github.com/jiangnanhugo/cvgp) ### Experimental Analysis **Learning Result.** Our CVGP attains the smallest median (50%) and 75%-quantile NMSE values among all the baselines mentioned in Section 5.1, when evaluated on noiseless datasets (Table 1) and noisy datasets (Table 2). This shows our method can better handle multiple variables symbolic regression problems than the Figure 3: **(a-d) Box plots in different evaluation metrics of the expressions found by different algorithms on the noiseless dataset. (e-f) Box plots in NMSE values for the expressions found by CVGP and GP over benchmark datasets with different noise levels. Our CVGP is consistently the best regardless of the evaluation metrics and noise levels.** current best algorithms in this area. **Ablation Studies.** We use box plots in Figure 3(a-d) to show that the superiority of our CVGP generalizes to other quantiles beyond the 50% and 75%-quantile. We also show the performance is consistent under the variations of evaluation metrics in Figure 3(a-d), and noise levels in Figure 3(e-f). **Recovering Ground-truth Equations.** For relatively less challenging noiseless datasets (_i.e._, \((2,1,1)\) with various operand sets), our CVGP sometimes recovers ground-truth expressions. We evaluate the percentage that each algorithm successfully detects the ground-truth expressions on \(50\) randomly generated benchmark datasets. Table 3 shows that our CVGP algorithm has a higher chance to recover ground-truth expressions than the GP method. ## 6 Conclusion In this research, we propose Control Variable Genetic Programming (CVGP) for symbolic regression with many independent variables. This is beyond current state-of-the-art approaches mostly tested on equations with one or two variables. CVGP builds equations involving more and more independent variables via control variable experimentation. 
Theoretically, we show that CVGP, as an incremental building approach, can bring an exponential reduction in the search space when learning a class of expressions. In our experiments, CVGP finds the best-fitted expressions among 7 competing approaches on dozens of benchmark datasets.

## 7 Acknowledgments

This research was supported by NSF grants IIS-1850243, CCF-1918327.
2304.09530
SelfAct: Personalized Activity Recognition based on Self-Supervised and Active Learning
Supervised Deep Learning (DL) models are currently the leading approach for sensor-based Human Activity Recognition (HAR) on wearable and mobile devices. However, training them requires large amounts of labeled data whose collection is often time-consuming, expensive, and error-prone. At the same time, due to the intra- and inter-variability of activity execution, activity models should be personalized for each user. In this work, we propose SelfAct: a novel framework for HAR combining self-supervised and active learning to mitigate these problems. SelfAct leverages a large pool of unlabeled data collected from many users to pre-train through self-supervision a DL model, with the goal of learning a meaningful and efficient latent representation of sensor data. The resulting pre-trained model can be locally used by new users, which will fine-tune it thanks to a novel unsupervised active learning strategy. Our experiments on two publicly available HAR datasets demonstrate that SelfAct achieves results that are close to or even better than the ones of fully supervised approaches with a small number of active learning queries.
Luca Arrotta, Gabriele Civitarese, Samuele Valente, Claudio Bettini
2023-04-19T09:39:11Z
http://arxiv.org/abs/2304.09530v1
# _SelfAct_: Personalized Activity Recognition based on Self-Supervised and Active Learning ###### Abstract Supervised Deep Learning (DL) models are currently the leading approach for sensor-based Human Activity Recognition (HAR) on wearable and mobile devices. However, training them requires large amounts of labeled data whose collection is often time-consuming, expensive, and error-prone. At the same time, due to the intra- and inter-variability of activity execution, activity models should be personalized for each user. In this work, we propose _SelfAct_: a novel framework for HAR combining self-supervised and active learning to mitigate these problems. _SelfAct_ leverages a large pool of unlabeled data collected from many users to pre-train through self-supervision a DL model, with the goal of learning a meaningful and efficient latent representation of sensor data. The resulting pre-trained model can be locally used by new users, which will fine-tune it thanks to a novel unsupervised active learning strategy. Our experiments on two publicly available HAR datasets demonstrate that _SelfAct_ achieves results that are close to or even better than the ones of fully supervised approaches with a small number of active learning queries. sensor-based activity recognition, self-supervised learning, active learning ## I Introduction The recent research on sensor-based Human Activity Recognition (HAR) on mobile/wearable devices is dominated by solutions based on Deep Learning (DL) approaches [1]. Supervised learning is the most commonly proposed approach in this area since is capable of reaching significantly high recognition rates [2]. However, such approaches require a huge amount of labeled data to create models capable of generalizing. Indeed, the high intra- and inter-variability of activity execution is a significant challenge in this domain, and personalized models (i.e., models trained only with data from the target users) are the ones performing the best [3]. However, annotation is costly, time-consuming, intrusive, and thus often prohibitive. For this reason, several research groups are investigating solutions for HAR requiring limited labeled datasets [4]. Among these approaches, self-supervised learning methods are emerging [5, 6, 7]. The goal of self-supervised learning is to leverage large corpora of unlabeled data to learn to extract effective features from sensor data. While self-supervised approaches showed their effectiveness on NLP and computer vision tasks [8], their application to sensor-based HAR in real-world scenarios is still an open research question. One of the major drawbacks of existing approaches is that they assume that a small labeled dataset for fine-tuning is available. However, those works do not investigate how this dataset can be obtained in practice. In the literature, active learning has been widely investigated to obtain annotated data samples in HAR data scarcity scenarios [9, 4, 10]. However, those approaches assume that a pre-training labeled dataset is available to initialize the recognition model. Indeed, a query is triggered based on the confidence level of the classifier. Applying the concept of active learning to a self-supervised setting is not trivial, since it requires deciding when to trigger queries by analyzing an unlabeled data stream. In this work, we propose _SelfAct_: a novel framework bridging the gap between self-supervised learning and active learning. 
_SelfAct_ relies on self-supervised learning to learn, from a large number of users, an effective feature representation of sensor data. Then, each user who wants to use _SelfAct_ downloads this pre-trained model on personal mobile/wearable devices. After the first phase where the personal device accumulates unlabeled sensor data, a clustering algorithm on the embeddings generated by the pre-trained model is performed. After the accumulation phase, _SelfAct_ analyzes in real-time each data sample in the stream and matches it with the closest cluster. A novel unsupervised active learning strategy, based on cluster density, is adopted to decide whether to trigger a query or not. Labeled data samples obtained through active learning are finally used to fine-tune the pre-trained self-supervised model and personalize it on the specific user. The contributions of this work are the following: * We propose _SelfAct_: a framework for personalized sensor-based HAR on mobile/wearable devices based on self-supervised learning. * We design and integrate into _SelfAct_ a novel unsupervised active learning strategy to obtain labeled data samples. These samples are used to fine-tune a self-supervised model. * Our experiments on two public datasets show that _SelfAct_ achieves similar or even better recognition rates than fully supervised approaches with a very limited amount of active learning queries. ## II Related work Most of the existing works in the literature tackling sensor-based HAR on mobile/wearable devices are based on supervised Deep Learning (DL) models [1]. However, during their learning process, such models require large amounts of labeled data that are often prohibitive to acquire [11]. Moreover, activity models should be personalized for each user to capture her peculiarities in activity execution [11]. Several research groups are indeed working hard to mitigate the data scarcity problem, by proposing several categories of solutions. Some research efforts proposed solutions based on automatic data annotation. Some of these works assume the existence of previously annotated datasets to pre-train an activity model, that is used for annotation [12]. Other approaches require auxiliary data sources in the environment (e.g., acoustic data from microphones) to automatically annotate activity data [13]. Finally, other works proposed knowledge-based heuristic approaches to generate weak labels (e.g., combining step count and GPS data) [14]. However, the scalability of such an approach is questionable since the heuristic was evaluated considering data collected by monitoring a single user. Moreover, the proposed method can not be used for data annotation when multiple activities share similar patterns in terms of step count and GPS data (e.g., standing and sitting). Some research groups focused on solutions based on transfer learning [15, 16]. However, such methods rely on models that are pre-trained with significant amounts of labeled data available in a source domain and then fine-tuned in the target domain with a few annotated samples. Another category of solutions to mitigate the data scarcity problem is semi-supervised learning [17]. However, these approaches still assume the existence of a small labeled dataset to initialize an activity classifier, that is then incrementally updated with samples of the unlabeled data stream annotated through techniques like self-learning, co-learning, active learning [9, 18, 19], and label propagation [4]. 
Unsupervised approaches have also been proposed to tackle data scarcity. For instance, some research groups directly relied on clustering techniques to distinguish the different user's activities [20, 21, 22, 23]. However, they did not consider how to match clusters with the actual activity classes the user performs in realistic scenarios. More recently, self-supervised learning has been studied to efficiently learn (without supervision) meaningful features from sensor data for both mobile [24, 25, 26] and smart home [27] activity recognition. The majority of existing solutions assume the availability of a labeled dataset to fine-tune the self-supervised model and obtain the final activity classifier. A closely related work [28] uses a labeled validation dataset to match activities with clusters identified by k-means on self-supervised embeddings. Different from this work, _SelfAct_ does not assume the knowledge of the number of activities. Also, _SelfAct_ includes an unsupervised active learning strategy to select the samples representing each cluster to label only a few data samples through active learning. Overall, the major drawback of these works based on self-supervised learning is that they do not clarify how fine-tuning datasets can be obtained in realistic settings. Finally, the work in [29] considered BERT to extract from smart home sensor data a latent representation of the action units performed by the resident (e.g., a movement in the kitchen) that are then clustered. Hence, frequently occurring patterns of action units are found through a motif discovery process. The most frequent motifs are finally labeled by the resident thanks to active learning. However, NLP-based techniques like BERT and motif discovery work well for discrete environmental sensors, but they cannot be directly applied to the raw inertial measurements collected by mobile and wearable devices. ## III The _SelfAct_ framework In this section, we present _SelfAct_: our framework that generates a personalized activity classifier by combining self-supervised learning and unsupervised active learning. As depicted in Figure 1, in the first phase (called _self-supervised phase_), _SelfAct_ requires huge amounts of unlabeled inertial sensor data automatically and collaboratively collected from the mobile/wearable devices of multiple users (e.g., volunteers). This unlabeled dataset is used to train, in a self-supervised fashion, a deep neural network capable of extracting effective latent representations (i.e., embeddings) from sensor data. This process is described in detail in Section III-A. In a second phase (called _accumulation phase_), new users start using _SelfAct_ by downloading and deploying the pre-trained self-supervised model on their devices. Each user locally accumulates unlabeled inertial sensor data for a certain period. At the end of the _accumulation phase_, such unlabeled data are transformed into embeddings with the pre-trained model and then used to train a personal dimensionality reduction model. Finally, the reduced embeddings are clustered. Intuitively, each cluster would likely represent a personal sensor data pattern. More details about this _accumulation phase_ are presented in Section III-B. After the _accumulation phase_, _SelfAct_ starts the _active learning phase_ (presented in detail in Section III-C). 
This phase analyzes, in real-time, each new sensor data sample in the stream to decide whether to ask the user for feedback about the corresponding label (i.e., the activity that the user is actually performing). Intuitively, _SelfAct_ aims at asking for a label only whenever a sample is recognized as representative of a cluster. Thanks to active learning, _SelfAct_ automatically collects a small amount of labeled data specific to the subject, and this data is used during the _fine-tuning phase_ (presented in Section III-D) to obtain a personalized activity classifier. The _SelfAct_ algorithm running locally on clients is summarized in Algorithm 1. In the following, we will describe each phase of _SelfAct_ in detail.

**Algorithm 1** Client algorithm of _SelfAct_

**Input:** the _Feature Extraction Layers_ \(fe\) of the pre-trained self-supervised model, the accumulation threshold \(ACC\_TH\)

```
 1: fth ← randomly initialized layers of the Fine-Tuning Head
 2: dr ← NIL
 3: samplesNumber ← 0
 4: storage ← ∅
 5: reducedStorage ← ∅
 6: labeledSamples ← ∅
 7: clusters ← ∅
 8: for s data sample in the stream do
 9:   samplesNumber ← samplesNumber + 1
10:   emb_s ← obtain embeddings using fe
11:   storage ← storage ∪ {emb_s}
12:   if samplesNumber == ACC_TH then
13:     dr ← data reduction model trained using storage
14:     reducedStorage ← embeddings in storage reduced using dr
15:     clusters ← apply clustering algorithm on reducedStorage
16:   else
17:     if samplesNumber > ACC_TH then
18:       emb'_s ← dimensionally reduced emb_s using the dr model
19:       matchingClust ← closest cluster to emb'_s in clusters
20:       if ActiveLearningNeeded(emb'_s, matchingClust) then
21:         l ← feedback from user on the activity associated with s
22:         labeledSamples ← labeledSamples ∪ {(s, l)}
23:       end if
24:       matchingClust ← matchingClust ∪ {emb'_s}
25:       if fine-tuning conditions are verified then
26:         adjust fth to cover all the classes in labeledSamples
27:         fine-tune fe + fth using labeledSamples
28:         fe + fth can be used to classify unlabeled samples
29:       end if
30:     end if
31:   end if
32: end for
```

**Algorithm 2** The _SelfAct_ strategy to decide whether an active learning query is needed (_ActiveLearningNeeded_)

### _Self-supervised phase_

The goal of the _self-supervised phase_ is to train, in a self-supervised fashion, a deep learning model capable of extracting a meaningful and efficient latent representation (i.e., embeddings) from sensor data. In this phase, _SelfAct_ relies on unlabeled raw sensor data that can be collaboratively and automatically collected from the mobile/wearable devices of multiple users (e.g., volunteers). For the sake of this work, we do not focus on how such data are actually collected. For instance, such users can be volunteers involved in a large data collection campaign. Alternatively, unlabeled data could be collected by combining existing publicly available datasets for HAR. _SelfAct_ pre-processes and segments the available unlabeled data into fixed-length segmentation windows that are used to train the self-supervised model. During the _self-supervised phase_, this model is composed of two parts: i) _Feature Extraction Layers_ that learn how to extract embeddings from sensor data, and ii) a _Self-Supervised Head_, necessary to perform the self-supervised training process. _SelfAct_ does not impose a specific self-supervised approach. As we will discuss in Section IV, in our experiments we implemented _SelfAct_ with a version of the SimCLR [30] approach specifically adapted for the HAR domain [25]. The output of the _self-supervised phase_ is a pre-trained deep-learning model composed only of the _Feature Extraction Layers_, which is able to extract embeddings from unlabeled sensor data. This model can be locally deployed on the mobile devices of the new users that start using _SelfAct_.

Fig. 1: The overall architecture of _SelfAct_

### _Accumulation phase_

The _accumulation phase_ starts when a new user downloads and locally deploys on her device the model pre-trained in the _self-supervised phase_ (i.e., the _Feature Extraction Layers_). The goal of the _accumulation phase_ is to accumulate from the user sufficient amounts of unlabeled data to reliably discover personal activity patterns through clustering. As depicted in Figure 2, the user's mobile device starts collecting the stream of unlabeled inertial sensor data that is segmented into fixed-length windows until an arbitrary number of \(ACC\_TH\) (accumulation threshold) samples is acquired.
This parameter should be tuned based on the specific application, and, in real-world scenarios, its value should be high in order to collect unlabeled data over a sufficiently long time period (e.g., a day, or a week). Once unlabeled data accumulation is completed, each sample is provided to the pre-trained self-supervised model to obtain embeddings. Then, to efficiently perform clustering to find activity patterns, we leverage all the generated embeddings to train a personalized dimensionality reduction model. Indeed, recent works demonstrated the impact of dimensionality reduction in improving the performance of clustering techniques [31]. Finally, the dimensionally reduced accumulated embeddings are provided to a clustering algorithm that does not make any prior assumption on the number of clusters (e.g., DBSCAN). The goal is to group in the same cluster those embeddings that have been generated while the user was performing the same activity pattern. We expect our method to obtain more than one cluster for each activity, since users may perform the same activity in different ways (e.g., walking indoors and walking outdoors).

Fig. 2: The _accumulation phase_ of _SelfAct_

### _Active learning phase_

The goal of the _active learning phase_ is to select, in real-time, data samples that should be labeled by the user. These labeled samples are then used by _SelfAct_ to fine-tune a personalized activity classifier. As shown in Figure 3, in the _active learning phase_, each new unlabeled sample collected by the user's mobile device is provided to the pre-trained model, and the result is dimensionally reduced through the personalized dimensionality reduction model. Each new reduced embedding is then mapped to the closest cluster among the ones derived during the _accumulation phase_ (e.g., the cluster with the closest centroid). The active learning strategy of _SelfAct_ asks for a label only for those samples that can be considered as highly representative of the closest cluster (i.e., of each activity pattern), in order to minimize the number of queries that are prompted to the user. More specifically, an active learning process is started by evaluating whether a new reduced embedding sample \(s\) improves the density of its cluster or not. _SelfAct_ assigns to each cluster \(c\) a threshold \(T_{c}\), which reflects the density level of the cluster. This threshold is automatically computed as the average distance between each pair of points within the cluster. To determine whether a new sample \(s\) is highly representative of cluster \(c\), we evaluate the average distance between each pair of points in \(c\) after adding \(s\) to the cluster. If this average distance is lower than \(T_{c}\), then \(s\) is considered highly representative of \(c\) and an active learning query is requested to the user. Intuitively, this happens when \(s\) is a sample that increases the density of \(c\). Algorithm 2 shows the pseudo-code of the strategy adopted by _SelfAct_ to decide if an active learning query is needed. Regardless of whether the active learning query is triggered or not, \(s\) is added to \(c\), and \(T_{c}\) is updated accordingly.

Fig. 3: The _active learning phase_ of _SelfAct_

The active learning strategy of _SelfAct_ assumes that the clusters found during the _accumulation phase_ are reliable. For this reason, as we will describe in Section IV, we empirically determined \(ACC\_TH\) by balancing the trade-off between the recognition rate and the number of triggered queries.
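To make the density criterion described above concrete, the following is a minimal sketch in plain Python with NumPy; the function and variable names are ours (illustrative, not taken from the _SelfAct_ implementation). A query is triggered only if adding the candidate embedding lowers the average pairwise distance of its closest cluster below the current threshold \(T_{c}\).

```python
import numpy as np

def avg_pairwise_distance(points):
    """Mean Euclidean distance over all pairs of points in a cluster."""
    n = len(points)
    if n < 2:
        return 0.0
    dists = [np.linalg.norm(points[i] - points[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def active_learning_needed(candidate, cluster_points, t_c):
    """True if including the candidate makes the cluster denser, i.e., the
    average pairwise distance with the candidate added drops below T_c."""
    extended = cluster_points + [candidate]
    return avg_pairwise_distance(extended) < t_c

# Example: reduced embeddings of one cluster and a new candidate sample.
cluster = [np.array([0.10, 0.20]), np.array([0.15, 0.22]), np.array([0.12, 0.18])]
t_c = avg_pairwise_distance(cluster)      # current density threshold T_c
candidate = np.array([0.13, 0.20])        # close to the cluster -> query triggered
print(active_learning_needed(candidate, cluster, t_c))

# Regardless of the decision, the sample joins the cluster and T_c is updated.
cluster.append(candidate)
t_c = avg_pairwise_distance(cluster)
```

The quadratic pairwise computation is acceptable here because it only runs on the points of a single cluster; an incremental update of the distance sum would be a natural optimization on-device.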
### _Fine-tuning phase_ Finally, during the _fine-tuning phase_, _SelfAct_ relies on the data labeled through active learning at the previous phase to fine-tune the deep-learning model pre-trained during the _self-supervised phase_. In this way, _SelfAct_ builds an activity classifier personalized for each new user of the framework. Specifically, _SelfAct_ dynamically adds to the _Feature Extraction Layers_ of the deep-learning model pre-trained during the _self-supervised phase_ a _Fine-Tuning Head_ that includes classification layers. Since it is not possible to know in advance how many activities (and which) the user actually performed, the _Fine-Tuning Head_ is created dynamically based on the number of activity classes in the labeled data. During the fine-tuning process, the layers of both the _Feature Extraction Module_ and the _Fine-Tuning Head_ are unfrozen. The _fine-tuning phase_ should be triggered by considering specific conditions. For instance, it can be a periodic task (e.g., with a time periodicity or each time a certain amount of labels are collected). However, defining when fine-tuning should be started is out of the scope of this paper and it will be investigated in future work. ## IV Experimental evaluation This section describes the experiments we carried out to evaluate _SelfAct_. We first introduce the two publicly available datasets we considered for our experiments. Then, we discuss details about our experimental setup: the data pre-processing, the specific models we used to implement _SelfAct_, and the evaluation methodology we adopted. Finally, we present the results of our evaluation. ### _Datasets_ #### Iv-A1 Hhar The Heterogeneity Human Activity Recognition (HHAR) dataset [32] includes accelerometer and gyroscope data from \(9\) users wearing a smartphone (in a pouch carried on the waist) and a smartwatch. The dataset was collected considering \(6\) different activities: _biking_, _sitting_, _standing_, _walking_, _stairs up_, and _stairs down_. HHAR includes heterogeneous sampling rates, based on the specific devices worn by the user. In particular, the dataset includes \(2\) different smartwatch devices: LG Watch and Samsung Galaxy Gears with maximum sampling rates of \(200Hz\) and \(100Hz\), respectively. Moreover, HHAR contains \(4\) different smartphone models: Samsung Galaxy S3 mini (maximum sampling rate of \(100Hz\)), Samsung Galaxy S3 (\(150Hz\)), LG Nexus 4 (\(200Hz\)), and Samsung Galaxy S+ (\(50Hz\)). #### Iv-A2 MotionSense MotionSense [33] is a publicly available HAR dataset collected from \(24\) subjects carrying an iPhone 6s in their trousers' front pocket, and performing \(6\) different activities: _walking downstairs_, _walking upstairs_, _walking_, _jogging_, _sitting_, and _standing_. The dataset includes both accelerometer and gyroscope data sampled at \(50Hz\). ### _Experimental setup_ In the following, we describe our experimental setup. #### Iv-B1 Machine learning models As self-supervised learning model, in our experiments, we used a version of the _SimCLR_ technique [30] adapted for HAR in previous works [7, 25]. Consistently with the previous work, the _Feature Extraction Layers_ consist of three 1D convolutional layers with \(32\), \(64\), and \(96\) filters and \(24\), \(16\), and \(8\) kernel sizes, respectively, followed by a global maximum pooling layer. Each couple of consecutive layers is separated by a dropout layer with a dropout rate of \(0.1\). 
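A minimal sketch of this feature extractor in TensorFlow/Keras is shown below; it is our illustrative reconstruction of the layer stack just described, not the authors' code, and the ReLU activations, input length, and channel count are assumptions on our part.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_feature_extractor(window_size=400, channels=3):
    """Feature Extraction Layers: three 1D convolutions (32/64/96 filters,
    kernel sizes 24/16/8), a dropout layer (rate 0.1) between consecutive
    convolutions, followed by global max pooling, as described in the text.
    Activation choice (ReLU) is an assumption for illustration."""
    inputs = tf.keras.Input(shape=(window_size, channels))
    x = layers.Conv1D(32, 24, activation="relu")(inputs)
    x = layers.Dropout(0.1)(x)
    x = layers.Conv1D(64, 16, activation="relu")(x)
    x = layers.Dropout(0.1)(x)
    x = layers.Conv1D(96, 8, activation="relu")(x)
    embeddings = layers.GlobalMaxPooling1D()(x)
    return tf.keras.Model(inputs, embeddings, name="feature_extractor")
```
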
The _Self-Supervised Head_ consists of three fully connected layers with \(256\), \(128\), and \(50\) neurons, respectively. The self-supervised model is pre-trained for \(50\) epochs and a batch size of \(512\) with the SGD optimizer, considering a cosine decay for the learning rate. Moreover, we empirically determined the optimal transformations to apply to the unlabeled data during SimCLR pre-training, finding that, consistently with [25], the application of the rotation transformation only was the best option. The _Fine-Tuning Head_ consists of a fully connected layer containing \(1024\) neurons and a softmax layer for the final classification. Fine-tuning is performed by running \(50\) training epochs with a batch size of \(1\), using an early stopping technique that stopped the training process when validation loss did not improve for \(5\) consecutive epochs, and adopting the Adam optimizer. As a dimensionality reduction model, we adopted the Uniform Manifold Approximation and Projection (UMAP) technique [34] since previous work demonstrated how its use is effective in considerably improving the performance of clustering algorithms [31]. Moreover, since _SelfAct_ considers a real-time scenario for active learning, it is crucial to train a dimensionality reduction model only at the end of the accumulation phase, and then to map new data samples in the reduced space. These real-time constraints can be addressed by using UMAP, while t-SNE [35], for example, would not be appropriate. During our evaluation, we experimentally determined the best hyperparameters for UMAP: \(15\) as the number of neighbors, and an euclidean minimum distance equal to \(0.4\). Realistic and real-time constraints mainly guided the choice of the clustering algorithm of _SelfAct_. First of all, it is not realistic to assume knowing the exact number of different activity patterns that the user will perform while using _SelfAct_. Thus, it would not be appropriate to consider clustering algorithms that require knowing the number of clusters in advance (e.g., K-means). Moreover, in real-time scenarios, we need to rely on an algorithm that (i) clusters the reduced embeddings accumulated during the _accumulation phase_, and then (ii) directly maps new data into the existing clusters during the _active learning phase_. For these reasons, in our experiments, we relied on the HDBSCAN clustering algorithm [36]. In particular, we empirically determined its hyperparameters: a minimum cluster size of \(6\), and a minimum number of samples of \(10\). #### Iv-B2 Pre-processing Since the version of SimCLR we are using only considers data from a single tri-axial sensor as input, we focus on the accelerometer data collected through the users' smartphones. Such data are segmented into fixed-length windows with a \(50\%\) of overlap. In particular, following the evaluation protocol proposed in [25], we considered windows of \(400\) samples for MotionSense. On the other hand, since HHAR includes data collected with different sampling rates, we experimentally determined the size of the windows, obtaining a window size of \(800\) samples. Due to unsatisfying recognition rates during preliminary experiments, in MotionSense we grouped the labels _walking downstairs_ and _walking upstairs_ into the label _walking stairs_, while in HHAR we grouped _stairs down_ and _stairs up_ into _stairs_. Indeed, these activities are particularly challenging to discriminate. 
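As an illustration of the segmentation step above, the following sketch (plain NumPy; the function name and example shapes are ours) splits a continuous tri-axial accelerometer stream into fixed-length windows with 50% overlap, e.g., 400 samples for MotionSense or 800 samples for HHAR.

```python
import numpy as np

def segment_stream(signal, window_size, overlap=0.5):
    """Split a (T, channels) sensor stream into fixed-length windows with the
    given fractional overlap (0.5 = 50%, as used in the experiments)."""
    step = int(window_size * (1 - overlap))
    windows = [signal[start:start + window_size]
               for start in range(0, len(signal) - window_size + 1, step)]
    if not windows:
        return np.empty((0, window_size, signal.shape[1]))
    return np.stack(windows)

# Example: 10 seconds of 50 Hz tri-axial accelerometer data (MotionSense-like).
stream = np.random.randn(500, 3)
windows = segment_stream(stream, window_size=400)   # -> shape (1, 400, 3)
print(windows.shape)
```

For HHAR, the same call with `window_size=800` would reproduce the larger windows used to compensate for the heterogeneous sampling rates.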
#### Iv-B3 Baseline In our evaluation, we compared _SelfAct_ with a _fully supervised_ approach: a deep neural network with the same architecture of the network used by _SelfAct_ during the _fine-tuning phase_, trained in a supervised way considering full availability of labeled training data. #### Iv-B4 Evaluation Methodology The _SelfAct_ framework is designed for personalizing the HAR model for each user by leveraging a limited amount of labeled data. Hence, we performed our experiments considering the leave-one-subject-out cross-validation technique. At each iteration of the cross-validation process, data from one user is used as the test set (i.e., accumulation, active learning, and fine-tuning phases), while the remaining folds are used as the training set (i,e., self-supervised phase). At the end of the process, we averaged the results obtained at each iteration in terms of weighted F1 score. Considering the _fully supervised_ baseline, at each fold, \(10\%\) of the training set was used as the validation set. On the other hand, in order to simulate the _accumulation_ and _active learning_ phases of _SelfAct_, a certain percentage (i.e. _accumulation threshold_) of the test set was used to simulate the samples that are accumulated by _SelfAct_ during the _accumulation phase_. The remaining samples of the test set were used to simulate the data stream processed during the _active learning phase1_. Footnote 1: This setting resembles a scenario in which practitioners must allocate both _accumulation_ and the _active learning_ phases within a given timeframe (e.g., one month) that is constrained by deployment or project requirements. During fine-tuning, we used \(10\%\) of labeled data as the validation set. Finally, all the samples in the test set that were not labeled (i.e., the ones collected in the _accumulation phase_ and the samples not labeled during the _active learning phase_) were used to assess the recognition rate of the fine-tuned model. We also compute the _active learning rate_, i.e., the ratio between the amount of triggered active learning questions and the total number of samples available during the _active learning phase_. While it was straightforward to apply the leave-one-out cross-validation on HHAR, it was more challenging on MotionSense. Indeed, users in this dataset do not have enough data samples to obtain meaningful results from the _accumulation_ and _active learning_ phases. Hence, we created groups of \(4\) or \(5\) users performing activities in a similar way (based on available meta-data like age, height, weight, etcetera) and we considered each group as a single user. This resulted in \(5\) pseudo-users that we used to perform the leave-one-out cross-validation. ### _Results_ In the following, we describe the results we obtained during our experimental evaluation. #### Iv-C1 Recognition rate Table I compares _SelfAct_ and the _fully supervised_ baseline in terms of F1 score and the average number of necessary labeled data to deploy such solutions on both datasets. Such results were obtained by accumulating during the _accumulation phase_ a number of samples that corresponds to \(75\%\) of the data available in the test set. Considering HHAR, _SelfAct_ dramatically outperforms _fully supervised_ in terms of F1 score (i.e., \(\approx+10\%\)). 
Moreover, this result is obtained considering only \(369\) labeled samples per user, while the _fully supervised_ baseline, at each iteration of the cross-validation, relied on average on \(\approx 25k\) labeled samples. On the other hand, considering MotionSense, our framework did not improve the recognition rates of _fully supervised_. This happens since the _active learning phase_ of _SelfAct_ is actually computed on the data of more users (see Section IV-B4). Hence, with a few fine-tuning samples labeled with active learning, it is hard to cover all the possible ways in which \begin{table} \begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Average number of} & \multirow{2}{*}{_SelfAct_ F1} & \multirow{2}{*}{\begin{tabular}{c} Average supervised \\ training set size \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} _Fully supervised_ \\ F1 \\ \end{tabular} } \\ \cline{1-1} & & & & \\ \hline HHAR & \begin{tabular}{c} 369 \\ 143 \\ \end{tabular} & \begin{tabular}{c} 0.9734 \\ 0.9291 \\ \end{tabular} & \begin{tabular}{c} 25053 \\ 5304 \\ \end{tabular} & \begin{tabular}{c} 0.8792 \\ 0.9483 \\ \end{tabular} \\ \hline \end{tabular} \end{table} TABLE I: Main results of _SelfAct_. The _accumulation phase_ was performed on 75% of the test set. The table reports the average number of active learning queries prompted to the user by _SelfAct_ (i.e., number of samples for fine-tuning), and the average number of labeled samples in the training set used to train _fully supervised_ an activity can be performed by different users. Nonetheless, the F1 score reached by _SelfAct_ is only \(\approx 2\%\) behind _fully supervised_, but relying only on \(143\) labeled samples per group of users included in the test set, instead of considering on average more than \(5k\) labeled training samples. Hence, we believe that these results still highlight the effectiveness of _SelfAct_, which reaches recognition rates close to supervised approaches with a limited amount of labeled samples. #### Iv-B2 Impact of accumulation threshold Figure 4 depicts how the _accumulation threshold_ affects the performance of _SelfAct_. By setting the _accumulation threshold_ at \(50\%\) (i.e., \(50\%\) of data used for accumulation and \(50\%\) of data in the active learning phase) we get a high recognition rate, at the cost of having the highest active learning rate. Indeed, with this threshold, _SelfAct_ finds a lower number of sparse clusters, thus requiring a higher number of samples during the active learning phase to increase their densities. This also implies a higher number of active learning queries. On the other hand, when the accumulation threshold is \(95\%\) (i.e., \(95\%\) of data used for accumulation and \(5\%\) of data in the active learning phase) the recognition rate is the lowest. This is due to the fact that this setting does not provide a sufficient amount of data in the active learning phase for properly fine-tuning the model. Hence, we observed that setting the threshold to \(75\%\) (i.e., \(75\%\) of data used for accumulation and \(25\%\) of data in the active learning phase) was the best trade-off between the recognition rate (that is similar to the one obtained with the threshold set to \(50\%\)) with a significantly reduced active learning rate. Indeed, this setting allows _SelfAct_ to identify clusters that better reflect activity patterns and that can be effectively used for the active learning phase. 
In general, our results show that increasing the accumulation threshold decreases the active learning rate. This is due to the fact that our unsupervised learning strategy is based on cluster density. Indeed, the higher the amount of accumulated data and the denser the clusters. ## V Conclusion and Future Work In this paper, we presented _SelfAct_: a framework for personalized sensor-based HAR. Thanks to a novel combination of self-supervised learning and active learning, _SelfAct_ requires only a limited amount of labeled data from target users to fine-tune a model pre-trained with unlabeled data only. While the results obtained so far are promising, we plan to investigate several aspects in future work. First, even though _SelfAct_ does not assume knowing in advance the activities performed by the target user, in this work the pre-trained feature extractor and fine-tuning are trained considering the same activity classes. In future work, we want to truly assess the capability of _SelfAct_ for activity discovery and domain adaptation. A challenge will be to distinguish outliers from new activities. A drawback of _SelfAct_ is that it relies on a fixed threshold to decide when switching from _accumulation phase_ to _active learning phase_. In future work, we will investigate automatic methods based on the silhouette score of the clusters. Another limitation of _SelfAct_ is that, while it requires limited labeled data samples to obtain high recognition rates, the active learning query rate is relatively high. This aspect has a negative impact on usability. We will investigate more sophisticated active learning strategies aiming at further minimizing the number of required samples. For instance, an interesting strategy is to establish a budget of queries for each cluster. Nonetheless, these results are also probably due to the fact that the considered datasets are small, and we could not evaluate _SelfAct_ with large amounts of data for the accumulation and active learning phases. We expect that, in real-world scenarios where it is possible to collect a large amount of unlabeled data, the active learning rate would be significantly lower. _SelfAct_ also assumes that it is possible to store and process locally unlabeled and labeled datasets. However, in real-world scenarios, unlabeled data may be collected for a significantly long time. For instance, the accumulation phase may last several days. We will investigate solutions based on trusted edge devices with storage and computational capabilities. Fig. 4: The impact of _accumulation threshold_ on recognition rate, number of clusters, and active learning rate Users' devices should be only in charge of running the fine-tuned classifier. ## Acknowledgment This work was partially supported by the project "MUSA - Multilayered Urban Sustainability Action", NextGeneration EU, MUR PNRR.
2310.12266
Spectral theory of $p$-adic unitary operator
The $p$-adic unitary operator $U$ is defined as an invertible operator on a $p$-adic ultrametric Banach space such that $\left |U\right |=\left |U^{-1}\right |=1$. We point out that $U$ has a spectral measure valued in $\textbf{projection functors}$, which can be explained as a measure theory on the formal group scheme. The spectral decomposition of $U$ is complete when $\psi$ is a $p$-adic wave function. We study $\textbf{the Galois theory of operators}$. The abelian extension theory of $\mathbb{Q}_p$ is connected to the topological properties of the $p$-adic unitary operator. We classify $p$-adic unitary operators into three types: $\textbf{Teichm\"uller type}, \textbf{continuous type}, \textbf{pro-finite type}$. Finally, we establish a $\textbf{framework of $p$-adic quantum mechanics}$, where the projection functor plays the role of quantum measurement.
Zhao Tianhong
2023-10-18T19:04:06Z
http://arxiv.org/abs/2310.12266v2
# Spectral theory of \(p\)-adic unitary operator ###### Abstract The \(p\)-adic unitary operator \(U\) is defined as an invertible operator on \(p\)-adic ultrametric Banach space such that \(|U|=\left|U^{-1}\right|=1\). We point out \(U\) has a spectral measure valued in **projection functors**, which can be explained as the measure theory on the formal group scheme. The spectrum decomposition of \(U\) is complete when \(\psi\) is a \(p\)-adic wave function. We study **the Galois theory of operators**. The abelian extension theory of \(\mathbb{Q}_{p}\) is connected to the topological properties of the \(p\)-adic unitary operator. We classify the \(p\)-adic unitary operator as three types: **Teichmuller type**, **continuous type**, **pro-finite type**. Finally, we establish a **framework of \(p\)-adic quantum mechanics**, where projection functor plays a role of quantum measurement. ## 1 Introduction This paper aims to establish the spectral theory of the \(p\)-adic unitary operator and construct a framework for \(p\)-adic quantum mechanics. We apply the number theory on functional analysis to connect several areas. The physical motivation is to find the invariant structure in quantum mechanics by changing the base field. The previous work [9] of \(p\)-adic Hermite operator gives a table which compares the concept in \(p\)-adic functional analysis with the usual functional analysis over \(\mathbb{R},\mathbb{C}\). \begin{tabular}{|c|c|c|} \hline & Archimedean & Non-Archimedean \\ \hline base field & \(\mathbb{R},\mathbb{C}\) & \(K\),\(K\) is a extension of \(\mathbb{Q}_{p}\) \\ \hline Banach space & Hilbert space & ultrametric Banach space \\ \hline norm & \(|x+y|^{2}+|x-y|^{2}=2(|x|^{2}+|y|^{2})\) & \(|x+y|\leq max(|x|,|y|)\) \\ \hline orthogonal projection & \(|x|^{2}=|\pi(x)|^{2}+|\pi^{\perp}(x)|^{2}\) & \(|x|=max(|\pi(x)|,|\pi^{\perp}(x)|)\) \\ \hline Banach algebra & \(C^{*}\)-Algebra & ultrametric Banach algebra \\ \hline Galois action & Hermite conjugate \(\dagger\) & Frobenius map \(\sigma\) \\ \hline \end{tabular} The spectral theory, which involves the conventional Hermite operator and unitary operator, serves as a conceptual tool in quantum mechanics. We suggest a categorical point of view that substitutes the orthogonal projection with the projection functor. This implies that the language of quantum mechanics could extend beyond operators to include functors. The spectral measure of operators comes from measure theory on group scheme, which can be understood as a functorial measure theory. Moreover, the structures of base field has a deep influence on the properties of operators. This leads us to study the Galois theory of operators. The relationship between Galois groups and operators could be listed as follows: \begin{tabular}{|c|c|c|} \hline field & Galois group & operator \\ \hline \(\mathbb{R},\mathbb{C}\) & \(\operatorname{Gal}\left(\mathbb{C}\mid\mathbb{R}\right)\) & Hermite operator \\ \hline \(\mathbb{R},\mathbb{C}\) & \(\operatorname{Gal}\left(\mathbb{C}\mid\mathbb{R}\right)\) & unitary operator \\ \hline \(\mathbb{Q}_{p},\mathbb{Q}_{p}^{ur}\) & \(\operatorname{Gal}\left(\mathbb{Q}_{p}^{ur}\mid\mathbb{Q}_{p}\right)\) & \(p\)-adic Hermite operator \\ \hline \(\mathbb{Q}_{p},\mathbb{Q}_{p}^{ab}\) & \(\operatorname{Gal}\left(\mathbb{Q}_{p}^{ab}\mid\mathbb{Q}_{p}\right)\) & \(p\)-adic unitary operator \\ \hline \end{tabular} We refer to [6] for the basic concepts in \(p\)-adic analysis. We refer to [3][4][5] for the recently progress in \(p\)-adic functional analysis. 
The connection between analytic geometry and spectral theory is discussed by [1]. We refer to [7] for theory of formal scheme and formal group. We refer to [2][8] for previous work in \(p\)-adic quantum mechanics. Noted that the \(p\)-adic wave function take values in usual complex field \(\mathbb{C}\) in their papers. The wave function in this paper take values in \(\mathbb{C}_{p}\). ## 2 Definition _Notation._ \(\mathbb{C}_{p}\): \(p\)-adic complex field. \(\mathcal{O}_{p}\): Integral ring of \(\mathbb{C}_{p}\). \(\mathfrak{m}_{p}\): Maximum ideal of \(\mathcal{O}_{p}\). \(\mathbb{G}_{m}\): Formal group scheme over \(\mathcal{O}_{p}\). \(A\): strict ultrametric Banach algebra. \(\mathbb{R}_{+}\): The internal \([0,+\infty)\subset\mathbb{R}\) \(\varepsilon\): \(p\)-adic number in \(\mathfrak{m}_{p}\) which is used to do reduction. \(\underset{\varepsilon\in\mathfrak{m}_{p}^{+}}{\lim}A/\varepsilon A\): The completion of \(A\) in \(p\)-adic topology. \(\operatorname{Comp}A\): The set of open ideal of \(A\). \(\Pi_{I}\): The left projection functor with respect to open ideal \(I\). \(\pi_{I}\): The right projection functor with respect to open ideal \(I\). \(\mathbb{Q}_{p}^{ur}\): The maximum unramified extension of \(\mathbb{Q}_{p}\). \(\mathbb{Q}_{p}^{ab}\): The maximum abelian extension of \(\mathbb{Q}_{p}\). \(\operatorname{Gal}\left(\mathbb{C}_{p}\mid\mathbb{Q}_{p}\right)\): The Galois group of extension \(\mathbb{C}_{p}\mid\mathbb{Q}_{p}\) consists of the continuous \(\mathbb{Q}_{p}\)-isomorphisms. Let \(\mathbb{C}_{p}\) be the \(p\)-adic complex numbers, \(\left|-\right|_{p}\) be the \(p\)-adic norm over \(\mathbb{C}_{p}\), \(\mathcal{O}_{p}\) be the ring : \[\mathcal{O}_{p}=\left\{x\in\mathbb{C}_{p},\left|x\right|_{p}\leq 1\right\}.\] Let \(\mathbb{G}_{m}\) be the \(\mathcal{O}_{p}\)-Banach algebra: \[\mathbb{G}_{m}:=\left\{f(t)=\sum_{n=-\infty}^{\infty}a_{n}t^{n},a_{n}\in \mathcal{O}_{p},a_{n}\to 0,\ n\to\pm\infty\right\}.\] with the norm \(\left|f(t)\right|_{\mathbb{G}_{m}}=\sup_{n\in\mathbb{Z}}\left|a_{n}\right|_{p}\). Let \(X\) be a \(\mathbb{C}_{p}\)-ultrametric Banach space, \(\left|-\right|_{X}\) be the norm. Let \(M\) be the unit ball: \[M:=\left\{x\in X,\left|x\right|_{X}\leq 1\right\}.\] **Definition 2.1**.: Let \(U:X\to X\) be a bounded invertible \(\mathbb{C}_{p}\)-linear operator, we say \(U\) is a \(p\)-adic unitary operator if one of the following equivalent conditions holds: \[1.U(X_{M})=X_{M}.\] \[2.\left|Ux\right|_{M}=\left|x\right|_{M},\forall x\in X_{M}.\] \[3.\left|U\right|=\left|U^{-1}\right|=1,\text{where}\left|-\right| \text{ is operator norm}.\] It's equivalent to say that a \(p\)-adic unitary operator \(U\) gives \(M\) a \(\mathbb{G}_{m}\)-module structure, where the action of \(\mathbb{G}_{m}\) on \(M\) is given by: \[\mathbb{G}_{m}\times M \longrightarrow M\] \[\left(\sum_{n=-\infty}^{\infty}a_{n}t^{n},x\right) \longmapsto\sum_{n=-\infty}^{\infty}\left(a_{n}U^{n}\left(x \right)\right).\] So the spectral theory of \(p\)-adic unitary operator is dominated by \(\mathbb{G}_{m}\). In scheme-theoretic sense \(\mathbb{G}_{m}\) is a formal group scheme. Let \(R\) be an arbitary \(\mathcal{O}_{p}\)-ultrametric Banach algebra ( not necessary commutative ), \(\operatorname{Hom}(\mathbb{G}_{m},R)\) be the set of contraction morphism of \(\mathcal{O}_{p}\)-Banach algebra. 
We have: \[\operatorname{Hom}(\mathbb{G}_{m},R)\xleftarrow{1:1}U(R)=\left\{x\in R,\left| x\right|=\left|x^{-1}\right|=1\right\}\] where \(U(R)\) is group consisted of all \(p\)-adic unitary elements in \(R\). Following the Grothendieck's philosophy, \(\operatorname{Hom}(\mathbb{G}_{m},-)\) defines a functor: \[\mathcal{O}_{p}-\mathbf{Banach}\text{ }\text{ ## 3 Berkovich space and Gelfand representation Let \(A\) be a \(\mathbb{C}_{p}\)-ultrametric Banach algebra with a norm \(\left|-\right|_{A}\). **Definition 3.1**.: We say \(\left|-\right|:A\rightarrow\mathbb{R}_{+}\) is a multiplicative semi-norm, if \(\left|-\right|\) satisfy the following properties: 1. \(\left|cx\right|=\left|c\right|_{p}\left|x\right|,\forall c\in\mathbb{C}_{p},x \in A\); 2. \(\left|x+y\right|\leq\sup(\left|x\right|,\left|y\right|),\forall x,y\in A\); 3. \(\left|xy\right|=\left|x\right|\left|y\right|,\forall x,y\in A\). **Definition 3.2**.: The Berkovich space of \(A\) is the set of multiplicative semi-norm bounded by \(\left|-\right|_{A}\): \[\operatorname{Berk}A=\Big{\{}\left|-\right|:A\rightarrow\mathbb{R}_{+}\ \Big{|}\ \left|-\right|\ \text{is multiplicative semi-norm},\left|-\right|\leq\left|-\right|_{A}\Big{\}}\] **Lemma 3.1**.: _(Monotone Convergence Theorem) For_ \[\left|-\right|_{1}\leq\left|-\right|_{2}\leq\left|-\right|_{3}\leq\cdots,\ \Big{\{}\left|-\right|_{i}\Big{\}}_{i=1,2,3\ldots}\subseteq \operatorname{Berk}A\] _then we have:_ \[\sup_{i=1,2,3\ldots}(\left|-\right|_{i})\in\operatorname{Berk}A.\] From Zorn's lemma, there exists some maximum norms. Maximum norms of Berkovich space are correspond to **generic points** or **irreducible components** in scheme theory. We can define a canonical spectral semi-norm on \(A\). **Definition 3.3**.: Let \(x\in A\), define the **spectral semi-norm**\(\left|-\right|_{sp}\): \[\left|x\right|_{sp}:=\sup_{\left|-\right|\in\operatorname{Berk}A}\left|x\right|\] Let \(A_{sp}\) be the completion of \(A\) over the spectral semi-norm \(\left|-\right|_{sp}\), the morphism: \(\Gamma:A\to A_{sp}\) is a generalized Gelfand representation map. However, it is not an isometry in general cases. In general, we have: \[\left|x\right|_{sp}\leq\left|x\right|_{A},\forall x\in A.\] Let \(A\) be a commutative \(C^{*}\)-Algebra with unit, We say \(\left|-\right|:A\rightarrow\mathbb{R}_{+}\) is a multiplicative semi-norm, if \(\left|-\right|\) satisfy the following properties: 1. \(\left|cx\right|=\left|c\right|\left|x\right|,\forall c\in\mathbb{C},x\in A\); 2. \(\left|x+y\right|\leq\left|x\right|+\left|y\right|,\forall x,y\in A\); 3. \(\left|xy\right|=\left|x\right|\left|y\right|,\forall x,y\in A\). the Berkovich space of \(A\) is defined as: \[\operatorname{Berk}A=\left\{\,|-|:A\to\mathbb{R}_{+}\,\,\Big{|}\,\,|-|\,\,\, \text{is multiplicative semi-norm},|-|\leq|-|_{A}\right\}\] We have: \[\operatorname{Berk}A\qquad\stackrel{{ 1:1}}{{\longleftrightarrow}} \operatorname{Max}A\qquad\stackrel{{ 1:1}}{{\longleftrightarrow}} \operatorname{Hom}(A,\mathbb{C})\] \[|-|\qquad\stackrel{{ 1:1}}{{\longleftrightarrow}} \operatorname{ker}|-|\qquad\stackrel{{ 1:1}}{{\longleftrightarrow}} A/\operatorname{ker}|-|\] where \(\operatorname{Max}A\) is the set of all maximum ideal of \(\operatorname{A}\), \(\operatorname{Hom}(A,\mathbb{C})\) is the set of all \(\mathbb{C}\)-Banach algebra morphism1. We can define a Gelfand topology on \(\operatorname{Max}A\), which is the weakest topology such that all elements \(f\in A\) be a continuous function on \(\operatorname{Max}A\). 
Let \(C(\operatorname{Max}A)\) be the set of complex valued continuous function on \(\operatorname{Max}A\), the Gelfand representation gives a isometry isomorphism: \(A\stackrel{{\sim}}{{\to}}C(\operatorname{Max}A)\). We have: Footnote 1: Some papers call \(\operatorname{Hom}(A,\mathbb{C})\) characters. \[|x|_{sp}=\sup_{\lambda\in\operatorname{Max}A}\left(|x(\lambda)|\right)=|x|_{A} \,,\forall x\in A.\] **Definition 3.4**.: Let \(U\) be an invertible operator on a Hilbert space \(\mathcal{H}\) over \(\mathbb{C}\), we say \(U\) is a unitary operator if the following condition holds: \[U^{\dagger}=U^{-1}.\] Let \(U\) be a usual unitary operator act on a Hilbert space \(\mathcal{H}\) over \(\mathbb{C}\), \(\operatorname{End}\mathcal{H}\) be the set of bounded operator on \(\mathcal{H}\). Let \[p(U)=\left\{f(U)\in\operatorname{End}\mathcal{H},f(U)=\sum_{k\in\mathbb{Z}}c_{ k}U^{k},c_{k}\in\mathbb{C}\right\}\] be the set of polynomial of \(U\) consists of positive degree and negetive degree. \(p(U)\) has a norm structure from the operator norm defined on \(\operatorname{End}\mathcal{H}\). Let \(A\) be the completion of \(p(U)\). \(A\) is the \(C^{*}\)-algebra generated by the \(U\), the spectral theory of usual unitary operator \(U\) in Gelfand sense is exactly the spectral theory of \(A\),we have: \[A\simeq C(K).\] \(K=\operatorname{Max}A\) is the spectrum of \(U\), which is a compact subset of the unit cycle \(S^{1}\). We can generalize this statement as the following theorem: **Theorem 3.2**.: _(Spectral theory of usual unitary operator) The complex valued continous function space \(C(S^{1})\) is a group scheme in the category of unital non-commutative \(C^{*}\)-algebra. Let \(R\) be a \(C^{*}\)-algebra. Let_ \[U(R)=\left\{x\in R,x^{-1}=x^{\dagger}\right\}\] _be the set of unitary elements of \(R\), \(U(R)\) is a group. \(\mathrm{Hom}(C(S^{1}),-)\) defines a functor:_ \[C^{*}\textbf{-Algebra} \longrightarrow\textbf{Group}\] \[R \longmapsto U(R).\] _Let \(K_{1},K_{2}\) be the compact set in \(S^{1}\), then \(K_{1}\cup K_{2}\) is a compact set in \(S^{1}\), so the compact set in \(S^{1}\) defines a direct system, ordered by relation of inclusion. We have:_ \[1.S^{1}=\bigcup_{K\subset S^{1}}K=\varinjlim_{K\subset S^{1}}K,\] \[2.C(S^{1})=\varprojlim_{K\subset S^{1}}C(K),\] \[3.C\left(K_{1}\coprod K_{2}\right)=C(K_{1})\times C(K_{2}).\] _The spectral measure theory of unitary operator \(U\) can be viewed as a functorial measure theory._ The orthogonal projection can be generated from the representation theory of \(C(S^{1})\), which is a technical problem in functional analysis. Let \(L\) be a compact set in \(S^{1}\), which is a measurable set. There exists an orthogonal projection \(\pi_{L}\in\mathrm{End}\,\mathcal{H}\) such that: \[\pi_{L}U=U\pi_{L},\] where \(\pi_{L}U\) is a operator whose spectrum lies in \(L\). Using some technical method in functional analysis and representation theory2, we can define the spectral integral in classical theory of unitary operator: Footnote 2: Riesz representation theorem is needed. \[I=\int_{S^{1}}d\pi_{\lambda}\quad U=\int_{S^{1}}\lambda d\pi_{\lambda}\] In \(p\)-adic case, the Gelfand representation of the ultrametric Banach algebra may not be isometry. 
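As a quick numerical check of this failure (a Python sketch, not part of the original text, built around the same nilpotent matrix that appears in Example 3.1 below): for \(U=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\), the element \(N=U-I\) is nonzero of norm \(1\), but \(N^{2}=0\), so every multiplicative semi-norm sends \(N\) to \(0\) and the spectral semi-norm of \(N\) vanishes.

```python
# A toy numerical check (not from the paper) that the Gelfand map need not be
# an isometry in the p-adic setting: take U = [[1,1],[0,1]] in GL_2(Z_p) and
# N = U - I, with p-adic integers truncated modulo p**K.
from fractions import Fraction

p, K = 5, 10                  # prime and truncation precision (demo assumptions)
MOD = p ** K

def vp(n: int) -> int:
    """p-adic valuation of an integer mod p**K (returns K for 0)."""
    n %= MOD
    if n == 0:
        return K
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def norm(mat):
    """Sup-norm: the maximum of p^(-v_p(entry)) over the entries."""
    return max(Fraction(1, p ** vp(x)) for row in mat for x in row)

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % MOD for j in range(2)]
            for i in range(2)]

N = [[0, 1], [0, 0]]          # N = U - I for U = [[1, 1], [0, 1]]

print("|N|_A =", norm(N))      # 1: N is not small in the Banach norm
print("N^2   =", mul(N, N))    # the zero matrix: N is nilpotent
# Any multiplicative semi-norm satisfies |N|^2 = |N^2| = 0, hence |N| = 0, so
# the spectral semi-norm of N is 0 < 1 = |N|_A and the map A -> A_sp kills N.
```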
**Example 3.1**.: _Let \(U=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\in\mathrm{GL}_{2}(\mathcal{O}_{p})\), the \(\mathbb{C}_{p}\)-ultrametric Banach algebra generated by \(U\) is isomorphic to \(A=\mathbb{C}_{p}\left[X\right]/(X-1)^{2}\), the Gelfand representation of \(A\) is isomorphic to \(A_{sp}=\mathbb{C}_{p}\left[X\right]/(X-1)\simeq\mathbb{C}_{p}\)._ ## 4 Projection functor NotationThe symbol \(\mathfrak{m}_{p}^{+}\) is defined as the set: \[\mathfrak{m}_{p}^{+}=\left(\mathfrak{m}_{p}\cup\left\{1^{-}\right\}\right)- \left\{0\right\}.\] We always use \(\varepsilon\in\mathfrak{m}_{p}^{+}\) to do reduction, where \(\varepsilon=1^{-}\) means the reduction over \(\overline{\mathbb{F}_{p}}\). Moreover, every ring has an identity. In algebraic geometry, localization is a powerful way to study modules. Let \(A\) be a commutative ring with unit, \(\mathfrak{p}\) be a prime ideal, \(A_{\mathfrak{p}}\) be the localization at \(\mathfrak{p}\). \(M\) be an \(A\)-module, we have: \[M=0\iff A_{\mathfrak{p}}\underset{A}{\otimes}M=M_{\mathfrak{p}}=0,\forall \mathfrak{p}\in\operatorname{Spec}A.\] One can define the localization functor: \(A_{\mathfrak{p}}\underset{A}{\otimes}(-)\). \(A_{\mathfrak{p}}\underset{A}{\otimes}(-)\) is an exact functor from category \(A-mod\) to \(A-mod\). When \(A\) is a ultrametric commutative Banach algebra, we cannot define the well-behaved localization functor for \(A\)-Banach modules. However, it is possible to define the **projection functor**. **Definition 4.1**.: Let \(A\) be a \(\mathcal{O}_{p}\)-algebra with respect to a norm \(\left|-\right|_{A}:A\rightarrow\mathbb{R}_{+}\), we say \(A\) is a **ultrametric \(\mathcal{O}_{p}\)-Banach algebra**, if the following conditions hold: \[1.\left|cx\right|_{A}\leq\left|c\right|_{p}\left|x\right|_{A}, \forall c\in\mathcal{O}_{p},x\in A;\] \[2.\left|xy\right|_{A}\leq\left|x\right|_{A}\left|y\right|_{A}, \forall x,y\in A;\] \[3.\left|x+y\right|_{A}\leq\sup(\left|x\right|_{A},\left|y\right| _{A}),\forall x,y\in A;\] \[4.\left(A,\left|-\right|_{A}\right)\text{is complete.}\] We say \(A\) is a **bounded ultrametric \(\mathcal{O}_{p}\)-Banach algebra** if the following condition hold: \[\forall x\in A,\left|x\right|_{A}\leq 1.\] **Definition 4.2**.: Let \(A\) be a bounded ultrametric \(\mathcal{O}_{p}\)-Banach algebra. Suppose \(\forall\varepsilon\in\mathfrak{m}_{p},\ \varepsilon A\) is a open-closed ideal in \(A\). Let's define: \[I_{\varepsilon}=\begin{cases}\varepsilon A&\varepsilon\in\mathfrak{m}_{p}\\ \bigcup_{\varepsilon\in\mathfrak{m}_{p}}\varepsilon A&\varepsilon=1^{-}.\end{cases}\] We have: \[A=\varprojlim_{\varepsilon\in\mathfrak{m}_{p}^{+}}A/I_{\varepsilon}:=\varprojlim _{\varepsilon\in\mathfrak{m}_{p}^{+}}A/\varepsilon A.\] We say \(A\) is a **strict ultrametric \(\mathcal{O}_{p}\)-Banach algebra** if the condition above holds. **Example 4.1**.: \(\mathbb{G}_{m}\) _is a strict ultrametric \(\mathcal{O}_{p}\)-Banach algebra._ **Example 4.2**.: _Let \(A\) be a ultrametric \(\mathbb{C}_{p}\)-Banach algebra,_ \[\mathcal{O}_{A}:=\{x\in A,\left|x\right|_{A}\leq 1\}\] _is a strict ultrametric \(\mathcal{O}_{p}\)-Banach algebra. 
\(\forall\varepsilon\in\mathfrak{m}_{p}^{+},\mathcal{O}_{A}/\varepsilon\mathcal{ O}_{A}\) is a strict ultrametric \(\mathcal{O}_{p}\)-Banach algebra._ **Definition 4.3**.: Let \(M\) be a \(\mathcal{O}_{p}\)-module with respect to a norm \(\left|-\right|_{M}:M\rightarrow\mathbb{R}_{+}\), we say \(M\) is a **ultrametric \(\mathcal{O}_{p}\)-Banach module**, if the following conditions hold: \[1.\left|cx\right|_{M}\leq\left|c\right|_{p}\left|x\right|_{M}, \forall c\in\mathcal{O}_{p},x\in M;\] \[2.\left|x+y\right|_{M}\leq\sup(\left|x\right|_{M},\left|y\right| _{M}),\forall x,y\in M;\] \[3.\left(M,\left|-\right|_{M}\right)\text{is complete.}\] We say \(M\) is a **bounded ultrametric \(\mathcal{O}_{p}\)-Banach module** if the following condition hold: \[\forall x\in M,\left|x\right|_{M}\leq 1.\] **Definition 4.4**.: Let \(M\) be a bounded ultrametric \(\mathcal{O}_{p}\)-Banach module. Suppose \(\forall\varepsilon\in\mathfrak{m}_{p},\ \varepsilon M\) is a open-closed submodule in \(A\). Let's define: \[\varepsilon M=\begin{cases}\varepsilon M&\varepsilon\in\mathfrak{m}_{p}\\ \bigcup_{\varepsilon\in\mathfrak{m}_{p}}\varepsilon M&\varepsilon=1^{-}. \end{cases}\] We have: \[M=\underset{\varepsilon\in\mathfrak{m}_{p}^{+}}{\lim}M/\varepsilon M\] We say \(M\) is a **strict ultrametric \(\mathcal{O}_{p}\)-Banach module** if the condition above holds. **Definition 4.5**.: Let \((A,\left|-\right|_{A})\) be a ultrametric \(\mathcal{O}_{p}\)-Banach algebra, \((M,\left|-\right|_{M})\) be a ultrametric \(\mathcal{O}_{p}\)-Banach module, we say \(M\) is a ultrametric \(A\)-Banach module, if the following condition holds: \[1.M\text{ is an }A-mod;\] \[2.\left|am\right|_{M}\leq\left|a\right|_{A}\left|m\right|_{M}, \forall a\in A,m\in M;\] The morphism between bounded ultrametric \(A\)-Banach module is defined by the contraction morphism of \(A\)-mod: \[\text{Hom}_{A\text{ }-\textbf{Banach}}(M,N)=\left\{\phi\in\text{Hom}_{A- \text{mod}}(M,N),\left|\phi(x)\right|_{N}\leq\left|x\right|_{M}\right\}.\] The morphism between bounded ultrametric \(\mathcal{O}_{p}\)-Banach algebra is defined by the contraction morphism of \(\mathcal{O}_{p}\)-algebra: \[\text{Hom}_{\mathcal{O}_{p}}-\textbf{Banach Alg}(A,R)=\left\{\phi\in\text{ Hom}_{\mathcal{O}_{p}\text{-algebra}}(A,R),\left|\phi(x)\right|_{R}\leq\left|x\right|_{A} \right\}.\] Let \(A-\textbf{Banach}\) donate the category of bounded ultrametric \(A\)-Banach module and contraction morphism. Let \(\mathcal{O}_{p}-\textbf{Banach Alg}\) donate the category of bounded ultrametric \(\mathcal{O}_{p}\)-Banach algebra and contraction morphism. Let \(A-\textbf{Banach}_{S}\) donate the category of strict ultrametric \(A\)-Banach module and contraction morphism. Let \(\mathcal{O}_{p}-\textbf{Banach Alg}_{S}\) donate the category of strict ultrametric \(\mathcal{O}_{p}\)-Banach algebra and contraction morphism. _Remark 4.1_.: The closed \(A\)-submodule \(N\) of bounded ultrametric \(A\)-Banach module \(M\) is bounded ultrametric \(A\)-Banach module with respect to the norm: \[\left|x\right|_{N}=\left|x\right|_{M}\] _Remark 4.2_.: Suppose \(N\) is a open-closed submodule of strict ultrametric \(A\)-Banach module \(M\), the quotient module \(M/N\) is strict ultrametric \(A\)-Banach module with respect to the quotient norm: \[\left|x\right|_{M/N}=\sup_{y\in N}(\left|x+y\right|_{M})\] **Definition 4.6**.: Let \(A\) be a ultrametric \(\mathcal{O}_{p}\)-Banach algebra, \(A\) has a \(p\)-adic topology given by \(\left|-\right|_{A}\). Let \(I\) be a ideal of \(A\). 
We say \(I\) is **open ideal** if \(I\) is open set. _Remark 4.3_.: \(I\) is open ideal if and only if: \[\exists\varepsilon\in\mathcal{O}_{p},\varepsilon\neq 0,w(\varepsilon)\in I,\] where \(w\) is the unit morphism: \[w:\mathcal{O}_{p}\to A\] Let \(I\leq A\) be a open ideal, it can be showed that \(I\) is closed in \(A\). Let's define the projection functor. **Definition 4.7**.: Let \(A\) be a strict ultrametric \(\mathcal{O}_{p}\)-Banach algebra, \(I\) be a open ideal of \(A\). The **left projection functor**\(\Pi_{I}:A-\mathbf{Banach}\to A-\mathbf{Banach}\) is defined as: \[\Pi_{I}(M)=\mathrm{Hom}_{A\,-\,\mathbf{Banach}}(A/I,M)=\left\{x\in M\mid ax=0, \forall a\in I\right\}.\] The **right projection functor**\(\pi_{I}:A-\mathbf{Banach}_{S}\to A-\mathbf{Banach}_{S}\) is defined as: \[\pi_{I}(M)=A/I\mathop{\otimes}_{A}M=M/IM.\] Since \(I\) is open ideal, it is easy to show that \(IM\) is open-closed submodule of \(M\). Moreover, **it is convenient to permit \(I=(0)\) or \((1)\)**. _Remark 4.4_.: The quotient algebra \(A/I\) can be viewed as the \(p\)-adic counterpart of complex valued function \(C(K)\) over compact set \(K\). \(\Pi_{I}(M)\) gives a "categorical submodule of \(M\)", \(\pi_{I}(M)\) gives a "categorical quotient module of \(M\)". Let \(\mathcal{H}\) be a Hilbert space, the close subspace \(X\) of \(\mathcal{H}\) has a orthogonal projection theorem: \[\mathcal{H}=X\hat{\oplus}X^{\perp},\mathcal{H}/X\simeq X^{\perp}.\] **Proposition 4.1**.: _Let \(I,J\) be open ideal of \(A\), \(M\in A-\mathbf{Banach}_{S}\). the projection functor has the following pullback and pushout diagram:_ _where \(I\cap J\) corresponds to the abstract union of "measurable set", \(I+J\) corresponds to the abstract intersection of "measurable set". So the direct limit and inverse limit of \(\Pi_{I}(-)\) and \(\pi_{I}(-)\) could be defined. Let \(\Omega\) be a set of some open ideal of \(A\) such that:_ \[I,J\in\Omega\implies I\cap J,I+J\text{ in }\Omega.\] _There exists a canonical morphism:_ _Remark 4.5_.: In general, \(\underset{I\in\Omega}{\varprojlim}\Pi_{I}(M)\) can be viewed as "**categorical interior** of \(M\)", \(\underset{I\in\Omega}{\varprojlim}\pi_{I}(M)\) can be viewed as "**categorical closure** of \(M\)". The phenomenon is similar as the categorical definition of interior and closure of topological space \(X\): In the case of Hilbert space, the orthogonal projection theorem tells us the closed subspace of Hilbert space is both "categorical open" and "categorical closed". So the category of Hilbert space admits excellent measure-theoretic structure. **Proposition 4.2**.: _In the functor category \(\operatorname{Fct}(A-\mathbf{Banach},A-\mathbf{Banach})\) and \(\operatorname{Fct}(A-\mathbf{Banach}_{S},A-\mathbf{Banach}_{S})\) we have:_ \[0.\Pi_{(1)}=0_{A\,-\,\mathbf{Banach}},\pi_{(1)}=0_{A\,-\,\mathbf{Banach}_{S}};\] \[1.\Pi_{(0)}=Id_{A\,-\,\mathbf{Banach}},\pi_{(0)}=Id_{A\,-\,\mathbf{Banach}_{S}};\] \[2.\Pi_{I}^{2}=\Pi_{I},\pi_{I}^{2}=\pi_{I};\] \[3.\Pi_{I}\circ\Pi_{J}=\Pi_{I+J},\pi_{I}\circ\pi_{J}=\pi_{I+J};\] \[4.\Pi_{I}\circ\Pi_{J}=0\iff I+J=(1);\] \[5.\pi_{I}\circ\pi_{J}=0\iff I+J=(1).\] **Proposition 4.3**.: _Let \(\operatorname{Comp}A=\{I\subset A\mid I\text{ is open ideal}\}\) be the set of open ideal of \(A\), Let \(\Omega\) be any index set. 
we have:_ \[1.I,J\in\operatorname{Comp}A\Rightarrow I\cap J\in\operatorname{Comp}A;\] \[2.\left\{I_{k}\right\}_{k\in\Omega}\subset\operatorname{Comp}A\Rightarrow\sum_{k\in\Omega}I_{k}\in\operatorname{Comp}A.\] \(\operatorname{Comp}A\) _has a lattice structure with \(\cap\) and \(\sum\)._ Let \(R\) be an object in the category \(\mathcal{O}_{p}-\mathbf{Banach\ Alg}\), not necessarily commutative. Every morphism \(f\in\operatorname{Hom}_{\mathcal{O}_{p}-\mathbf{Banach\ Alg}}\left(A,R\right)\) induces a map: \[f^{*}:\operatorname{Comp}R \rightarrow\operatorname{Comp}A\] \[V \mapsto f^{-1}\left(V\right).\] There exists a **relative topology** on \(\operatorname{Hom}_{\mathcal{O}_{p}-\mathbf{Banach\ Alg}}\left(A,R\right)\), defined as the weakest topology that makes every element of \(A\) a continuous function on \(\operatorname{Hom}_{\mathcal{O}_{p}-\mathbf{Banach\ Alg}}\left(A,R\right)\). Moreover, there exists a categorical isomorphism:3 Footnote 3: It comes from the definition of continuous morphism. \[\operatorname{Hom}_{\mathcal{O}_{p}-\mathbf{Banach\ Alg}}\left(A,R\right)\simeq\varprojlim_{\varepsilon^{*}\in\mathfrak{m}_{p}^{+}}\ \varinjlim_{\varepsilon\in\mathfrak{m}_{p}^{+}}\operatorname{Hom}\left(A/\varepsilon A,R/\varepsilon^{*}R\right).\] Now let's study the spectral measure theory on the formal group scheme \[\mathbb{G}_{m}:=\left\{f(t)=\sum_{n=-\infty}^{\infty}a_{n}t^{n},a_{n}\in\mathcal{O}_{p},a_{n}\to 0,\ n\to\pm\infty\right\}=\mathcal{O}_{p}\left\langle t,t^{-1}\right\rangle\] **Definition 4.8**.: Let \[u(\mathbb{G}_{m})=\left\{f\in\mathcal{O}_{p}\left[t,t^{-1}\right]\Biggm{|}f(t)=c\prod_{\lambda\in\mathcal{O}_{p}^{\times}}(t-\lambda)^{n_{\lambda}},c\in\mathcal{O}_{p}^{\times},n_{\lambda}\in\mathbb{Z}\right\}.\] We call the elements of \(u(\mathbb{G}_{m})\) **unit polynomials** of \(\mathbb{G}_{m}\). It is immediate that: \[f,g\in u(\mathbb{G}_{m})\implies fg\in u(\mathbb{G}_{m})\] **Definition 4.9**.: Let \(\varepsilon\in\mathfrak{m}_{p}^{+},f\in u(\mathbb{G}_{m})\); the **unit ideal** is defined as: \[I_{\varepsilon,f}=\begin{cases}(\varepsilon,f)&\varepsilon\in\mathfrak{m}_{p}\\ \bigcup_{\varepsilon\in\mathfrak{m}_{p}}I_{\varepsilon,f}&\varepsilon=1^{-}\end{cases}\] In geometry, \(I_{\varepsilon,f}\) represents some balls whose radius is \(\varepsilon\in\mathfrak{m}_{p}^{+}\). We want to use \(I_{\varepsilon,f}\) to parameterize the action of the formal group scheme. Let \(\operatorname{Root}f\) denote the set of all roots of \(f\) (counted with multiplicity), let \(d(\operatorname{Root}f,\operatorname{Root}g)\) be the distance between the sets \(\operatorname{Root}f\) and \(\operatorname{Root}g\), and let \(res(f,g)\) be the resultant of \(f\) and \(g\). 
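Before the equivalences of Theorem 4.4 below, here is a toy computation (a Python sketch with illustrative choices of \(p\), \(f\) and \(g\); it is not taken from the paper): for \(f(t)=t-1\) and \(g(t)=t-\lambda\) with \(\left|\lambda-1\right|_{p}=1\), the resultant \(res(f,g)=\lambda-1\) is a unit, and the Bezout identity \(\tfrac{1}{\lambda-1}f-\tfrac{1}{\lambda-1}g=1\) yields orthogonal idempotents \(P_{1},P_{2}\) modulo \(fg\) of the kind used in the proof of the theorem.

```python
# Toy check of the projections built from the Bezout identity
#   (1/(lam-1))*f(t) - (1/(lam-1))*g(t) = 1   for  f = t - 1,  g = t - lam.
# Assumptions for the demo: p = 5, precision p**K, lam = 7 (so |lam - 1|_p = 1).
p, K, lam = 5, 8, 7
MOD = p ** K

# Work in O_p[t]/(p**K, f*g).  Since f*g is monic of degree 2, every class has a
# unique representative a + b*t, stored as the pair (a, b).
def red(a, b, c=0):
    """Reduce a + b*t + c*t^2 using t^2 = (1+lam)*t - lam (mod f*g)."""
    return ((a - c * lam) % MOD, (b + c * (1 + lam)) % MOD)

def mul(x, y):
    a, b = x
    c, d = y
    return red(a * c, a * d + b * c, b * d)

inv = pow(lam - 1, -1, MOD)                 # 1/res(f,g), valid since lam-1 is a unit
P1 = red((-1 * inv) % MOD, inv)             # P1 =  f/(lam-1) = (t-1)/(lam-1)
P2 = red((lam * inv) % MOD, (-inv) % MOD)   # P2 = -g/(lam-1) = (lam-t)/(lam-1)

one = (1, 0)
assert tuple((P1[i] + P2[i]) % MOD for i in range(2)) == one   # P1 + P2 = 1
assert mul(P1, P2) == (0, 0)                                   # P1 * P2 = 0
assert mul(P1, P1) == P1 and mul(P2, P2) == P2                 # idempotents
print("orthogonal idempotents modulo (p**K, f*g):", P1, P2)
```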
**Theorem 4.4**.: _Let \(p\geq 3\), the following conditions are equivalent:_ \[1.I_{\varepsilon,f}+I_{\varepsilon,g}=(1);\] \[2.\Pi_{I_{\varepsilon,f}}\circ\Pi_{I_{\varepsilon,g}}=0;\] \[3.\pi_{I_{\varepsilon,f}}\circ\pi_{I_{\varepsilon,g}}=0;\] \[4.d(\operatorname{Root}f,\operatorname{Root}g)=1;\] \[5.res(f,g)\in\mathcal{O}_{p}^{\times};\] \[6.\mathbb{G}_{m}/I_{\varepsilon,fg}\simeq\mathbb{G}_{m}/I_{ \varepsilon,f}\times\mathbb{G}_{m}/I_{\varepsilon,g}.\] Proof.: \(1\Leftrightarrow 2\Leftrightarrow 3\): Noted that: \[\Pi_{I_{\varepsilon,f}}\circ\Pi_{I_{\varepsilon,g}}=\Pi_{I_{\varepsilon,f}+I_{ \varepsilon,g}}\] \[\pi_{I_{\varepsilon,f}}\circ\pi_{I_{\varepsilon,g}}=\pi_{I_{\varepsilon,f}+I_{ \varepsilon,g}}\] \(1\Leftrightarrow 4\): Let \(\tilde{f},\tilde{g}\) be the reduction of \(f,g\) in the ring \(\overline{\mathbb{F}_{p}}\left[t,t^{-1}\right]\), then \(\tilde{f},\tilde{g}\) are coprime. We have: \[\exists k(t),l(t)\in\overline{\mathbb{F}_{p}}\left[t,t^{-1}\right],k(t)\tilde {f}(t)+l(t)\tilde{g}(t)=1.\] If \(\lambda\in\overline{\mathbb{F}_{p}}\) is a common root of \(\tilde{f},\tilde{g}\), then we have: \[k(\lambda)\tilde{f}(\lambda)+l(\lambda)\tilde{g}(\lambda)=0,\] which leads to a contradiction. So we have: \(d(\operatorname{Root}f,\operatorname{Root}g)=1\). Suppose \(d(\operatorname{Root}f,\operatorname{Root}g)=1\), on the one hand, the reduction \(\tilde{f},\tilde{g}\) are coprime since \(\tilde{f},\tilde{g}\) has no common root. On the other hand, the reduction map: \[\mathcal{O}_{p}\left\langle t,t^{-1}\right\rangle\to\overline{\mathbb{F}_{p}} \left[t,t^{-1}\right]\] is surjective. There exists a lifting of \(k(t),l(t)\) such that: \[I_{\varepsilon,f}+I_{\varepsilon,g}=(1).\] \(4\Leftrightarrow 5\): From the definition of resultant. \(5\Leftrightarrow 6\): There exists \(k(t),l(t)\in\mathcal{O}_{p}\left[t,t^{-1}\right]\) such that: \[k(t)f(t)+l(t)g(t)=res(f,g).\] We can define two orthogonal projection \(P_{1},P_{2}\in\mathbb{G}_{m}/I_{\varepsilon,fg}\): \[1.P_{1}(t)=k(t)f(t)/res(f,g),P_{2}=l(t)g(t)/res(f,g);\] \[2.P_{1}^{2}=P_{1},P_{2}^{2}=P_{2};\] \[3.P_{1}+P_{2}=1,P_{1}P_{2}=P_{2}P_{1}=0;\] \[4.P_{1}(t)g(t)=0,P_{2}(t)f(t)=0;\] \[5.P_{1}(t)f(t)=f(t),P_{2}(t)g(t)=g(t);\] \[6.\forall h(t)\in\mathbb{G}_{m}/I_{\varepsilon,fg},h(t)=P_{1}(t) h(t)+P_{2}(t)h(t).\] From the argument of Chinese remainder theorem, there exists a isomorphism: \[\mathbb{G}_{m}/I_{\varepsilon,fg}\overset{P_{1}\times P_{2}}{\simeq}\mathbb{ G}_{m}/I_{\varepsilon,g}\times\mathbb{G}_{m}/I_{\varepsilon,f}.\] _Remark 4.6_.: This theorem tells us how to define the orthogonality property on \(\mathcal{O}_{p}^{\times}\) can be parameterized by the set of Teichmuller element \(T\left(\mathcal{O}_{p}^{\times}\right)\), which is a lifting of \(\overline{\mathbb{F}_{p}}\). The distance of different Teichmuller element is always \(1\). Moreover, there exists a one to one correspondence between the balls in \(\mathcal{O}_{p}^{\times}\) whose radius is \(1\) and \(T\left(\mathcal{O}_{p}^{\times}\right)\). \[T\left(\mathcal{O}_{p}^{\times}\right)=\left\{x\in\mathcal{O}_{p}^{\times}\ \Big{|}\ \exists k\in\mathbb{N},x^{p^{k}}=x\right\}\] **Corollary 4.5**.: _Suppose \(f\in u(\mathbb{G}_{m}),\lambda\in T\left(\mathcal{O}_{p}^{\times}\right)\). 
Let_ \[\mathrm{Root}_{\lambda}(f)=\left\{t_{\lambda}\in\mathrm{Root}(f)\ \Big{|}\ \left|t_{ \lambda}-\lambda\right|_{p}<1\right\},f_{\lambda}(t)=\prod_{t_{\lambda}\in \mathrm{Root}_{\lambda}(f)}(t-t_{\lambda}).\] _We have:_ \[f(t)=\prod_{\lambda\in T(\mathcal{O}_{p})}f_{\lambda}(t).\] _There exists a decomposition:_ \[\mathbb{G}_{m}/I_{\varepsilon,f}\simeq\prod_{\lambda\in T(\mathcal{O}_{p})} \mathbb{G}_{m}/I_{\varepsilon,f_{\lambda}}\] _Let_ \[u_{\lambda}(\mathbb{G}_{m})=\left\{f\in u(\mathbb{G}_{m})\mid\mathrm{Root}(f)= \mathrm{Root}_{\lambda}(f)\right\}.\] **Definition 4.10**.: We call \(f_{n}(t)=t^{n}-1,n\in\mathbb{Z}\)**principal unit polynomial**, \(I_{\varepsilon,n}=I_{\varepsilon,f_{n}(t)}\)**principal unit ideal**. **Proposition 4.6**.: _Every unit ideal \(I_{\varepsilon,f}\) include a principal unit ideal \(I_{\varepsilon,n}\) for some \(n\in\mathbb{Z}\)._ Proof.: Suppose the reduction of \(f\) in \(\overline{\mathbb{F}_{p}}\left[t,t^{-1}\right]\) is \(\tilde{f}\), since every invertible matrix in \(M_{k}(\overline{\mathbb{F}_{p}})\) has finite order, there exists \(N\) such that \(\tilde{f}(t)\mid t^{N}-1\). In the \(\mathbb{G}_{m}/I_{\varepsilon,f}\) we have: \[\left|t^{N}-1\right|<1.\] So there exists \(l\) such that: \[\left|t^{p^{l}N}-1\right|<\left|\varepsilon\right|\implies I_{\varepsilon,f} \supseteq I_{\varepsilon,p^{l}N}.\] _Remark 4.7_.: \(\mathbb{G}_{m}/I_{\varepsilon,n}\) is a subgroup scheme of \(\mathbb{G}_{m}\). However, \(\mathbb{G}_{m}/I_{\varepsilon,n}\) behaves bad in the category of non-commutative ultrametric \(\mathcal{O}_{p}\)-Banach algebra. **Proposition 4.7**.: \[1.I_{\varepsilon,n}+I_{\varepsilon,m}=I_{\varepsilon,gcd(n,m)};\] \[2.I_{\varepsilon,n}\cap I_{\varepsilon,m}=I_{\varepsilon,lcm(n,m)}.\] Let \(M\in Obj(\mathbb{G}_{m}-\mathbf{Banach}_{S})\), define \(M_{\varepsilon}=M/\varepsilon M\). We have: **Proposition 4.8**.: \[\varliminf_{n\geq 1}\Pi_{I_{\varepsilon,n}}(M_{\varepsilon}) =\varliminf_{n\geq 1}\mathrm{Hom}_{\mathbb{G}_{m}}-\mathbf{Banach}( \mathbb{G}_{m}/I_{\varepsilon,n},M_{\varepsilon})\] \[=\varliminf_{n\geq 1}\mathrm{Hom}_{\mathbb{G}_{m}}-\mathbf{Banach}( \mathbb{G}_{m}/I_{\varepsilon,n!},M_{\varepsilon})\] \[=\varliminf_{n\geq 1}\underset{\lambda\in T(\mathcal{O}_{P}^{ \times})}{\oplus}M_{\varepsilon,n,\lambda,tor}\] \[=\underset{\lambda\in T\left(\mathcal{O}_{P}^{\times}\right)}{ \oplus}M_{\varepsilon,n,\lambda,tor}\] \[=M_{\varepsilon,tor},\] _we have:_ \[M_{\varepsilon,tor} =\left\{m\in M_{\varepsilon}\mid\exists f\in u(\mathbb{G}_{m}), f(t)m=0\right\}\] \[M_{\varepsilon,\lambda,tor} =\left\{m\in M_{\varepsilon,tor}\mid\exists f\in u_{\lambda}( \mathbb{G}_{m}),f(t)m=0\right\}.\] **Proposition 4.9**.: \[\varliminf_{n\geq 1}\pi_{I_{\varepsilon,n}}(M_{\varepsilon}) =\varliminf_{n\geq 1}M_{\varepsilon}/(t^{n}-1)M_{\varepsilon}\] \[=\varliminf_{n\geq 1}M_{\varepsilon}/(t^{n!}-1)M_{\varepsilon}\] \[=\varliminf_{n\geq 1}\prod_{\lambda\in T\left(\mathcal{O}_{P}^{ \times}\right)}M_{\varepsilon,n,\lambda}\] \[=\prod_{\lambda\in T\left(\mathcal{O}_{P}^{\times}\right)}{ \varliminf_{n\geq 1}}M_{\varepsilon,n,\lambda}\] \[=\prod_{\lambda\in T\left(\mathcal{O}_{P}^{\times}\right)}M_{ \varepsilon,\lambda}\] \[=\hat{M_{\varepsilon}}.\] _Remark 4.8_.: \(M_{\varepsilon,tor}\) and \(\hat{M_{\varepsilon}}\) could be \(0\) for some \(M\). Let \(Frac\left(\mathbb{G}_{m}\right)\) be the fraction field of \(\mathbb{G}_{m}\), which has a multiplicative norm induced by \(\left|-\right|_{\mathbb{G}_{m}}\). 
Let \(K\) be the completion of \(Frac\left(\mathbb{G}_{m}\right)\) and let \(R\) be the integral ring of \(K\), which is a strict \(\mathbb{G}_{m}-\mathbf{Banach}\) module. We have: \[R_{\varepsilon,tor}=\hat{R_{\varepsilon}}=0.\] This example shows that the generic point of \(\mathbb{G}_{m}\) has an influence on the category \(\mathbb{G}_{m}-\mathbf{Banach}\). **Proposition 4.10**.: _Let \(M\in Obj(\mathbb{G}_{m}-\mathbf{Banach}_{S})\), \(M_{i}=\cup_{\varepsilon\in\mathfrak{m}_{p}}\varepsilon M\), \(M_{1}=M/M_{i}\). Suppose:_ \[M_{\varepsilon,tor}=\hat{M_{\varepsilon}}=0,\forall\varepsilon\in\mathfrak{m}_{p}^{+},\] _then \(M_{1}\) is a \(\overline{\mathbb{F}_{p}}\left(t\right)\)-linear space._ Proof.: Suppose \(M_{1}\neq 0\). For any \(\lambda\in\overline{\mathbb{F}_{p}}\), there exists an exact sequence: \[0\longrightarrow\ker\left(t-\lambda\right)\longrightarrow M_{1}\stackrel{t-\lambda}{\longrightarrow}M_{1}\longrightarrow\operatorname{coker}\left(t-\lambda\right)\longrightarrow 0.\] We have: \[\ker\left(t-\lambda\right)=\operatorname{coker}\left(t-\lambda\right)=0.\] So \(t-\lambda\) is invertible. Hence \(M_{1}\) is a \(\overline{\mathbb{F}_{p}}\left(t\right)\)-linear space. **Definition 4.11**.: We call \(M_{\infty}=\overline{\mathbb{F}_{p}}\left(t\right)\underset{\mathbb{G}_{m}}{\otimes}M\) the **infinite place of the spectrum decomposition**. Let \(I\) be an open ideal of \(\mathbb{G}_{m}\). We call \(\Pi_{I}(M_{\varepsilon}),\pi_{I}(M_{\varepsilon})\) the **compact places of the spectrum decomposition**. We call the set \[\operatorname{Sp}(M)=\left\{M_{\infty};M_{\varepsilon,\lambda,tor};M_{\varepsilon,\lambda},\forall\varepsilon\in\mathfrak{m}_{p}^{+},\lambda\in T\left(\mathcal{O}_{p}^{\times}\right)\right\}\] **the spectrum decomposition of \(M\)**.4 Footnote 4: Note that the spectrum decomposition of \(M\) corresponds to \(\operatorname{Berk}\mathbb{G}_{m}\), the spectrum of \(\mathbb{G}_{m}\) in the Berkovich sense. **Theorem 4.11**.: _(completeness of spectrum) Suppose there exists \(\psi\in M\) such that5 \(\left|\psi\right|=1\); then \(\operatorname{Sp}(M)\) is not \(\left\{0\right\}\)._ Footnote 5: \(\psi\) can be understood as a wave function. Proof.: We have: \[M_{1}\neq 0.\] Suppose \(\operatorname{Sp}(M)\) is \(\left\{0\right\}\); then, by Proposition 4.10, \(M_{1}\) is a \(\overline{\mathbb{F}_{p}}\left(t\right)\)-linear space, which leads to a contradiction. ## 5 The Galois theory of operators The topological properties of \(p\)-adic unitary operators are related to \(\mathbb{Q}_{p}^{\times}\), more precisely, to the abelian extension theory of \(\mathbb{Q}_{p}\). In local class field theory, the pro-finite completion of \(\mathbb{Q}_{p}^{\times}\) corresponds to \(\operatorname{Gal}\left(\mathbb{Q}_{p}^{ab}\mid\mathbb{Q}_{p}\right)\). There exists a local **Artin morphism** \[\theta:\mathbb{Q}_{p}^{\times}\rightarrow\operatorname{Gal}\left(\mathbb{Q}_{p}^{ab}\mid\mathbb{Q}_{p}\right)\] compatible with the short exact sequence \[0\longrightarrow\mathbb{Z}_{p}^{\times}\longrightarrow\mathbb{Q}_{p}^{\times}\longrightarrow\mathbb{Z}\longrightarrow 0,\] where \(\theta\) is an almost isomorphism. In this section, we discuss the Galois theory of operators. 
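The short exact sequence above can be made concrete with a small computation (a Python sketch with illustrative values, not part of the original text): every \(x\in\mathbb{Q}_{p}^{\times}\) factors as \(x=p^{v_{p}(x)}u\) with \(u\in\mathbb{Z}_{p}^{\times}\), the valuation \(v_{p}\) realizes the surjection onto \(\mathbb{Z}\), and its kernel is exactly \(\mathbb{Z}_{p}^{\times}\).

```python
# A concrete view of 0 -> Z_p^x -> Q_p^x -> Z -> 0 (illustrative sketch with
# p = 5, not from the paper): factor a nonzero rational x as p^v * u with u a
# p-adic unit, and check that the valuation v is a group homomorphism to Z.
from fractions import Fraction

p = 5

def vp(x: Fraction) -> int:
    """p-adic valuation of a nonzero rational number."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def unit_part(x: Fraction) -> Fraction:
    """The unit u in x = p^v * u; numerator and denominator are prime to p."""
    return x / Fraction(p) ** vp(x)

x, y = Fraction(50, 3), Fraction(7, 125)
for z in (x, y, x * y):
    print(z, "=", f"{p}^{vp(z)} *", unit_part(z))

assert vp(x * y) == vp(x) + vp(y)       # the valuation map is a homomorphism
assert vp(unit_part(x)) == 0            # unit parts lie in the kernel Z_p^x
```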
**Proposition 5.1**.: _We define the following group:_ \[B\left(1\right) =\left\{x\in\mathcal{O}_{p}^{\times}\ \Big{|}\ \left|x-1\right|_{p}<1\right\}\] \[T\left(\mathcal{O}_{p}^{\times}\right) =\left\{x\in\mathcal{O}_{p}^{\times}\ \Big{|}\ \exists k\in\mathbb{N},x^{p^{k}}=x\right\}.\] _Let \(\sigma\) be the Frobenius map:_ \[\sigma:\mathcal{O}_{p}^{\times} \rightarrow\mathcal{O}_{p}^{\times}\] \[x \mapsto x^{p}.\] _Let \(x\in\mathcal{O}_{p}^{\times}\), we have:_ \[\lim_{n\rightarrow\infty}\sigma^{n!}(x)\to 1 \Leftrightarrow x\in B\left(1\right)\] \[\lim_{n\rightarrow\infty}\sigma^{n!}(x)\to x \Leftrightarrow x\in T\left(\mathcal{O}_{p}^{\times}\right).\] _Let \(x\in\mathbb{C}_{p}\), we have:_ \[\lim_{n\rightarrow\infty}\sigma^{n!}(x)\ \text{converges} \Leftrightarrow x\in\mathcal{O}_{p}\] \[\lim_{n\rightarrow\infty}x^{n!} \to 1 \Leftrightarrow x\in\mathcal{O}_{p}^{\times}\] \[.\] _Finally, we have:_ \[\mathcal{O}_{p}^{\times} \simeq B\left(1\right)\times T\left(\mathcal{O}_{p}^{\times}\right)\] \[x \mapsto\left(\frac{x}{\lim_{n\rightarrow\infty}\sigma^{n!}(x)},\lim_{n\rightarrow\infty}\sigma^{n!}(x)\right).\] **Definition 5.1**.: Let \(M\in Obj(\mathbb{G}_{m}-\mathbf{Banach})\), suppose there exists \(x\in M,|x|=1\). Let \(U\) be a \(p\)-adic unitary operator on \(M\), \(\sigma:U\mapsto U^{p}\). We say \(U\) is **continuous type** if the strong operator limit of \(\left\{\sigma^{n!}(U)\right\}_{n\in\mathbb{N}}\) converges to identity operator: \[s-\lim_{n\rightarrow\infty}\sigma^{n!}(U)\to 1.\] We say \(U\) is **Teichmuller type** if the strong operator limit of \(\left\{\sigma^{n!}(U)\right\}_{n\in\mathbb{N}}\) converges to \(U\). \[s-\lim_{n\rightarrow\infty}\sigma^{n!}(U)\to U.\] We say \(U\) is **pro-finite type** if the strong operator limit of \(\left\{U^{n!}\right\}_{n\in\mathbb{N}}\) converges to identity operator: \[s-\lim_{n\rightarrow\infty}U^{n!}\to 1.\] Moreover, \(U\) is **pro-finite type** if and only if: \[M_{\varepsilon,tor}=M_{\varepsilon}\] The following theorem follows from [3]. We do not need the normal property of \(p\)-adic unitary operators. **Theorem 5.2**.: _(\(p\)-adic Stone's theorem) \(U\) is a \(p\)-adic unitary operator of **continuous type** if and only if the one-parameter unitary group \(\left\{U^{t}\right\}_{t\in\mathbb{Z}_{p}}\) can be well defined._ _Moreover, there exists a \(\mathbb{Z}_{p}^{\times}\simeq\operatorname{Gal}\left(\mathbb{Q}_{p}^{ab}\mid \mathbb{Q}_{p}^{ur}\right)\) action on the one-parameter unitary group \(\left\{U^{t}\right\}_{t\in\mathbb{Z}_{p}}\), where \(\mathbb{Q}_{p}^{ab}\) is the maximum abelian field extension of \(\mathbb{Q}_{p}\), \(\mathbb{Q}_{p}^{ur}\) is the maximum unramified extension of \(\mathbb{Q}_{p}\)._ Proof.: The definition of continuous type \(p\)-adic unitary operator is equivalent to the \(p\)-adic continuous property of group \(\left\{U^{n}\right\}_{n\in\mathbb{Z}}\) at \(n=0\). So \(\left\{U^{n}\right\}_{n\in\mathbb{Z}}\) can be uniquely extended to \(\left\{U^{t}\right\}_{t\in\mathbb{Z}_{p}}\) with the following diagram commutes: Let \(\left\{U^{t}\right\}_{t\in\mathbb{Z}_{p}}\) be any one-parameter unitary group, \(\alpha\in\mathbb{Z}_{p}^{\times}\), we can define: \[\phi_{\alpha}:\left\{U^{t}\right\}_{t\in\mathbb{Z}_{p}} \stackrel{{\simeq}}{{\longrightarrow}}\left\{U^{t} \right\}_{t\in\mathbb{Z}_{p}}\] \[U\mapsto U^{\alpha}.\] \(\left\{\phi_{\alpha},\alpha\in\mathbb{Z}_{p}^{\times}\right\}\) is the isomorphism group of \(\left\{U^{t}\right\}_{t\in\mathbb{Z}_{p}}\). 
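The limits in Proposition 5.1, and hence the three types in Definition 5.1, are easy to observe numerically (a Python sketch with assumed parameters \(p=7\), \(K=12\); it is not part of the original text): iterating the Frobenius \(x\mapsto x^{p}\) on a unit \(x\in\mathbb{Z}_{p}^{\times}\), truncated modulo \(p^{K}\), stabilises at the Teichmuller part of \(x\), and dividing it out leaves an element of \(B\left(1\right)\).

```python
# Numerical illustration (assumed parameters p = 7, K = 12, not from the paper)
# of the decomposition O_p^x ~ B(1) x T(O_p^x) in Proposition 5.1: iterate the
# Frobenius sigma(x) = x^p modulo p**K until it stabilises.
p, K = 7, 12
MOD = p ** K

def teichmuller_part(x: int) -> int:
    """Approximates lim_n sigma^{n!}(x) mod p**K by repeated p-th powering."""
    t = x % MOD
    for _ in range(2 * K):      # x^(p^k) is a Cauchy sequence; 2K steps suffice
        t = pow(t, p, MOD)
    return t

x = 10                          # any unit of Z_p (10 is prime to 7)
t = teichmuller_part(x)
u = (x * pow(t, -1, MOD)) % MOD # the remaining "continuous" factor of x

print("Teichmuller part t =", t, "| t^p == t:", pow(t, p, MOD) == t)
print("B(1) part u        =", u, "| u == 1 mod p:", u % p == 1)
# t is of Teichmuller type (fixed by sigma), u lies in the ball B(1), and
# x = u * t, matching the factorisation in Proposition 5.1.
```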
In the case of usual one-parameter unitary group on Hilbert space, the action of \(\mathbb{Z}_{p}^{\times}\) corresponds to the skew-symmetry property of generator. The Hermite conjugate gives a \(\mathbb{Z}/2\mathbb{Z}\simeq\operatorname{Gal}\left(\mathbb{C}\mid\mathbb{R}\right)\)-action. **Theorem 5.3**.: _(Spectral decomposition theorem of Teichmuller element) \(U\) is a \(p\)-adic unitary operator of **Teichmuller type** if and only if there exists a spectral decomposition:_ \[\sum_{\lambda\in T\left(\mathcal{O}_{p}^{\times}\right)}\pi_{\lambda}=1,\ \ \sum_{\lambda\in T\left(\mathcal{O}_{p}^{\times}\right)}\lambda\pi_{\lambda}=U, \ \ \pi_{\lambda}^{2}=\pi_{\lambda},\ \ \pi_{\lambda}\pi_{\lambda^{*}}=0(\forall \lambda\neq\lambda^{*})\] _the sum converges in strong operator topology._ _Moreover, there exists a \(\hat{\mathbb{Z}}\simeq\operatorname{Gal}\left(\mathbb{Q}_{p}^{ur}\mid\mathbb{ Q}_{p}\right)\) action on \(U\), where \(\mathbb{Q}_{p}^{ur}\) is the maximum unramified field extension of \(\mathbb{Q}_{p}\) by joining the \(T\left(\mathcal{O}_{p}^{\times}\right)\)._ Proof.: We refer to [9] for the proof of spectral decomposition. We only show how to construct a \(\operatorname{Gal}\left(\mathbb{Q}_{p}^{ur}\mid\mathbb{Q}_{p}\right)\)-action on \(\left\{U^{n}\right\}_{n\in\mathbb{Z}}\). First, \(\operatorname{Gal}\left(\mathbb{Q}_{p}^{ur}\mid\mathbb{Q}_{p}\right)\) has a pro-finite group action on \(T\left(\mathcal{O}_{p}^{\times}\right)\). Let \(g\in\operatorname{Gal}\left(\mathbb{Q}_{p}^{ur}\mid\mathbb{Q}_{p}\right)\), we have a group isomorphism: \[\psi_{g}:T\left(\mathcal{O}_{p}^{\times}\right) \to T\left(\mathcal{O}_{p}^{\times}\right)\] \[\lambda \mapsto g(\lambda).\] Finally, \(\mathrm{Gal}\left(\mathbb{Q}_{p}^{ur}\mid\mathbb{Q}_{p}\right)\) has a group action on \(p\)-adic unitary operator of **Teichmuller type**. The construction is: \[U =\sum_{\lambda\in T\left(\mathcal{O}_{p}^{\times}\right)}\lambda \pi_{\lambda}\] \[\psi_{g}(U) =\sum_{\lambda\in T\left(\mathcal{O}_{p}^{\times}\right)}g( \lambda)\pi_{\lambda}.\] Since the \(\mathrm{Gal}\left(\mathbb{Q}_{p}^{ur}\mid\mathbb{Q}_{p}\right)\)-group action on \(T\left(\mathcal{O}_{p}^{\times}\right)\) is pro-finite, the sum converges in strong operator topology. **Theorem 5.4**.: _(Jordan decomposition theorem of pro-finite unitary operator) \(U\) is a \(p\)-adic unitary operator of **pro-finite type** if and only if there exists a Jordan decomposition:_ \[U=U_{s}U_{n},\] _where \(U_{s}\) is a \(p\)-adic unitary operator of **Teichmuller type**, \(U_{n}\) is a \(p\)-adic unitary operator of **continuous type**. The Jordan decomposition of \(U\) is unique._ _Moreover, there exists a \(\hat{\mathbb{Z}}\times\mathbb{Z}_{p}^{\times}\simeq\mathrm{Gal}\left(\mathbb{ Q}_{p}^{ab}\mid\mathbb{Q}_{p}\right)\) action on \(U\), where \(\mathbb{Q}_{p}^{ab}\) is the maximum abelian field extension of \(\mathbb{Q}_{p}\)._ Proof.: Recall that \(\mathcal{O}_{p}^{\times}\simeq B\left(1\right)\times T\left(\mathcal{O}_{p}^{ \times}\right)\). 
For any \(x\in\left(\mathcal{O}_{p}/\varepsilon\right)^{\times}\), \(x\) has a finite order: \[\exists n(x)\in\mathbb{N},x^{n(x)}=1\] The Jordan decomposition of \(x\) is uniquely defined by the limit: \[x_{s}=\lim_{n\rightarrow\infty}\sigma^{n!}(x),\ x_{n}=\frac{x}{x_{s}}\] From the definition of pro-finite type \(p\)-adic unitary operator, for any \(m\) in \(M\), we have: \[\lim_{k\rightarrow\infty}U^{k!}m=m.\] Let \(\varepsilon\in\mathcal{O}_{p}-\left\{0\right\}\), \(m_{\varepsilon}\in M_{\varepsilon}\) be the reduction of \(m\), we have: \[\exists n\in\mathbb{N},\ U_{\varepsilon}^{n}m_{\varepsilon}=m_{\varepsilon}.\] The Jordan decomposition of \(U_{\varepsilon}\) is defined by: \[U_{\varepsilon,s}\left(m_{\varepsilon}\right)=\lim_{n\rightarrow\infty} \sigma^{n!}(U_{\varepsilon})\left(m_{\varepsilon}\right),\ U_{\varepsilon,n}= \frac{U_{\varepsilon}}{U_{\varepsilon,s}}.\] Since we have: \[M=\varprojlim_{\varepsilon\in\mathfrak{m}_{p}^{+}}M_{\varepsilon}\] The limit \(\lim_{n\rightarrow\infty}\sigma^{n!}(U)\left(m\right)\) converges. Let's define: \[U_{s}\left(m\right)=\lim_{n\rightarrow\infty}\sigma^{n!}(U)\left(m\right),\ U _{n}=\frac{U}{U_{s}}\] The Galois group \(\operatorname{Gal}\left(\mathbb{Q}_{p}^{ab}\mid\mathbb{Q}_{p}\right)\) action on \(U\) is defined by the product: \[\operatorname{Gal}\left(\mathbb{Q}_{p}^{ab}\mid\mathbb{Q}_{p}\right) \simeq\hat{\mathbb{Z}}\times\mathbb{Z}_{p}\] \[\left(g,h\right)\left(U\right) =g(U_{s})h(U_{n}).\] Let \(\sigma\in\operatorname{Gal}\left(\mathbb{C}_{p}\mid\mathbb{Q}_{p}\right)\). The categorical explanation is that the Galois group acts on the set of open ideal of \(\mathbb{G}_{m}\), which is induced by the following diagram: Moreover, every isomorphism \(f\) of \(\mathbb{G}_{m}\) induce a bijective map: \[f^{*}:\operatorname{Comp}\mathbb{G}_{m}\rightarrow\operatorname{Comp}\mathbb{ G}_{m}\] The involution \(i:t\to t^{-1}\) and the rotation \(r:t\rightarrow\alpha t,|\alpha|=1\) can be realized as the geometric symmetry on \(\operatorname{Comp}\mathbb{G}_{m}\). In fact, the definition of Hermite/unitary operator highly depends on the structure of field. The definition of inner product relies on the positive property of square in \(\mathbb{R}\), which is important in quantum theory. The Galois group can be understood as a symmetry of operators. _Remark 5.1_.: There exists a measure theory on any pro-finite group \(G\). For any open-closed normal subgroup \(N\) of \(G\), define the volume of \(N\) by: \[m\left(N\right)=\frac{1}{|G/N|}.\] Let \(L\) be a Galois extension of field \(K\), then \(\operatorname{Gal}\left(L\mid K\right)\) has a measure theory for any finite sub Galois extension \(M\) of \(K\). We can define the volume of \(\operatorname{Gal}\left(L\mid M\right)\) by: \[m\left(\operatorname{Gal}\left(L\mid M\right)\right)=\frac{1}{|M/K|}=\frac{1}{ |\operatorname{Gal}\left(M\mid K\right)|}.\] Let \(c+d\mathbb{Z}\subset\mathbb{Z},d\in\mathbb{Z}-\left\{0\right\},c\in\mathbb{Z}\), define the volume of \(c+d\mathbb{Z}\) by: \[m\left(c+d\mathbb{Z}\right)=\frac{1}{|d|}.\] The volume of \(c+d\mathbb{Z}\) can be viewed as the measure theory on pro-finite group ## 6 Examples **Example 6.1**.: _(The spectral theory of \(\mathbb{G}_{m}\) act on itself) Let \(\mathbb{G}_{m}\) act on itself. We have:_ \[\mathbb{G}_{m}=\mathcal{O}_{p}\left\langle t,t^{-1}\right\rangle\simeq c_{0} \left(\mathbb{Z}\right),\] _where \(t\) is a shift operator on \(c_{0}\left(\mathbb{Z}\right)\). Let \(c+d\mathbb{Z}\) be a arithmetic sequence. 
We can define the sum \(S_{c+d\mathbb{Z}}\), which is a "integral of \(c+d\mathbb{Z}\)":_ \[f\left(t\right)=\sum_{n=-\infty}^{\infty}a_{n}t^{n},S_{c+d\mathbb{Z}}\left(f \left(t\right)\right)=\sum_{n=-\infty}^{\infty}a_{c+dn}.\] _We have:_ \[S_{c+d\mathbb{Z}}\left(f\left(t\right)\right)=S_{c+d\mathbb{Z}}\left(t^{d}f \left(t\right)\right).\] _Moreover:_ \[S_{c+d\mathbb{Z}}=\sum_{c^{\ast}=1}^{d^{\ast}}S_{c+c^{\ast}d+dd^{\ast}\mathbb{ Z}}\] _which is the addictive property of integral. Finally, it is equivalent to use the right projection functor:_ \[\mathbb{G}_{m}/\left(t^{d}-1\right)\otimes\mathcal{O}_{p}\left\langle t,t^{- 1}\right\rangle\simeq\mathcal{O}_{p}\left[t\right]/\left(t^{d}-1\right) \simeq\mathcal{O}_{p}^{d},\] _where the coefficient corresponds to \(S_{c+d\mathbb{Z}}\). We have:_ \[f\left(t\right)=0\Leftrightarrow S_{c+d\mathbb{Z}}\left(f\left(t\right) \right)=0,\forall c\in\mathbb{Z},d\in\mathbb{Z}-\left\{0\right\}.\] **Example 6.2**.: _(Spectrum shift operator) Let \(X\) be a \(p\)-adic Hermite operator act on \(\mathbb{C}_{p}\)-ultrametric Banach space \(V\). Suppose the spectrum of \(X\) lies in \(\mathbb{Z}_{p}\). We have:_ \[X=\int_{\mathbb{Z}_{p}}\lambda dE_{\lambda}.\] _Let \(U\) be a \(p\)-adic unitary operator on \(X\). We say \(U\) is a **spectrum shift operator** if the following condition holds:_ \[UX-XU=U.\] _We have:_ \[UXU^{-1}=X+1 \implies\int_{\mathbb{Z}_{p}}\lambda UdE_{\lambda}U^{-1}=\int_{ \mathbb{Z}_{p}}\lambda+1dE_{\lambda}=\int_{\mathbb{Z}_{p}}\lambda dE_{\lambda-1}\] \[\implies UdE_{\lambda}U^{-1}=dE_{\lambda-1}.\] _The conjugate of \(U\) make the spectral measure of \(X\) shift by 1. Suppose \(U\) is a \(p\)-adic unitary operator of **continuous type**, then \(X\) has a continuous spectrum \(\mathbb{Z}_{p}\) shift by \(\left\{U^{t}\right\}_{t\in\mathbb{Z}_{p}}\). Let \(C\left(\mathbb{Z}_{p},\mathbb{C}_{p}\right)\) be the continuous function of \(\mathbb{Z}_{p}\) valued in \(\mathbb{C}_{p}\). Suppose:_ \[U\left(f\left(x\right)\right)=f\left(x+1\right),X\left(f\left(x\right)\right)= xf\left(x\right).\] _We have: \(U\) is a spectrum shift operator of \(X\). Let:_ \[a^{+}=XU^{-1},\,a^{-}=U-I,\,\,H=a^{+}a^{-};\] \[a^{-}a^{+}-a^{-}a^{+}=1.\] _Then \(\left(a^{+},a^{-}\right)\) is the creation and annihilation operators of \(H\). A non-commutative torus \(T_{\xi}\) can be described by the following commutation relation:_ \[UV=\xi VU,\] _where \(U,V\) is \(p\)-adic unitary operator, \(\xi\in\mathcal{O}_{p}^{\times}\) is a number. Let \(T_{\xi}\) be the \(\mathcal{O}_{p}\)-ultrametric algebra generated by \(U,V\). The conjugation of V makes the spectrum of \(U\) shift by \(\xi\). Suppose \(\left|\xi-1\right|<1,\xi\neq 0\), \(T_{\xi}\) has a reduction \(T_{\varepsilon,\xi}\) such that \(T_{\varepsilon,\xi}\) is commutative even though \(T_{\xi}\) is non-commutative. Let \(n,m\in\mathbb{Z}\), we have:_ \[U^{n}V^{m}=\xi^{nm}V^{m}U^{n}.\] _We can find \(n,m\) such that \(\xi^{nm}\to 1\). The algebra \(T_{\xi^{nm}}\) tends to be a commutative algebra._ **Example 6.3**.: _(\(p\)-adic unitary matrix group) Let \(\mathrm{GL}_{n}\left(\mathbb{Z}_{p}\right)\) be the group of n*n invertible matrix over \(\mathbb{Z}_{p}\), then \(\forall U\in\mathrm{GL}_{n}\left(\mathbb{Z}_{p}\right)\), \(U\) is pro-finite type \(p\)-adic unitary operator._ Proof.: The reduction of \(U\) in \(\mathrm{GL}_{n}\left(\mathbb{F}_{p}\right)\) has a finite order \(k\). 
So we have: \[U^{k}\in I+pM_{n}\left(\mathbb{Z}_{p}\right)\implies U^{n!}\to I.\] _Hence we have the Jordan decomposition of \(U\):_ \[U=U_{s}U_{n},\] _where \(U_{s}\) is a \(p\)-adic unitary matrix of **Teichmuller type**, \(U_{n}\) is a \(p\)-adic unitary matrix of **continuous type**._ _Let \(\mathbb{Z}_{p}^{n}\) be the unit ball in \(\mathbb{Q}_{p}^{n}\), the nature action of \(\mathrm{GL}_{n}\left(\mathbb{Z}_{p}\right)\) on \(\mathbb{Z}_{p}^{n}\) induce a group action on the continuous function on \(\mathbb{Z}_{p}^{n}\). Let \(g\in\mathrm{GL}_{n}\left(\mathbb{Z}_{p}\right)\), \(g^{*}\) be the action on continuous function. It is easy to check that \(g^{*}\) is a \(p\)-adic unitary operator of **pro-finite type**._ _There exists a more accurately decomposition for matrix in \(\mathrm{GL}_{n}\left(\mathbb{F}_{p}\right)\). For a arbitrary \(k\in\mathbb{N}\), Let \(\mathbb{F}_{p^{k}}\) be the finite field which has \(p^{k}\) elements, \(\mathbb{F}_{p^{k}}^{*}\) act on itself by left multiply, which is \(\mathbb{F}_{p}\)-linear. Let \(t_{k}\) be the generator of \(\mathbb{F}_{p^{k}}^{*}\), which can be realized as a \(k*k\) invertible matrix \(t_{k}\) in \(\mathrm{GL}_{k}\left(\mathbb{F}_{p}\right)\) of Teichmuller type:_ \[\sigma^{k}\left(t_{k}\right)=t_{k}^{p^{k}}=t_{k}\] _This is not canonical but it would be useful. Let_ \[\iota_{k}:\mathrm{GL}_{k}\left(\mathbb{F}_{p}\right) \rightarrow\mathrm{GL}_{n}\left(\mathbb{F}_{p}\right)\] \[U \mapsto\begin{pmatrix}U&\\ &I_{n-k}\end{pmatrix}\] _be the embedding map, \(T_{k}=\iota_{k}\left(t_{k}\right)\), define the set:_ \[\Phi =\left\{T\in\mathrm{GL}_{n}\left(\mathbb{F}_{p}\right)\;\middle| \;T=T_{n}^{m_{n}}...T_{2}^{m_{2}}T_{1}^{m_{1}};1\leq m_{k}\leq p^{k}-1,1\leq k \leq n\right\}\] \[B =\left\{N\in\mathrm{GL}_{n}\left(\mathbb{F}_{p}\right)\;\middle| \;N=\begin{pmatrix}1&n_{12}&n_{13}&...\\ 0&1&n_{23}&...\\ 0&0&1&...\\...&...&...&1\end{pmatrix};n_{ij}\in\mathbb{F}_{p},1\leq i,j\leq n\right\}.\] _We have:_ \[\left|\Phi\right|=\prod_{k=1}^{n}\left(p^{k}-1\right),\left|B \right|=p^{\frac{n\left(n-1\right)}{2}}\] \[\left|\Phi\right|\left|B\right|=\left|\mathrm{GL}_{n}\left( \mathbb{F}_{p}\right)\right|=\prod_{k=1}^{n}\left(p^{n}-p^{k}\right)\] **Proposition 6.1**.: _Let \(U\in\mathrm{GL}_{n}\left(\mathbb{F}_{p}\right)\), there exists a decomposition:_ \[U=TN;\ T\in\Phi,N\in B.\] _Moreover, the pair of matrices \(\left(T,N\right)\) is unique._ Proof.: First, the case \(n=1\) is obvious. Suppose the proposition holds for \(n=k,k\in\mathbb{N}\), we show that the proposition holds for \(n=k+1\). Let's define the affine transformation subgroup in \(\mathrm{GL}_{k+1}\left(\mathbb{F}_{p}\right)\): \[\mathrm{Aff}_{k}=\left\{g\in\mathrm{GL}_{k+1}\left(\mathbb{F}_{p}\right)\; \middle|\;g=\begin{pmatrix}a_{11}&...&a_{1k}&b_{1}\\...&...&...&...\\ a_{k1}&...&a_{kk}&b_{k}\\ 0&...&0&1\end{pmatrix}\right\}\] It can be showed that \(\mathrm{Aff}_{k}\) is generated by the following matrix: \[T=T_{k}^{m_{k}}...T_{2}^{m_{2}}T_{1}^{m_{1}};1\leq m_{l}\leq p^{l}-1,1\leq l\leq k\] \[N=\begin{pmatrix}1&...&0&b_{1}\\...&...&...&...\\ 0&...&1&b_{k}\\ 0&...&0&1\end{pmatrix};b_{l}\in\mathbb{F}_{p},1\leq l\leq k\] Finally, the eigenvalue of elements in \(\mathrm{Aff}_{k}\) lies in \(\mathbb{F}_{p^{k}}\), the eigenvalue of \(T_{k+1}\) lies in \(\mathbb{F}_{p^{k+1}}\). 
So we have: \[\left\{T_{k+1}^{m}\right\}_{1\leq m\leq p^{k+1}-1}\cap\mathrm{Aff} _{k}=\left\{I_{k+1}\right\}\] \[\left|\mathrm{Aff}_{k}\right|=\left|\mathrm{GL}_{k}\left(\mathbb{F }_{p}\right)\right|p^{k}\] \[\left|\mathrm{Aff}_{k}\right|\left(p^{k+1}-1\right)=\left| \mathrm{GL}_{k+1}\left(\mathbb{F}_{p}\right)\right|\] _Suppose \(p\geq 3\), there exists a Teichmuller lift for the set \(\Phi\) to the group \(\mathrm{GL}_{n}\left(\mathbb{Z}_{p}\right)\). Let \(\Upsilon\subset\mathrm{GL}_{n}\left(\mathbb{Z}_{p}\right)\) be a lift set of \(\Phi\). Let \(\mathcal{B}\) be the group:_ \[\mathcal{B}=\left\{N\in\mathrm{GL}_{n}\left(\mathbb{Z}_{p}\right)\ \middle|\ N\in\begin{pmatrix}1+p\mathbb{Z}_{p}& \mathbb{Z}_{p}&...\\ p\mathbb{Z}_{p}&1+p\mathbb{Z}_{p}&...\\...&...&...\end{pmatrix}\right\}.\] **Theorem 6.2**.: _Let \(U\in\mathrm{GL}_{n}\left(\mathbb{Z}_{p}\right)\), there exists a decomposition:_ \[U=TN;\ T\in\Upsilon,N\in\mathcal{B}.\] _Moreover, the pair of matrices \(\left(T,N\right)\) is unique._ **Example 6.4**.: _(\(p\)-adic time evolution) Considering a \(p\)-adic differential equation which is defined over a \(\mathcal{O}_{p}\)-ultrametric Banach space \(X\). Let \(H\) be a linear operator on \(X\), \(\psi:\mathbb{Z}_{p}\to X\) be a continuous function, **the \(p\)-adic time evolution equation** is defined as:_ \[\frac{\mathrm{d}\psi}{\mathrm{d}t}=H\psi.\] _The \(p\)-adic time evolution equation has a formal solution:_ \[\psi\left(t\right)=e^{Ht}\psi(0),\ t\in p\mathbb{Z}_{p}.\] _The formal solution sometimes can not be extended continuously to \(\mathbb{Z}_{p}\). Noted that \(e^{Ht}\) is a \(p\)-adic unitary operator of continuous type. We should find a more reasonable definition._ **Definition 6.1**.: Let \(X\) be a \(\mathcal{O}_{p}\)-ultrametric Banach space. Let \(H\) be a linear operator on \(X\), \(U\) be a unitary operator on \(X\). Suppose we have: \(HU=UH\). Let \(\psi_{k}:\mathbb{Z}_{p}\to X,k\in\mathbb{Z}\) be a family of continuous function, we say the following equation is **the \(p\)-adic time evolution equation**: \[\begin{cases}\frac{\mathrm{d}\psi_{k}}{\mathrm{d}t}=H\psi_{k},\ \ \forall t\in \mathbb{Z}_{p}\\ \psi_{k+1}=U\psi_{k},\ \ \forall k\in\mathbb{Z}\end{cases}\] **Proposition 6.3**.: \[\left|\psi_{k}\left(t+\varepsilon\right)\right|_{X}=\left|\psi_{k}\left(t \right)\right|_{X},\ \left|\varepsilon\right|_{p}<\frac{1}{p\left|H\right|},\left|\psi_{k+1} \right|_{X}=\left|\psi_{k}\right|.\] ## 7 Framework of \(p\)-adic quantum mechanics In this section, we want to establish a framework of \(p\)-adic quantum mechanics by spectral theory of \(p\)-adic operators. We refer to [2][8] for previous work. Let \(X\) be a \(\mathbb{C}_{p}\)-ultrametric Banach space, \(M\) be the unit ball of \(X\), \(H\) be a \(p\)-adic Hermite operator on \(X\), \(U\) be a \(p\)-adic unitary operator on \(X\), \(\pi\) be a orthogonal projection operator on \(X\), \(\psi\in X,\left|\psi\right|_{X}=1\) be a \(p\)-adic wave function. **Axiom 1**.: _(\(p\)-adic probability interpretation) The \(p\)-adic quantum system is described by a \(p\)-adic wave function \(\psi\) in a \(\mathbb{C}_{p}\)-ultrametric Banach space \(X\). The event is represented by \(\pi\). The probability is given by \(\left|\pi\left(\psi\right)\right|_{X}\). 
Suppose the event \(\pi\) can be decomposed into a family of disjoint event \(\left\{\pi_{i}\right\}_{i=1}^{k}\) satisfy the following condition:_ \[\pi=\sum_{i=1}^{k}\pi_{i},\ \ \pi_{i}\pi_{j}=\delta_{ij}\pi_{i},\ \forall 1\leq i,j\leq k.\] _Then the probability of \(\pi\) is the supremum of \(\left\{\pi_{i}\right\}_{i=1}^{k}\):_ \[\left|\pi\left(x\right)\right|=\sup_{1\leq i\leq k}\left|\pi_{i}\left(x\right) \right|.\] _Moreover, the right projection functor with respect to \(U\) has a similar probability interpretation._ **Axiom 2**.: _(\(p\)-adic time evolution) The \(p\)-adic time evolution is described by the operator \((H,U)\), where \(H\) is a \(p\)-adic linear operator (not necessary \(p\)-adic Hermite), \(U\) is a \(p\)-adic unitary operator, \(HU=UH\). \(H\) represents a continuous evolution, \(U\) represents a discrete evolution. The \(p\)-adic time evolution equation is given by:_ \[\begin{cases}\frac{\mathrm{d}\psi_{k}}{\mathrm{d}t}=H\psi_{k},\ \ \forall t\in \mathbb{Z}_{p}\\ \psi_{k+1}=U\psi_{k},\ \ \forall k\in\mathbb{Z}.\end{cases}\] **Axiom 3**.: _(\(p\)-adic observable and \(p\)-adic spectrum) Suppose \(H\) is a \(p\)-adic Hermite operator of period \(1\) with the symmetry of Galois group \(\mathrm{Gal}\left(\mathbb{Q}_{p}^{ur}\mid\mathbb{Q}_{p}\right)\), there exists a orthogonal projection valued spectral integral:_ \[I=\int_{\mathbb{Q}_{p}}dE_{\lambda}\ \ \ H=\int_{\mathbb{Q}_{p}}\lambda dE_{\lambda}.\] _Suppose \(U\) is a \(p\)-adic unitary operator, then \(X\) has a projection functor valued spectral measure. Let \(\psi\in M,\left|\psi\right|=1\) be a \(p\)-adic wave function, the spectrum decomposition:_ \[\mathrm{Sp}(M)=\left\{M_{\infty};M_{\varepsilon,\lambda,tor};M_{\varepsilon, \lambda},\forall\varepsilon\in\mathfrak{m}_{p}^{+},\lambda\in T\left(\mathcal{ O}_{p}^{\times}\right)\right\}\] _is complete._ **Axiom 4**.: _(\(p\)-adic quantum measurement) The \(p\)-adic quantum measurement is a right projection functor_ \[\pi_{I}:\mathbb{G}_{m}-\mathbf{Banach}_{S}\rightarrow\mathbb{G}_{m}-\mathbf{Banach }_{S},\] _where \(M\) is viewed as \(\mathbb{G}_{m}-\mathbf{Banach}_{S}\) module, \(\psi\in M\). After the \(p\)-adic quantum measurement, the wave function \(\psi\) restricts to \(\pi_{I}\left(\psi\right)\)._ ## 8 Further discussion Here are some questions remain unsolved. **Question 8.1**.: The behavior of open ideal seems to be a system which is both discrete and continuous. The reduction morphism makes \(\mathbb{G}_{m}\) ignore the diameter which is smaller than \(\varepsilon\). What is the \(p\)-adic \(\zeta\) function of \(\mathbb{G}_{m}\)? **Question 8.2**.: The abelian extension theory of \(\mathbb{Q}_{p}\) corresponds to some topological properties of \(p\)-adic unitary operator, which connects the class field theory with functional analysis. What is the connection between non-commutative Iwasawa theory and \(p\)-adic spectral theory? **Question 8.3**.: The geometry model of \(p\)-adic numbers and pro-finite groups are like some fractals. What is the definition of \(p\)-adic path-integral?
2310.15509
Dual frequency master oscillator generation and distribution for ALS and ALS-U
The ongoing work to upgrade ALS to ALS-U demands strict RF requirements such as low jitter and low spurs frequency reference to meet its accelerator and science goals. A low phase noise dual frequency Master Oscillator (MO), where the two frequencies are related by a fractional ratio of 608/609 and flexible divide by four frequency outputs has been consolidated into a single chassis. Optical fiber clock distribution system has been selected over the old coax system used in ALS to distribute these signals to various clients across the facility, providing high electrical isolation between outputs and therefore lower phase errors. A Xilinx FPGA ties the MO chassis together by providing a RS-485 interface to monitor and control the system. The new system aims to deliver phase-continuous frequencies with a phase noise (integrated RMS jitter) from 1 Hz to 1 MHz of less than 200 femtosecond per output. This paper will discuss the design, implementation, performance and installation of the new MO generation and distribution system.
Shreeharshini Dharanesh Murthy, Angel Jurado, Michael Betz, Qiang Du, Benjamin Flugstad
2023-10-24T04:34:12Z
http://arxiv.org/abs/2310.15509v1
# Dual Frequency Master Oscillator Generation and Distribution for ALS and ALS-U ###### Abstract The ongoing work to upgrade ALS to ALS-U demands strict RF requirements such as low jitter and low spurs frequency reference to meet its accelerator and science goals. A low phase noise dual frequency Master Oscillator (MO), where the two frequencies are related by a fractional ratio of 608/609 and flexible divide by four frequency outputs has been consolidated into a single chassis. Optical fiber clock distribution system has been selected over the old coax system used in ALS to distribute these signals to various clients across the facility, providing high electrical isolation between outputs and therefore lower phase errors. A Xilinx FPGA ties the MO chassis together by providing a RS-485 interface to monitor and control the system. The new system aims to deliver phase continuous frequencies with a phase noise (integrated RMS jitter) from \(1\,\mathrm{Hz}\) to \(1\,\mathrm{MHz}\) of less than \(200\,\mathrm{fs}\) per output. This paper will discuss the design, implementation, performance and installation of the new MO generation and distribution system. The requirements for ALS-U MO include the lowest possible phase noise and low spurious coherent signals (spurs) in the 1 Hz to 100 kHz offset range. The spurs in this range will transfer through the LLRF system onto the beam, modulating the bunch arrival time. For a slow orbit feedback system, the frequency setpoint needs to be adjustable in 1 Hz increments around \(f_{2}\)\(\pm\)10 kHz with \(\sim\)1 Hz update rate. Output phase and amplitude need to be continuous during these adjustments, which in turn applies to the \(f_{1}\) outputs as well. ## 3 System Design The proposed final ALS-U MO system consists of a commercial signal generator (Rohde and Schwarz SMA100B) to generate the new \(f_{2}\) frequency, followed by a custom-built distribution chassis to generate \(f_{1}\), \(\frac{f_{1}}{4}\), and \(\frac{f_{2}}{4}\), along with the optical fiber system [4]. ### Hardware The new distribution chassis consists of two distribution boards - one for each \(f_{1}\) and \(f_{2}\) frequencies, an LMX2594 PLL evaluation board - to generate \(f_{1}=\frac{608}{609}\cdot f_{2}\), a Power Supply Unit (PSU) - for power management, control UART and Modbus communication, and frequency counters, a CMOD-A7 FPGA module, and a User Interface (UI) board - user interface and display as shown in Fig. 3. The production MO distribution chassis is shown in Fig. 4. 
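To make the frequency plan concrete, the short sketch below (Python; the \(500\,\mathrm{MHz}\) value for \(f_{2}\) is a placeholder for illustration only, not the actual ALS-U frequency) computes \(f_{1}=\frac{608}{609}\cdot f_{2}\) and the divide-by-four outputs produced by the chassis, and shows how a +10 kHz setpoint change can be broken into sub-300 Hz steps so that the outputs stay phase continuous (cf. the continuity tests in the performance section).

```python
# Frequency-plan sketch (illustrative only: F2 below is a placeholder near
# 500 MHz, not the actual ALS-U value).  f1 tracks f2 through the fixed
# fractional ratio 608/609, and both are divided by four in the AD9508.
from fractions import Fraction

RATIO = Fraction(608, 609)            # f1 / f2, the fixed fractional ratio
F2 = Fraction(500_000_000)            # Hz, placeholder value for illustration

def outputs(f2: Fraction) -> dict:
    f1 = RATIO * f2
    return {"f2": f2, "f1": f1, "f2/4": f2 / 4, "f1/4": f1 / 4}

for name, val in outputs(F2).items():
    print(f"{name:5s} = {float(val):18.6f} Hz")

# Slow-orbit-feedback style retune: move f2 by +10 kHz in steps smaller than
# 300 Hz (cf. the continuity tests below) so the source never glitches.
df, step = Fraction(10_000), Fraction(250)
n_steps = int(df / step)
print(f"+10 kHz retune in {n_steps} steps of {float(step):.0f} Hz; "
      f"f1 follows by {float(RATIO * df):.3f} Hz")
```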
The two distribution boards are identical and use the AD9508, a four-channel programmable clock divider, to divide \(f_{1}\) and \(f_{2}\) by four. One of the output channels is used for frequency measurement in the FPGA and the rest are distributed to various clients through the Vialite rack/splitter. One of the three outputs generated is in the optical domain and is directly connected to the timing system, while the rest are on coax. The chip also provides automatic synchronization of all outputs, so they all start with the same phase, as shown in Fig. 5. The second frequency \(f_{1}\) is generated using a fractional PLL (LMX2594) capable of achieving very low in-band noise, integrated jitter, and reduced spurs. It can turn off its output (\(f_{1}\)) when not locked and also provides dynamic amplitude level control.

### Firmware and software

A small form factor CMOD-A7 board, built on a Xilinx Artix-7 FPGA with RISC-V capability, ties the distribution chassis together. An open-source, size-optimized RISC-V CPU, PicoRV32 [5], is used to handle system configuration, boot-time self-checking, continuous status monitoring, and remote interfacing with EPICS via the RS485/Modbus RTU port. The firmware footprint is only 60 kB of RAM, and 22% of the LUTs are used. The firmware provides serial communication protocols through the PMOD connectors, such as SPI to communicate with the PLL and UI boards and I2C to communicate with the AD9508 clock divider chip. It is also equipped to measure the frequencies \(f_{1}\), \(f_{2}\), \(\frac{f_{1}}{4}\), and \(\frac{f_{2}}{4}\) through counters with an accuracy of about 500 Hz. The host server continuously polls the registers through the interrupt service function of the PicoRV32 CPU, which handles the Modbus RTU protocol over Ethernet and retrieves real-time register information for monitoring. Some of the available registers include voltage and current monitoring, continuous error detection (communication error, PLL not locked, and \(f_{2}\) frequency drifts), and frequency counters. The UI board also displays these status registers through the OLED screen. An EPICS IOC is built based on this register mapping, and a Phoebus engineering screen was developed to display both the MO and the distribution chassis status at both operator and expert level panels, as shown in Fig. 6. The software also archives the frequency data, making it available for long-term study.

Figure 3: Logical diagram of MO distribution chassis. Figure 4: Production MO distribution chassis. Figure 5: Hardware architecture with signal flow.

## 3 Installation and Commissioning

The new optical fiber based MO system along with the distribution chassis is going to be installed in three stages over the course of the next few years: the Pre-Dark Time, divided into two stages (before AR and during the AR commissioning periods), and the Post-Dark Time, the final ALS-U configuration in 2025. In preparation for this, the first step was taken to change the original crystal oscillator source to a commercial HP8644B signal generator, which reduced the overall phase noise to 314 fs within the 1 Hz to 10 MHz frequency range. In January 2019, the HP8644B source was replaced by a Holzworth HS9001A, reducing the integrated phase noise to 209 fs and avoiding the use of an external DC control voltage for frequency tuning. Finally, in December 2022 the HS9001A was switched to an ultra low-noise Rohde and Schwarz SMA100B, thereby further reducing the phase noise to 60 fs (5X better compared to the HP8644B), as shown in Fig. 7.
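The integrated RMS jitter figures quoted above (314 fs, 209 fs, 60 fs) are obtained by integrating a measured single-sideband phase noise spectrum over the stated offset band. The sketch below shows the standard conversion from \(\mathcal{L}(f)\) in dBc/Hz to RMS jitter; the carrier frequency and the sample spectrum are placeholders, not measured data.

```python
import numpy as np

def rms_jitter(offset_hz, L_dbc_hz, f_carrier_hz):
    """Integrate single-sideband phase noise L(f) [dBc/Hz] to RMS jitter [s].

    Standard relation: sigma_t = sqrt(2 * integral of 10^(L/10) df) / (2*pi*f_carrier).
    """
    f = np.asarray(offset_hz, dtype=float)
    y = 10.0 ** (np.asarray(L_dbc_hz, dtype=float) / 10.0)
    # Trapezoidal integration of the linearized spectrum.
    s_phi = 2.0 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f))   # rad^2
    return np.sqrt(s_phi) / (2.0 * np.pi * f_carrier_hz)

if __name__ == "__main__":
    # Placeholder spectrum: log-spaced offsets from 1 Hz to 1 MHz and a made-up
    # phase noise curve, only to exercise the integration; 500 MHz is an assumed carrier.
    f = np.logspace(0, 6, 601)
    L = np.interp(np.log10(f), [0, 2, 4, 6], [-80.0, -110.0, -130.0, -150.0])
    print(f"integrated RMS jitter: {rms_jitter(f, L, 500e6)*1e15:.1f} fs")
```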
The distribution chassis, Vialite fiber link chassis, and the three Vialite splitters were installed during the September 2023 summer shutdown. The distribution chassis currently has two clients: \(f_{1}\) coax output - to the entire system through the old MO distribution system and the \(\frac{1}{4}\) divider, \(f_{1}\) fiber output - to the Sub-Harmonic Buncher (SHB) through the vialite system. Further down the commissioning phase the old distribution system with the \(\frac{1}{4}\) divider will be retired and the new system will drive the entire accelerator. ## 4 Performance The phase and amplitude continuity tests were first performed with the new production distribution chassis and the SMA100B. There was no glitch in LMX2594 PLL outputs as long as the SMA100B did not have phase jumps or signal dropout while changing/sweeping frequencies. This can be achieved by limiting the SMA100B's step size to < 300 Hz during the sweep process. Fig. 8 demonstrates this feature, with the step size of 500 Hz (top waveform) the phase of SMA100B output (\(f_{2}\)) is discontinuous but with a step size of 300 Hz (bottom waveform), there is no glitch in the outputs of both SMA100B and LMX2594 PLL. Fig. 9 shows the integrated phase noise of the new system in 1 Hz to 1 MHz offset range measured using Rohde and Schwarz FSWP phase noise analyzer with RBW of 2%. The blue trace is the phase noise of the SMA100B output with the phase noise of 73 fs and the following four traces are phase noise plots at different outputs of distribution chassis, with the measured jitter of \(f_{2}\) = 74 fs, \(f_{1}\) = 126 fs, \(\frac{f_{2}}{4}\) = 125 fs, and \(\frac{f_{1}}{4}\) = 150 fs. The next two traces are the output of the Vialite splitters at \(f_{2}\) and \(f_{1}\) and show they add no significant noise to the system. As the requirements indicate, the outputs exhibit no significant spurs in the 1 Hz to 100 kHz offset range. Figure 6: Top: SMA100B operator panel. Center: SMA100B expert panel. Bottom: Distribution chassis panel. Figure 7: Phase noise comparison plots of different commercial MO. ## 5 Conclusion The new dual frequency MO system based on the single chassis design and fiber technology was successfully installed and is currently driving the entire system at ALS. More clients will use this new configuration further down the AR commissioning schedule. The system successfully meets the performance requirements of phase and amplitude continuous frequency outputs with the measured phase noise from 1 Hz to 1 MHz of less than 200 fs per channel. Further optimization of the PLL loop filter can be easily achieved if requirements change during the final ALS-U commissioning. This work was supported by the ALS and ALS-U Projects and the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
2309.00701
Entropies, level-density parameters, and fission probabilities along the triaxially- and axially-symmetric fission paths in $^{296}$Lv
We employ a statistical approach to investigate the influence of axial asymmetry on the nuclear level density and entropy along the fission pathways of a superheavy nucleus, explicitly focusing on the $^{296}$Lv isotope. These pathways are determined within multidimensional deformation spaces. Our analysis reveals a significant impact of triaxiality on entropy. Additionally, suppressing shell effects can alter the fission scenario depending on the available excitation energy. We derive the deformation-dependent level density parameter, which plays a crucial role in estimating the survival probability of a superheavy nucleus. Furthermore, we utilize a set of master equations to obtain the time-dependent fission probabilities and calculate the ratio of decay probabilities for both axial and triaxial paths.
A. Rahmatinejad, T. M. Shneidman, G. G. Adamian, N. V. Antonenko, P. Jachimowicz, M. Kowal
2023-09-01T18:47:21Z
http://arxiv.org/abs/2309.00701v2
Entropies, level-density parameters and fission probabilities along the triaxially- and axially-symmetric fission paths in \({}^{296}\)Lv ###### Abstract We employ a statistical approach to investigate the influence of axial asymmetry on the nuclear level density and entropy along the fission pathways of a superheavy nucleus, specifically focusing on the \({}^{296}\)Lv isotope. These pathways are determined within multidimensional deformation spaces. Our analysis reveals a significant impact of triaxiality on entropy. Additionally, suppressing of shell effects can alter the fission scenario depending on the available excitation energy. We derive the deformation-dependent level density parameter which plays a crucial role in estimating the survival probability of a superheavy nucleus. Furthermore, we utilize a set of master equations to obtain the time-dependent fission probabilities and calculate the ratio of decay probabilities for both axial and triaxial paths microscopic-macroscopic model, fission barrier, level-density parameter, survival probability, superheavy nuclei pacs: 21.10.Ma, 21.10.Pc, 24.60.Dr, 24.75.+i Also at Kazan Federal University, Kazan 420008, Russia ## I Introduction Recent breakthroughs in synthesizing novel superheavy nuclei, as documented in the works of Oganessian et al. [1; 2; 3], have resulted in substantial advancements. Building upon these achievements, forthcoming experiments are planned to extend the frontiers of heavy element production, capitalizing on the cutting-edge "superheavy factory" facility recently established at JINR (Dubna). The successful production of superheavy nuclei relies heavily on their ability to resist fission. This resistance is crucial in determining the survival probability of a hot compound nucleus formed in complete fusion reactions. The competition between neutron emission and fission plays a significant role in shaping the outcome of the decay process and it depends on factors such as the level densities and the potential energy surface (PES) topology. These characteristics mainly predestine the formation of evaporation residue nuclei. The role of entropy in this process is less studied and was not frequently taken up in studies. The critical advantage of entropy is its ability to encode information about the density of states across diverse potential energy landscapes simultaneously. However, the concept of entropy becomes more nuanced and challenging to define precisely. This is primarily due to the following reasons: * Finite Size: Unlike macroscopic systems, which typically consist of many particles, heavy nuclei are relatively small and finite systems. As a result, the statistical behavior and thermodynamic quantities, including entropy, might exhibit deviations from the behavior predicted by classical statistical mechanics. * Quantum Mechanical Nature: Nuclei are governed by quantum mechanics which introduces inherent uncertainties and restrictions on the states available to individual particles. The discrete energy levels and quantum correlations make entropy more intricate than that of the classical systems. * Lack of Equilibrium: Determination of the entropy in a nucleus requires achieving thermal equilibrium, which is challenging for isolated individual nuclei. Unlike macroscopic systems that readily reach equilibrium through interactions with a heat bath, individual nuclei are not typically in thermal contact with a large reservoir, making less straightforward the application of equilibrium statistical mechanics. 
Nevertheless, various statistical approaches and theoretical models have been developed to estimate and describe the entropy of heavy nuclei. These methods incorporate the concepts of nuclear level densities, statistical ensembles, and thermodynamic considerations to provide insights into the statistical behavior and thermodynamic properties of nuclear systems. So, the concept of entropy is still in use. Calculating level densities, we should determine the eigenvalues and their degeneracy of the nuclear Hamil tonian and to count the number of states within a specific energy interval of interest. Due to the exponential increase in the total number of states with excitation energy above a few MeV, statistical methods are employed to tackle the problem. Several sophisticated combinatorial methods, such as those described in Refs. [4; 5], have been developed, where factors like parity, angular momentum, pairing correlations, and collective enhancements are explicitly accounted for using the Gogny interaction. Additionally, a general and exact scheme for calculating particle-hole level densities while considering the Pauli exclusion principle is presented in Ref. [6]. The role of microscopic level densities in the fission process and their influence on nuclear shape evolution with excitation energy are discussed in Refs. [7]. Relativistic mean-field theory is employed to calculate nuclear level densities in Ref. [8]. In contrast, a self-consistent mean-field approach is utilized to determine single-particle level densities at various temperatures and the level density parameters are calculated using the Yukawa-folded potential in Ref. [9]. Spin- and parity-dependent nuclear level densities obtained with the moment method in the proton-neutron formalism are presented in Ref. [10]. Finally, a direct microscopic calculation of nuclear level densities with the shell model Monte-Carlo approach is discussed in Ref. [11]. In practical applications, certain approximations and assumptions are still necessary, including corrections for superfluidity effects and collective rotational and vibrational enhancements. The well-established Fermi-gas model can effectively describe the transition from a paired system to a system of non-interacting fermions as the nuclear system goes from low to higher energies. This phenomenological models accounts for the pairing effect by introducing a constant back-shift parameter \(\Delta\) of the excitation energy. In the Fermi-gas model, the average level-density parameter, which relates the excitation energy to the nuclear temperature, is often assumed to have a linear dependence on the mass number \(A\)[12]. However, the level-density parameter exhibits also energy dependence and gradually approaches an asymptotic value as the energy increases beyond the neutron separation energy. To incorporate the energy and shell correction dependencies into the level density parameter, a phenomenological expression was introduced in Ref. [13]. Our recent findings [14] have demonstrated that the level density parameter \(a\), which plays a crucial role in estimating survival probabilities, strongly depends on the available excitation energy, particularly at low energies. Given its appearance as an exponential function in the decay rates, even minor variations in this parameter can profoundly impact the estimated survival probabilities of superheavy nuclei. In this study, we will derive this parameter as a function of deformation, providing a more accurate and comprehensive description. 
The primary objective of this article is to combine advanced techniques, including the multidimensional manifold of deformations for fission trajectories, with a statistical approach for computing nuclear level density and entropy, and to finally calculate the level density parameter along fission paths. We discuss the calculations of two extreme cases, axial and non-axial fission pathways for the \({}^{296}\)Lv nucleus. It is important to note that the level density at any point along specific fission paths is not the same as at the ground state (GS). This is primarily due to two factors. Firstly, the excitation energy available at any point of the fission paths is reduced by the difference in deformation energy between this point and the GS. Secondly, the distribution of single-particle levels is altered due to the change of the shape of compound nucleus. Thus, identifying the correct paths that govern the fission process is essential. ## II Method of calculation We adopt a two-step methodology in our study. Firstly, we construct a 5-dimensional PES using the macroscopic-microscopic (MM) method. This PES allows us to identify suitable axial and non-axial fission paths by considering the direction with the most substantial gradient drop in all deformation variables. In the second step, we employ a statistical approach to determine the density of states. Based on this information, we calculate the entropy and relevant parameters along the fission paths. ### Potential Energy Surface The PES are calculated using MM method. In this approach, the microscopic energy is determined by employing the Strutinski shell and pairing correction method [15], which involves diagonalization of the deformed Woods-Saxon potential [16] to obtain single-particle levels. We consider \(n_{p}=450\) lowest proton levels and \(n_{n}=550\) lowest neutron levels from the \(N_{max}=19\) lowest shells of the harmonic oscillator in the diagonalization process. The standard values of \(\hbar\omega_{0}=41/A^{1/3}\) MeV for the oscillator energy and \(\gamma=1.2\hbar\omega_{0}\) for the Strutinski smearing parameter are used. Additionally, a six-order correction polynomial is applied to calculate the shell correction. For the macroscopic component, we employ the Yukawa plus exponential model [17] with parameters specified in Ref. [18]. The deformation-dependent Coulomb and surface energies are integrated using a 64-point Gaussian quadrature method. In order to construct the PES, a five-dimensional deformation space is utilized, which includes an expansion of the nuclear radius: \[R(\vartheta,\varphi)=cR_{0}\{1 + \beta_{20}\mathrm{Y}_{20}+\frac{\beta_{22}}{\sqrt{2}}[\mathrm{Y} _{22}+\mathrm{Y}_{2-2}] \tag{1}\] \[+ \beta_{40}\mathrm{Y}_{40}+\beta_{60}\mathrm{Y}_{60}+\beta_{80} \mathrm{Y}_{80}\},\] where the quadrupole non-axial deformation parameter \(\beta_{22}\) is explicitly included. For each nucleus, we generate the following 5D grid of deformations: \[\beta_{20} = \phantom{-}0.00\;(0.05)\;0.60,\] \[\beta_{22} = \phantom{-}0.00\;(0.05)\;0.45,\] \[\beta_{40} = -0.20\;(0.05)\;0.20, \tag{2}\] \[\beta_{60} = -0.10\;(0.05)\;0.10,\] \[\beta_{80} = -0.10\;(0.05)\;0.10.\] A grid of \(29,250\) points (nuclear shapes) is employed, with the grid steps specified in parentheses. This primary grid, labeled as (2), is extended through fivefold interpolating in all directions. As a result, an interpolated energy grid consisting of over 50 million points is obtained (see Refs. [19; 20] for more details). 
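As a consistency check on the grid sizes quoted above, the point counts follow directly from the ranges and steps in (2). The sketch below reproduces the 29,250 primary shapes and, assuming the fivefold interpolation subdivides every grid interval into five, the "over 50 million" interpolated points.

```python
def n_points(lo, hi, step):
    """Number of grid points from lo to hi inclusive with the given step."""
    return round((hi - lo) / step) + 1

# Ranges and steps of the primary 5D grid (2).
dims = [(0.00, 0.60, 0.05),   # beta_20
        (0.00, 0.45, 0.05),   # beta_22
        (-0.20, 0.20, 0.05),  # beta_40
        (-0.10, 0.10, 0.05),  # beta_60
        (-0.10, 0.10, 0.05)]  # beta_80

primary = 1
interpolated = 1
for lo, hi, step in dims:
    n = n_points(lo, hi, step)
    primary *= n
    interpolated *= 5 * (n - 1) + 1   # each interval subdivided fivefold (assumption)

print(primary)        # 29250 primary nuclear shapes
print(interpolated)   # ~5.07e7, i.e. over 50 million interpolated points
```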
All the parameters are specified in the last tables of Ref. [21]. ### Level-density parameters Based on the superfluid formalism [22; 23] and an assumption of thermal equilibrium between neutron and proton subsystems, the excitation energies (\(U=U_{Z}+U_{N}\)), entropies (\(S=S_{Z}+S_{N}\)) and intrinsic level densities \(\rho\) are calculated at each temperature \(T\) as \[E_{N(Z)}(T)=2\sum_{k}\varepsilon_{k}n_{\Delta,k}^{N(Z)}-\frac{\Delta_{N(Z)}^{ 2}}{G_{N(Z)}}, \tag{3}\] \[U_{N(Z)}(T)=E_{N(Z)}(T)-E_{N(Z)}(0), \tag{4}\] \[S_{N(Z)}(T)=\sum_{k}\left\{\ln\left[1+\exp\left(-\frac{E_{k}^{N(Z)}}{T}\right)\right]\right.\] \[\left.+\frac{E_{k}^{N(Z)}}{T\left[1+\exp\left(\frac{E_{k}^{N(Z)}}{ T}\right)\right]}\right\}, \tag{5}\] \[\rho=\frac{\exp\left(S\right)}{(2\pi)^{\frac{3}{2}}\sqrt{D}}. \tag{6}\] The determinant of the matrix composed of the second derivatives of the entropy with respect to \(1/T\) and \(\mu=\lambda/T\) is denoted as \(D\). In the equations above, the occupation probabilities \[n_{\Delta,k}^{N(Z)}(T)=\frac{1}{2}\left(1-\frac{\varepsilon_{k}^{N(Z)}- \lambda_{N}}{E_{k}^{N(Z)}}\tanh\frac{E_{k}^{N(Z)}}{2T}\right) \tag{7}\] and the quasiparticle energies \(E_{k}^{N(Z)}=\sqrt{(\varepsilon_{k}^{N(Z)}-\lambda_{N(Z)})^{2}+\Delta_{N(Z)}^ {2}}\) are calculated using the single-particle energies (\(\varepsilon_{k}^{N(Z)}\)) of the Woods-Saxon potential. The constants of the pairing interaction for neutrons (\(G_{N}\)) and protons (\(G_{Z}\)) are adjusted to obtain correct GS pairing gaps (\(\Delta_{N}\) and \(\Delta_{Z}\)) in the MM method with using the Bardeen-Cooper-Schrieffer (BCS) equations: \[N=2\sum_{k}n_{\Delta,k}^{N}(T), \tag{8}\] \[\frac{2}{G_{N}}=\sum_{k}\frac{1}{E_{k}^{N}}\tanh\frac{E_{k}^{N}}{2T} \tag{9}\] for neutrons and \[Z=2\sum_{k}n_{\Delta,k}^{Z}(T), \tag{10}\] \[\frac{2}{G_{Z}}=\sum_{k}\frac{1}{E_{k}^{Z}}\tanh\frac{E_{k}^{Z}}{2T} \tag{11}\] for protons at zero temperature. Using the values of pairing constants obtained, the pairing gaps \(\Delta_{N(Z)}\) and chemical potentials \(\lambda_{N(Z)}\) are determined by solving Eqs.(8)-(11) at given temperatures. We repeated the calculations using the single-particle level energies obtained with the Woods-Saxon potential at each given set of deformations (\(\beta=\beta_{20},\beta_{22}\)) along the fission path. Above the critical temperature (\(T_{cr}\)), the pairing gap disappears, and all thermodynamic quantities revert to those of a noninteracting Fermi system. Generally, a larger density of states close to the Fermi surface at the saddle point (SP) leads to a larger pairing correlations and, as a result, to a larger critical temperature compared to the GS. In our calculations in the superheavy mass region, the critical temperatures for neutrons and protons are about 0.42 MeV at the GS and 0.52 MeV at the SP. The corresponding total excitation energies are approximately \(U_{cr}\approx 5.14\) MeV at the GS and \(U_{cr}\approx 11.27\) MeV at the SP. Fitting the calculated values of the intrinsic level density with the back-shifted Fermi gas expression \[\rho_{FG}(U)=\frac{\sqrt{\pi}}{12a^{\frac{1}{4}}(U-\Delta)^{\frac{1}{4}}}\exp (2\sqrt{a(U-\Delta)}) \tag{12}\] we obtain the level density parameter \(a(U)\) as a function of excitation energy. In the calculations, the energy back-shifts are taken as \(\Delta=12/\sqrt{A}\), 0, and \(-12/\sqrt{A}\) MeV for even-even, odd, and odd-odd isotopes, respectively. 
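A minimal sketch of this last step is given below: once the intrinsic level density (equivalently, the entropy) has been computed at a set of excitation energies, the energy-dependent level-density parameter follows, to leading order in Eq. (12), from inverting the exponent \(2\sqrt{a(U-\Delta)}\). The prefactor of Eq. (12) is neglected here, and the tabulated \(\ln\rho\) values are placeholders rather than results of the microscopic calculation.

```python
import numpy as np

def back_shift(A, kind):
    """Energy back-shift Delta = +12/sqrt(A), 0, -12/sqrt(A) MeV
    for even-even, odd, and odd-odd nuclei, respectively."""
    return {"even-even": 12.0, "odd": 0.0, "odd-odd": -12.0}[kind] / np.sqrt(A)

def level_density_parameter(U, ln_rho, A, kind="even-even"):
    """Leading-order extraction of a(U) from ln(rho) ~ 2*sqrt(a*(U - Delta))."""
    U_eff = np.asarray(U, dtype=float) - back_shift(A, kind)
    return np.asarray(ln_rho, dtype=float) ** 2 / (4.0 * U_eff)

if __name__ == "__main__":
    # Placeholder input: illustrative ln(rho) values at a few excitation energies for A = 296.
    U = np.array([15.0, 25.0, 35.0])        # MeV
    ln_rho = np.array([38.0, 49.0, 59.0])   # dimensionless, illustrative only
    print(level_density_parameter(U, ln_rho, A=296))   # a(U) in MeV^-1
```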
At finite temperatures, the microscopic corrections to energy are replaced by microscopic corrections to the free energy \(\delta F(T)=\delta F_{shell}(T)+\delta F_{pair}(T)\). Hence, the pairing and shell corrections to entropy, as well as the energy, are required at each temperature. In general, the temperature-dependent energy and entropy are defined as \[E(T)=2\int_{-\infty}^{\infty}n(T,\lambda)\varepsilon g(\varepsilon)d\varepsilon \tag{13}\] \[S(T) = -\int_{-\infty}^{+\infty}[(1-n(T,\lambda))\log(1-n(T,\lambda))+ \tag{14}\] \[n(T,\lambda)\log n(T,\lambda)]g(\varepsilon)d\varepsilon,\] respectively. Here, \(g(\varepsilon)\) is the density of the single-particle levels and temperature-dependent occupation probability is taken as \(n(T,\lambda)=1/(1+e^{(\varepsilon-\lambda)/T})\)[24]. To obtain the shell correction at finite temperature \(\delta F_{shell}=E(T)-\tilde{E}(T)-T\left[S(T)-\tilde{S}(T)\right]\), we calculate the energy \(E(T)\) and entropy \(S(T)\) for the discrete spectrum by taking \(g(\varepsilon)=\sum_{k}\delta(\varepsilon-\varepsilon_{k})\) in (13,14). The energy \(\tilde{E}(T)\) and entropy \(\tilde{S}(T)\) for the smoothed spectrum are calculated with occupation number taken as \(n(T,\tilde{\lambda})=1/(1+e^{(\varepsilon-\tilde{\lambda})/T})\) and the smooth density \(\tilde{g}(\varepsilon)\) of the single-particle levels as \[\tilde{g}(\varepsilon)=\frac{1}{\gamma\sqrt{\pi}}\sum_{k=1}e^{-x^{2}}\sum_{n= 0,2,\dots}^{6}c_{n}H_{n}(x), \tag{15}\] where \(x=(\varepsilon-\varepsilon_{k})/\gamma\), \(c_{0}=1\), and \(c_{n+2}=-c_{n}/(n+2)\)[15]. The chemical potentials \((\lambda,\tilde{\lambda})\) are defined by the particle number conservation condition \[N(Z)=2\int_{-\infty}^{+\infty}n(T,\lambda)g(\varepsilon)=2\int_{-\infty}^{+ \infty}n(T,\tilde{\lambda})\tilde{g}(\varepsilon)d\varepsilon. \tag{16}\] The temperature-dependent pairing correction is calculated as \(\delta F_{pair}=E_{pair}-\tilde{E}_{pair}-T\left[S_{pair}-\tilde{S}_{pair}\right]\). The pairing energy, \(E_{pair}(T)=E^{BCS}(T)-\frac{E^{BCS}_{\lambda=0}}{2}(T)\), corresponding to the real single-particle level distribution is calculated using \[E^{BCS}_{N(Z)}(T)=2\sum_{k}\varepsilon_{k}n^{N(Z)}_{\Delta,k}-\frac{\Delta_{N (Z)}^{2}}{G_{N(Z)}}-G\sum_{k}\left(n^{N(Z)}_{\Delta,k}\right)^{2}, \tag{17}\] where \(n^{N(Z)}_{\Delta,k}\) is defined in Eq. (7). Note, that the last term of (17) is often neglected in the calculation of excitation energy under the standard assumption of the BCS model [25]. However, it should be taken into account in the microscopic corrections because of the different pairing constants used in the calculations for the discrete and smoothed spectra. The pairing energy corresponding to the smooth single-particle level distribution \(\tilde{E}_{pair}(T)=\tilde{E}^{BCS}(T)-\tilde{E}^{BCS}_{\Delta=0}(T)\) is calculated using \[\tilde{E}^{BCS}_{N(Z)}(T)=2\int_{-\infty}^{+\infty} \varepsilon\tilde{g}(\varepsilon)\tilde{n}(T)d\varepsilon-\frac{\tilde{ \Delta}_{N(Z)}^{2}}{\tilde{G}_{N(Z)}}\] \[-\tilde{G}\int_{-\infty}^{+\infty}\tilde{g}(\varepsilon)\tilde{n} ^{2}(T)d\varepsilon. \tag{18}\] Here, \[\tilde{n}(T)=\frac{1}{2}\left(1-\frac{\varepsilon-\tilde{\lambda}_{N(Z)}}{ \tilde{E}_{N(Z)}}\tanh\frac{\beta\tilde{E}^{N(Z)}}{2}\right) \tag{19}\] is the occupation probability and \(\tilde{E}^{N(Z)}=\sqrt{\left(\varepsilon-\tilde{\lambda}_{N(Z)}\right)^{2}+ \tilde{\Delta}_{Z(N)}^{2}}\) is the quasiparticle energy. 
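For concreteness, a small sketch of the smoothed single-particle level density of Eq. (15) is given below. The curvature-correction coefficients follow the recursion \(c_{0}=1\), \(c_{n+2}=-c_{n}/(n+2)\) up to sixth order, and the smearing width uses \(\gamma=1.2\,\hbar\omega_{0}\) with \(\hbar\omega_{0}=41/A^{1/3}\) MeV as quoted in the PES subsection above; the discrete single-particle energies fed in here are placeholders, not Woods-Saxon levels.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def smoothed_level_density(eps, sp_levels, gamma):
    """Strutinsky-smoothed density g~(eps), Eq. (15), with a 6th-order
    curvature correction: c0 = 1, c_{n+2} = -c_n/(n+2)."""
    c = np.zeros(7)
    c[0] = 1.0
    for n in (0, 2, 4):
        c[n + 2] = -c[n] / (n + 2)           # c2 = -1/2, c4 = 1/8, c6 = -1/48
    eps = np.atleast_1d(eps).astype(float)
    g = np.zeros_like(eps)
    for e_k in sp_levels:
        x = (eps - e_k) / gamma
        g += np.exp(-x * x) * hermval(x, c)  # physicists' Hermite polynomials H_n
    return g / (gamma * np.sqrt(np.pi))

if __name__ == "__main__":
    A = 296
    hbar_omega0 = 41.0 / A ** (1.0 / 3.0)    # MeV
    gamma = 1.2 * hbar_omega0                # Strutinsky smearing parameter
    levels = np.sort(np.random.default_rng(0).uniform(-40.0, 5.0, 550))  # placeholder spectrum
    grid = np.linspace(-45.0, 10.0, 5)
    print(smoothed_level_density(grid, levels, gamma))   # levels per MeV
```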
The chemical potentials \(\tilde{\lambda}_{Z(N)}\) and pairing gaps \(\tilde{\Delta}_{N(Z)}\) for smoothed spectrum are calculated by solving analogs of BCS equations: \[\frac{2}{\tilde{G}_{N(Z)}}=\int_{-\infty}^{+\infty}\frac{1}{\tilde{E}^{N(Z)}} \tanh\left(\frac{\beta\tilde{E}^{N(Z)}}{2}\right)\tilde{g}(\varepsilon)d\varepsilon, \tag{20}\] \[N(Z)=\int_{-\infty}^{+\infty}\left(1-\frac{\varepsilon-\tilde{\lambda}_{N(Z)}} {\tilde{E}^{Z(N)}}\tanh\frac{\beta\tilde{E}^{N(Z)}}{2}\right)\tilde{g}( \varepsilon)d\varepsilon. \tag{21}\] The pairing constant in Eq. (20) is adjusted to reproduce the smoothed pairing energy at \(T=0\) which is obtained with the assumption of a constant density of pairs near the Fermi level [21]. The pairing correction to entropy is calculated in a similar way. The average entropy \(\tilde{S}^{BCS}_{N(Z)}(T)\) is obtained by replacing the sum over discrete states \(k\) in Eq.(5) by integrals with the average density of states defined in Eq. (15) and replacing the quasiparticle energy \(E^{N(Z)}_{k}\) by its smooth analog \(\tilde{E}^{N(Z)}\). ## III Results and discussion In Fig. 1, we present the PES of \({}^{296}\)Lv projected onto the \((\beta_{20},\beta_{22})\) plane. Among the various trajectories, we selected two for in-depth analysis. The first trajectory, marked in red, encompasses the triaxial shapes, while the second trajectory, depicted in blue, corresponds to the axial shape. The energy landscape is determined by minimizing the energy on a five-dimensional grid (2) concerning \(\beta_{40}\), \(\beta_{60}\), and \(\beta_{80}\). The inclusion of quadrupole nonaxial deformation \(\beta_{22}\) in (1) significantly modifies the landscape and plays a crucial role in the depiction of the first SP. As depicted on the map, the impact of this effect is evident in the substantial reduction of the axial barrier by approximately 1 MeV. However, it is crucial to consider that mapping of energy in a multidimensional space poses some challenges. When minimizing certain deformations, the dimensionality reduction often results in a PES composed of disconnected patches corresponding to multiple minima in the discarded dimensions. Hence, the actual SP is identified through the imaginary water flow technique in the total deformation space. In the initial step, various thermodynamic quantities such as entropy, intrinsic level density, and energy-dependent shell corrections are computed as functions of excitation energy for each deformation along the selected fission path. Neglecting the kinetic energy for the motion in deformation space for each set of deformation \(\beta=(\beta_{20},\beta_{22})\) we have \[U(\beta)+E_{pot}(\beta)=E_{0}\equiv\text{constant}, \tag{22}\] where \(E_{pot}(\beta)=E_{mac}(\beta)+E_{mic}(\beta)\) is the corresponding potential energy and \(U(\beta)\) is the excitation energy. If damping of microscopic correction with excitation energy is taken into account, the Eq. (22) should be replaced with \[U^{*}(\beta)+E_{pot}(\beta,U)=E_{0}, \tag{23}\] where \(E_{pot}(\beta,U)=E_{mac}(\beta)+E_{mic}(\beta,U^{*})\) represents the damped potential energy. Calculating the level density and entropy at \(U^{*}(\beta)\), we consistently take into account the excitation energy-dependent potential as well as the change of thermodynamic quantities due to variation of excitation energy with respect to the damped potential. 
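Equation (23) is implicit, since the microscopic correction on its right-hand side depends on the excitation energy it determines. In the paper this correction is evaluated microscopically through \(\delta F(T)\); the sketch below only illustrates the fixed-point structure of Eq. (23), using a simple exponential damping of the cold shell correction with an assumed damping energy \(E_{d}\) as a stand-in for the microscopic calculation.

```python
import math

def excitation_energy_damped(E0, E_mac, E_mic_cold, E_d=18.5, tol=1e-8, max_iter=200):
    """Solve U* = E0 - E_mac - E_mic(U*) by fixed-point iteration, cf. Eq. (23).

    E_mic(U*) = E_mic_cold * exp(-U*/E_d) is an assumed phenomenological damping
    (E_d in MeV); the paper instead computes the damped correction via dF(T).
    All energies are in MeV.
    """
    U = max(E0 - E_mac - E_mic_cold, 0.0)     # start from the undamped value, Eq. (22)
    for _ in range(max_iter):
        U_new = E0 - E_mac - E_mic_cold * math.exp(-U / E_d)
        if abs(U_new - U) < tol:
            return U_new
        U = U_new
    return U

if __name__ == "__main__":
    # Placeholder numbers: a ground-state-like point with an assumed shell correction
    # of -6 MeV and E0 chosen so that the undamped excitation U_0 is 25 MeV.
    print(excitation_energy_damped(E0=19.0, E_mac=0.0, E_mic_cold=-6.0))  # ~21 MeV, i.e. U*(GS) < U_0
```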
The properties of the nucleus at various deformations are calculated at the same value of total energy \(E_{0}\) or, equivalently, at the given excitation energy of the GS: \(U_{0}=E_{0}-E_{mac}(GS)-E_{mic}(GS)\). The energy-dependent microscopic part of the potential \(E_{mic}(\beta,U^{*})\) is obtained as explained in Sec. II.2. ### Fission Paths In our detailed analysis, we will focus on the entropy and level density parameters along specific fission paths. We have chosen the paths that exhibit the greatest gradient drop in all variables, and we have determined the potential to ensure that these paths pass through two uniaxial SP. The axial splitting path is depicted in blue in Fig. 2. Additionally, the macroscopic and microscopic components of the potential energy are shown in green and red, respectively. In Fig. 2(b), we show the same analysis, maintaining the same color hierarchy, but for the non-axial path. Here, we observe notable differences with Fig. 2(a). There is essentially no macroscopic barrier along this path. Instead, it is the shell effect, as expected, that creates the barrier and stabilizes the system. Furthermore, we can clearly see that the triaxial barrier is extensive and spread out, whereas the axial barrier is more compact. The competition between the height and shape of these barriers becomes crucial when considering the transmission process, especially when the available energy is lower than the fission barrier (tunneling). The influence of excitation energy on the GS and the SP is evident as shown in Fig. 3. The GS represents the lowest energy configuration, characterized by a well-defined shell structure. As the excitation energy increases, it introduces additional energy into the system, potentially destabilizing the shell structure. On the other hand, the SP corresponds to a highly deformed configuration, where the competition between shell effects and the deformation energy is crucial. The excitation energy has a more indirect impact on the SP, primarily affecting the overall energy of the system rather than the specific internal structure. Nevertheless, the destructive nature of excitation energy on the entire fission barrier is observed. Figure 1: The potential energy landscape of \({}^{296}\)LV is projected onto the \((\beta_{20},\beta_{22})\) plane. The dashed line represents the potential fission path from the GS to the SP. The energy scale is measured in MeV and is calculated relative to the macroscopic energy at the spherical shape. Figure 2: The variation of potential energy, macroscopic energy, and microscopic correction at zero excitation energy is depicted as a the function of the deformation parameter \(\beta_{2}\) for axial (a) and triaxial (b) paths of fission of \({}^{296}\)LV. At higher excitation energies, the barrier gradually diminishes, as the influence of shell effects becomes less pronounced, and the barrier approaches the macroscopic energy. Eventually, the threshold for fission disappears as the increasing excitation energy washes out the barrier. Also, with increasing \(U_{0}\), the potential energy as a function of \(\beta_{2}\) becomes flat near the GS. In Fig. 4, we present the variation of fission barriers \(E_{pot}-E_{pot}^{GS}\) with excitation energy for both axial (solid lines) and triaxial (dashed lines) fission paths. Note that as stated before, \(U_{0}\) is the excitation energy with respect to cold GS. The actual excitation energy of the GS is smaller than \(U_{0}\) because of the damping effects. 
For instance, at \(U_{0}=25\) MeV the GS excitation energy with respect to damped potential is \(U^{*}(GS)=20.93\) MeV. The damping effect reduces fission barrier corresponding to triaxial(axial) path from \(B_{f}^{tri(ax)}=6.25(7.31)\) MeV for cold nucleus to \(B_{f}^{tri(ax)}=3.05(3.93)\) MeV at \(U_{0}=25\) MeV. Also, the damping effect shifts the triaxial SP from \(\beta_{2}=0.37\) to \(\beta_{2}=0.33\), and the axial SP from \(\beta_{2}=0.25\) to \(\beta_{2}=0.22\). ### Level-density parameter In this work, we explore variation of level-density parameter with deformation along the fission path. The level-density parameter at the GS corresponds to particle emission channels such as neutron emission (\(a_{n}\)), while its value at the SP corresponds to fission (\(a_{f}\)). It is widely recognized that even the most minor change in the values of \(a_{n}\) and \(a_{f}\) can significantly impact the accuracy of calculations of the nuclear survival probability. Our investigation encompasses the entire cleavage path, both axially (Fig. 5(a)) and non-axially (Fig. 5(b)) symmetric at \(U_{0}=15\) and \(35\) MeV. Evidently, the choice of pathway significantly affects the behavior of the level-density parameter. The variability is more pronounced along the axial path and appears smoother along the non-axial path. Additionally, the results demonstrate that the intricate topology of the PES has a more significant impact on the parameter at lower energies than anticipated. As evident from Fig. 5 (see dotted lines), the damping of microscopic correction systematically influences both fission pathways. Although, this effect may seem to be small, it is not negligible because it affects the GS and the SP differently. Therefore, the damping has significant impact on \(a_{f}/a_{n}\) ratio. It is interesting to note that at small \(U_{0}\) the value of \(a(SP)\) is comparable or even smaller than \(a(GS))\). This is in line with the conclusion of Ref. [26] that the ratio \(a_{f}/a_{n}\) exhibits an increasing trend with excitation energy approaching its asymptotic value less than \(1.1\). To highlight damping effect more prominently, the ratio of level-density parameters calculated with and without damping along axial (solid line) and triaxial (dotted line) fission paths is presented in Fig. 6. The calculations are performed at \(U_{0}=30\) MeV. The most significant impact occurs near the GS. This effect can be attributed to the large shell correction and large excitation energy at the GS. Similar but less pronounced effect is observed for the axial path at deformed minimum around \(\beta_{20}\sim 0.4\). It is noteworthy that the observed effect of damping never exceeds \(5\%\). ### Entropy Figure 7 displays two panels illustrating the variations in entropy along the axial and non-axial fission paths at energies \(U_{0}=15\) and \(35\) MeV. Dotted lines depict the entropy values calculated with damped potential, while the solid lines represent the results without accounting for damping. By comparing these two sets of curves, we can assess the influence of damping on the entropy along these paths. Notably, regardless of the chosen path, the entropy systematically decreases when potential damping Figure 3: The variation of potential energy along axial (a) and triaxial (b) fission path at \(U_{0}=0,15,35\) MeV, together with the macroscopic energy. Here, \(U_{0}\) is the excitation energy calculated concerning the GS without the damping of shell correction. is included. 
It is apparent that at low energies the behavior of entropy to large extent is determined by the intricacies of the potential energy and of the density of single-particle states along the path. The nucleus undergoes significant changes of deformation as it moves from the GS to the SPs. Along this paths, the elongation and stretching of the nucleus lead to increase of the density of single-particle states and, thus, entropy of the deformed configurations is enhanced. However, at low energies the impact of excitation energy on entropy is dominant so that the maximum entropy is observed at the GS. With increase of excitation energy the trend of entropy curves are still defined by the shape of potential energy, but it is less pronounced because of the growing role of the density of single-particle states. The account for the damping, completely changes the behavior of entropy. The damping of microscopic correc Figure 4: The variation of fission barriers with excitation energy \(U_{0}=0,15,35\) MeV (a-c), for axial (solid lines) and triaxial (dotted lines) fission path. Figure 5: (a): Level-density parameters obtained for axial fission path at \(U_{0}=15,35\) MeV with (dotted lines) and without (solid lines) consideration of damping of microscopical effects. (b): The same as panel (a), but for triaxial fission path. Figure 6: The ratio of level-density parameters obtained with account for damping of potential to those obtained without account for damping of potential along axial (solid line) and triaxial (dashed line) fission path at \(U_{0}=30\) MeV. tion suppresses the role of excitation energy in entropy but leaves the role of density of single-particle states untouched. As seen from Fig. 3, because of the significant damping of microscopic corrections at the GS, the excitation energy of the GS decreases more in comparison with the deformed configurations. Therefore, the gain in entropy of the GS due to the larger excitation energy is reduced. This leads to smearing of the entropy as a function of deformation, irrespective of whether triaxiality is taken into account or not. At larger \(U_{0}\), due to the significant reduction of the excitation energy for the configurations around the GS together with the larger density of states for deformed configurations, the entropy curve gets flattened around the GS. Moreover, the entropy at the GS becomes smaller than entropy of SPs. It is worth noting that the difference between entropies at the axial and triaxial SPs decreases independent of whether or not the damping effect is taken into account. In Fig.7, both energy and deformation effects contribute to the displayed entropy curves. To elucidate the effect of deformation on entropy, we performed the calculations with fixed excitation energy along the path. Note that the assumption of constant excitation energy is not realistic, and we do it only to study the effect of deformation on entropy. In Fig. 8, the excitation energies are fixed as \(U=15\) and \(35\) MeV with respect to undamped potentials (solid lines). Panel (a) shows the ratio of entropies calculated at each deformation along the axial fission path to those at the GS. Panel (b) shows the same as panel (a), but for the triaxial path. The trend of these curves clearly resembles the shape of the potential energy. The effect of structure gradually decreases with excitation energy. It is seen in Fig. 8, that at a given excitation energy, the behavior of entropy is determined by the increased complexity of the deformed configurations. 
Entropy is generally higher at the SP than at the GS because the SP represents a more disordered and diverse configuration of nucleons and benefits from using multiple energy shells. The calculations at \(U^{*}\) values corresponding to the considered energies are shown with dotted lines. As seen, the structure effect is stronger in this case. The damping at the GS is larger than that at larger deformations. Hence, the enhancement of the entropy ratios at the SP seen from the figure is both due to larger state density and slower potential damping. ### Fission probability To investigate the impact of entropy on decay properties, we employ a simplified approach to calculate the decay rates for two distinct paths. Our analysis involves discretizing the trajectories into a finite number of grid points: \(i=(1:n_{\text{max}})\) for the axial path and \(i=(-n_{\text{max}}:-1)\) for the triaxial path. The grid point \(i=0\) corresponds to the configuration of the GS. Each point is characterized by a set of deformation parameters \(\beta_{i}=(\beta_{20}^{(i)},\beta_{22}^{(i)})\) The probability \(n(\beta_{i})\) of the nuclear system being populated at time \(t\) in a state characterized by deformation \(\beta_{i}\) is determined by a set of master equations [27] \[\frac{dn(\beta_{i})}{dt}=\Lambda_{i,i+1}n(\beta_{i+1})+\Lambda_{i,i-1}n(\beta_ {i-1})-\Lambda_{i\pm 1,i}n(\beta_{i}). \tag{24}\] We assume that only neighboring systems are connected. Consequently, within our simplified model, systems belonging to different trajectories (axial or triaxial) are only linked through the GS. This approximation seems appropriate at low excitation energies. In Eq. (24), the symbol \(\Lambda_{if}\) represents the transition rates, which can be expressed in terms of microscopic transition probabilities and the level density of the final states. Assuming the same transition strength \(\lambda_{0}\) in both axial and triaxial Figure 7: Comparison between entropy along axial (blue lines) and triaxial fission paths (red lines) at excitation energies \(U_{0}=15\) and \(35\) MeV with (dotted lines), and without (solid lines) consideration of the damping of potential. paths, we calculate the transition width as \[\Lambda_{if} =\lambda_{if}\rho(\beta_{f}),\] \[\lambda_{if} =\lambda_{fi}=\frac{\lambda_{0}}{\sqrt{\rho(\beta_{i})\rho(\beta_{ f})}}. \tag{25}\] Here, we make an assumption that once the system reaches any of the SPs, it certainly undergoes fission. Since the system inevitably undergoes fission, the fission probabilities through the SP of axial \([n_{ax}^{SP}(t)]\) and triaxial \([n_{tri}^{SP}(t)]\) paths are connected as \(n_{ax}^{SP}(\infty)+n_{tri}^{SP}(\infty)=1\). The ratios of fission probabilities, corresponding to an axial and triaxial paths calculated with and without accounting for damping of potential, are displayed in Fig. 9 (a) as a function of \(U_{0}\). The ratio of entropy at the SP of axial fission path to that of triaxial fission path \(S_{ax}^{SP}/S_{tri}^{SP}\) is also displayed in the panel (b). The contribution of the considered paths to total fission depends on their relative entropies at the SP. The triaxial path exhibits dominance at lower energies, while at higher energies, the contributions of both the axial and triaxial paths become comparable. As seen from Fig. 9 (b), the \(S_{ax}^{SP}/S_{tri}^{SP}\) ratios both for the calculations with and without damping of potential are approaching asymptotic values after \(U_{0}=25\) MeV. 
Consequently, the ratios of fission probabilities are expected to exhibit a constant behavior at \(U_{0}>25\) MeV. However, it is worth noting that when the damping effect is taken into account, the entropy values from the GS to the SP get closer to each other with increasing excitation energy so that, as shown in Fig. 7 (b), at \(U>25\) MeV they become almost constant. Hence, the assumption of two distinct paths for the evaluation of fission probability becomes inadequate. This finding suggests that in order to accurately model the fission process, particularly when considering the damping effect, the competition between the axial and triaxial paths should be accounted at all intermediate points along the paths, rather than solely at the GS. In the absence of damping, the GS entropy remains higher than that at the SP even at higher excitation en Figure 8: (a) The ratios of entropies calculated at each deformation at fixed excitation energies \(U=15\) and \(35\) MeV to those of the GS with considering potentials without damping (solid lines) for axial fission path. The calculation at excitation energies with respect to the damped potentials \(U^{*}\) are presented with dotted lines. (b) the same as panel (a), but for the triaxial path. Figure 9: Panel (a): The ratio of fission probability through the SP of axial fission path to that of triaxial fission path as a function of excitation energy. Panel (b): The ratio of entropy at the SP of axial fission path to that of triaxial fission path. The solid blue lines and dashed red lines indicate the calculations with and without damping of potential, respectively. ergies. This implies that there is a probability for the system to return to the GS and subsequently have the opportunity to choose another path. However, when the damping effect is taken into account, because of relatively small difference between SP and GS entropies once the system selects a path, it progresses toward the SP and does not revert back to the GS. With damping of potential and decreasing of the variation of entropy along the fission paths, the axial path, which has a SP closer to the GS, achieves a non-zero decay probability more rapidly than the triaxial path. This effect is represented in decay constants \(\lambda\) which are determined by fitting the \(n^{SP}(t)\) with the following form \[n^{SP}(t)=n^{SP}(\infty)(1-e^{-\lambda t}).\] The ratio \(\lambda_{tri}/\lambda_{tri}\) of decay constants corresponding to the triaxial and axial pathways is displayed in Fig. 10. As shown, with account of the damping effect the axial path reaches its asymptotic probability \(n^{SP}_{ax}(\infty)\) faster than triaxial path and this effect increases with excitation energy. ## IV Conclusions Our analysis emphasizes that in the study of fission, it is essential to consider not only the shape of the PES but also the entropy, which incorporates both energy and structural effects. As demonstrated for the superheavy nucleus \({}^{296}\)Lv, variations in structural effects at different points of the fission path can lead to significant changes of the level density and, correspondingly, of the entropy, especially in the case of axial symmetry. Furthermore, our work highlights the significant influence of suppressing of shell effects in certain regions, where it is strong and cannot be neglected. At sufficiently high excitation energies, the fission process exhibits iso-entropic behavior. While, at lower energies, the entropy demonstrates more pronounced variations. 
By employing the master equation, we computed the decay probabilities that vary with time, considering the permissible symmetry in the fission pathway. As shown, the time-dependent fission probability is influenced by the behavior of entropy along the fission paths. The decay constants corresponding to the axial and triaxial paths are comparable. To provide a more precise analysis, fission calculations should be conducted on a multi-dimensional PES, considering different trajectories leading to the SP region. However, to gain a comprehensive understanding, one should expand the study to encompass additional degrees of freedom and allowing the system to select an optimal path. This extension will provide a more thorough exploration of the fission process and its dependence on entropy. ## Acknowledgements M. K. was co-financed by the COPIGAL project. T.M.S, G.G.A., and N.V.A. were supported by Ministry of Science and Higher Education of the Russian Federation (Moscow, Contract No. 075-10-2020-117).
2306.10397
Enhancing the Prediction of Emotional Experience in Movies using Deep Neural Networks: The Significance of Audio and Language
Our paper focuses on making use of deep neural network models to accurately predict the range of human emotions experienced while watching movies. In this setup, there are three distinct input modalities that considerably influence the experienced emotions: visual cues derived from RGB video frames, auditory components encompassing sounds, speech, and music, and linguistic elements encompassing actors' dialogues. Emotions are commonly described using a two-factor model including valence (ranging from happy to sad) and arousal (indicating the intensity of the emotion). In this regard, a plethora of works have presented a multitude of models aiming to predict valence and arousal from video content. However, none of these models contains all three modalities, with language consistently omitted across all of them. In this study, we comprehensively combine all modalities and conduct an analysis to ascertain the importance of each in predicting valence and arousal. Making use of pre-trained neural networks, we represent each input modality in our study. In order to process visual input, we employ pre-trained convolutional neural networks to recognize scenes [1], objects [2], and actions [3,4]. For audio processing, we utilize a specialized neural network designed for handling sound-related tasks, namely SoundNet [5]. Finally, Bidirectional Encoder Representations from Transformers (BERT) models are used to extract linguistic features [6] in our analysis. We report results on the COGNIMUSE dataset [7], where our proposed model outperforms the current state-of-the-art approaches. Surprisingly, our findings reveal that language significantly influences the experienced arousal, while sound emerges as the primary determinant for predicting valence. In contrast, the visual modality exhibits the least impact among all modalities in predicting emotions.
Sogand Mehrpour Mohammadi, Meysam Gouran Orimi, Hamidreza Rabiee
2023-06-17T17:40:27Z
http://arxiv.org/abs/2306.10397v1
Enhancing the prediction of emotional experience in movies using deep neural networks: the significance of audio and language ###### Abstract Our paper focuses on making use of deep neural network models to accurately predict the range of human emotions experienced during watching movies. In this certain setup, there exist three clear-cut input modalities that considerably influence the experienced emotions: visual cues derived from RGB video frames, auditory components encompassing sounds, speech, and music, and linguistic elements encompassing actors' dialogues. Emotions are commonly described using a two-factor model including valence (ranging from happy to sad) and arousal (indicating the intensity of the emotion). In this regard, a Plethora of works have presented a multitude of models aiming to predict valence and arousal from video content. However, non of these models contain all three modalities, with language being consistently eliminated across all of them. In this study, we comprehensively combine all modalities and conduct an analysis to ascertain the importance of each in predicting valence and arousal. Making use of pre-trained neural networks, we represent each input modality in our study. In order to process visual input, we employ pre-trained convolutional neural networks to recognize scenes [1], objects [2], and actions [3, 4]. For audio processing, we utilize a specialized neural network designed for handling sound-related tasks, namely SoundNet [5]. Finally, Bidirectional Encoder Representations from Transformers (BERT) models are used to extract linguistic features [6] in our analysis. We report results on the COG-NIMUSE dataset [7], where our proposed model outperforms the current state-of-the-art approaches. Surprisingly, our findings reveal that language significantly influences the experienced arousal, while sound emerges as the primary determinant for predicting valence. In contrast, the visual modality exhibits the least impact among all modalities in predicting emotions. Sogand Mehrpour Mohammadi\({}^{1}\), Meysam Gouran Orim\({}^{2}\), Hamidreza Rabiee\({}^{3}\)\({}^{1}\)Department of Computer Science, University of Mazandaran, Babolsar, Iran \({}^{2}\)Department of Computer Engineering, Shahrood University of Technology, Shahrood, Iran \({}^{3}\)Department of Electrical Engineering, Islamic Azad University, Karaj Branch, Karaj, Iran Email: [email protected] Emotion Recognition, Multimodal, Sound, Text, Spatio-temporal, Affective Computing ## 1 Introduction In recent years, notable advancements have been made in the field of emotion recognition specifically pertaine to video and sound analysis [1]. However, there remain challenges as predictive models need to improve their accuracy and information processing capabilities to efficiently disentangle the influence of diverse modalities. These advancements will empower multimedia creators in the advertising or film industries to employ these improved predictive models as valuable tools. Researchers in the fields of neuroscience and psychology have extensively investigated the impacts of various input stimulus modalities on evoked emotions [8, 9, 10] and [11, 12]. Multimodal approaches to emotion identification are considered among the most important techniques for obtaining computer intelligence, as humans can perceive and comprehend emotions via the combined information conveyed in sound, video, and text [13, 14]. 
In the past, research on emotion recognition has predominantly directed on discriminative emotion features and recognition models that rely on single modalities such as audio signals [15] or facial expressions in video. In this work, we place significant emphasis on considering context information, knowing its importance in developing a robust emotion prediction model [16]. There is a scarcity of research focused on anticipating the feeling responses that videos and other multimedia content can elicit in viewers. With the progression of neural networks, numerous studies have endeavored to extract suitable global and local features from raw speech signals and RGB images, leading to improved performance in emotion recognition tasks [17]. These collective efforts have enhanced the performance of emotion recognition systems. Recent advancements in emotion recognition systems are hindered by a lack of comprehensive multimodal analysis, as they fail to consider all the relevant modalities that influence the recognition of emotions. In this study, we harness the capabilities of deep convolutional neural networks (CNNs) to present a novel three-dimensional model of effective content. This model enables us to precisely predict the evoked emotion from movies, music, and text, with the ability to independently forecast arousal and valence as distinct dimensions of emotion. To extract features from static RGB frames, we employ state-of-the-art deep convolutional neural networks (CNNs). These advanced CNN models empower us to capture both picture-related features and motion-related features from the RGB frames. The I3D [3] architecture, which combines spatial and temporal networks, is employed to determine activity in a manner similar to CNNs. In order to extract information from scenes [1] and objects [2] in video frames, the spatial network is used. On the other hand, the temporal network captures and learns information according to the motion of the camera and objects across successive frames. Furthermore, sound representation is computed using soundNet [5], which is employed to extract the audio features. SoundNet has illustrated notable performance in the classification of acoustic scenes and objects by utilizing the inherent synchronization between vision and sound during model training. This natural synchronization allows for more efficient utilization of the advantages presented by both visual and audio information. To obtain text features from movie subtitles, we employed the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model [6]. By feeding the movie subtitles into the BERT model, we achieved meaningful text representations that are vital for emotion recognition. In other words, the BERT model is utilized for extracting deep bidirectional representations from unlabeled text. In our study, we employed fully connected layers to integrate proposed multimodal networks. This integration was performed using the expanded COGNIMUSE dataset, [7]. ### Related work Currently, the majority of computer vision research on emotion identification mainly focuses on analyzing human facial expressions in both video and still images [18]. When inspecting research related to predicting emotions from videos, it is evident that the majority of works utilize multimodal techniques that consider and combine data from diverse modalities. 
Moreover, the field of computer vision has shown a growing interest in exploring the relationships and interactions between diverse modalities, including the synergies between sound and vision modalities [19]. In this paper, we aim to explore the relationship between sound and visual semantic information in the domain of emotion detection. While the combination of diverse modalities, such as sound and language, has been increasingly studied in the field of speech recognition [20], we extend this exploration to the realm of emotion detection, searching for uncover how these modalities synergistically contribute to the exact determination and understanding of emotions in multimedia content. Additionally, the relationship between text and visual modalities has been thoroughly investigated for the question-answering task [21].In this paper, our main goal is to develop a comprehensive representation that empowers us to conceive emotions through the integration of multiple modalities, including text, sound, and visual information extracted from both static images and motion. Unlike previous works that concentrates on only two modalities, our aim is to capture the rich and complex nature of emotions by harnessing the synergistic interaction of all available modalities. To extract text features from the movie subtitles in our dataset, we utilized a pre-trained BERT model [6].By feeding the movie subtitles into this pre-trained BERT model, we can extract meaningful text features that capture the linguistic content and context for further analysis and emotion anticipation. The main contribution of our paper is to demonstrate the effectiveness of a deep model that leverages three important natural modalities for emotion recognition. We provide an in-depth description of our methodology and thorough explanations of the conducted experiments in the following sections. Through these detailed analyses, we aim to showcase the robustness and accuracy of our approach in recognizing and predicting emotions across several modalities. In Section 2, we present our approach for developing a comprehensive understanding of emotion representations by incorporating concepts from sound, text, and visuals. In section 3, we present a detailed account of the experiments conducted to determine the effectiveness of our proposed representations. We present various experimental setups and analyze the performance of our model in anticipating and classifying emotions based on the extracted features. Figure 1: Sample frames from the COGNIMUSE dataset to show our different modalities, i.e., video frame, sound and subtitles,(a) low valence, (b) high valence. The experimental results serve to validate the robustness and efficacy of our approach across diverse datasets and scenarios. ## 2 Multimodal Learning of Concepts In this section, our main goal is to achieve a comprehensive understanding of a given notion by exploring and comprehending its various modalities, including text, sound, and still images. By considering and integrating information from several modalities, we aim to capture a more complete and nuanced understanding of the targeted notion. We introduce a novel multimodal approach that utilizes all available modalities, including visual, sound, and text characteristics, to train our emotion prediction model. To achieve this, we employ the visual modality, which encompasses scene comprehension, object detection, and motion data. Additionally, the acoustic content of the second modality is processed. 
Last but not least, we employ text processing techniques to train multimodal networks within the context of textual information. To begin with, our network is organized to automatically detect visual concepts in both spatial and temporal dimensions, addressing the numerous challenges associated with this task. Secondly, in order to tackle the sound processing challenge, it is vital for us to comprehend the audio segment that corresponds to a video concept. Finally, we extract the wording directly from the movie's subtitles, which may correspond to specific scenes, several instances, or different visual ideas. We utilize pre-trained networks to learn representations from the sound, vision, and text modalities in order to recognize emotions. Prior to combining them to produce audio-visual-text features, each of these input categories is passed through fully connected layers to decrease dimensionality and adapt the representations for emotion prediction. During the training phase of our presented network architecture, the weights of these fully connected layers are learned. Figure 1 shows a few sample frames with high- and low-valence emotions. Figure 2: Video, sound, and text modalities pass through our proposed network for emotion classification using fully connected layers. ### Visual concept _Still Image:_ Following a similar approach to previous work [22], we divided the videos into five-second segments and extracted frames from each segment. Then, to extract features from static RGB frames, we employ the pre-trained ResNet-152 [2] model. The ImageNet dataset, which offers spatial information on the category of objects, is used to train the ResNet network. _Scene information:_ To ascertain the semantic importance of visual attributes in scenes, we utilize pre-trained VGG networks [24] that were trained on the Places dataset [1] for the scene classification task. To feed the fully connected layer of our network, we calculated the visual features from successive frames for each movie. After obtaining the visual features, we utilized max-pooling to downsample the feature representation and decrease its dimensionality. _Motion information:_ To retrieve motion characteristics from the input image frame sequence, we utilized a two-stream inflated 3D ConvNet (I3D) [3]. The I3D model is based on 2D ConvNet inflation and was trained on the Kinetics dataset for the action recognition task. To extract spatio-temporal features, we applied a two-stream method with a combination of still frames and optical flow. Specifically, we fed the still RGB images into the pre-trained I3D models, which were originally trained for action recognition tasks. This allowed us to derive relevant spatio-temporal features from the video data. ### Sound In the COGNIMUSE dataset, the accompanying sound for the movies can be classified as either speech or music. As part of our preprocessing method, each training and test sample is separated into non-overlapping fixed-length sound waveform files of approximately five seconds. This approach is similar to the one used in the work of [22]. Next, we extract features from the raw audio waveforms using SoundNet [5].
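To make the visual branch concrete, the following is a minimal PyTorch sketch of the frame-level feature extraction described above: per-frame ResNet-152 embeddings max-pooled over a five-second segment. The frame-loading helper and the exact pooling choice are illustrative assumptions rather than the authors' exact pipeline.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for ResNet-152 inputs.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pre-trained ResNet-152; dropping the final classification layer leaves the
# global-average-pooled 2048-dimensional embedding per frame.
# (Newer torchvision versions use the `weights=` argument instead of `pretrained=`.)
backbone = models.resnet152(pretrained=True)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def segment_features(frame_paths):
    """Max-pool per-frame ResNet-152 features over one five-second segment.
    `frame_paths` is a hypothetical list of frame image files for the segment."""
    feats = []
    with torch.no_grad():
        for path in frame_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(feature_extractor(img).flatten(1))  # shape (1, 2048)
    return torch.cat(feats, dim=0).max(dim=0).values          # shape (2048,)
```

The scene (Places) and motion (I3D) branches follow the same pattern, with a different backbone and pooling operator.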
\begin{table} \begin{tabular}{|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{**Arousal**} & \multicolumn{2}{c|}{**valence**} \\ \hline \hline **model** & **Accuracy(\%)** & **Accuracy \(\pm\) 1 (\%)** & **Accuracy (\%)** & **Accuracy \(\pm\) 1 (\%)** \\ \hline **FC (RGB frame + OF + Audio)**[22] & 53.32 & 94.75 & 43.10 & 90.51 \\ \hline **LSTM (RGB frame + OF + Audio)**[22] & 48.64 & 95.28 & 37.20 & 89.22 \\ \hline \hline **Visual (Resnet + Places + I3D)** & 42.23 & 90.55 & 41.71 & 91.23 \\ \hline **Sound (SoundNet)** & 58.59 & 95.11 & **56.28** & **97.20** \\ \hline **Text (BERT)** & **58.86** & **95.13** & 32.55 & 82.62 \\ \hline **Visual+ Sound** & 54.30 & 94.20 & 32.30 & 82.90 \\ \hline **Resnet+ Sound** & 58.59 & 95.11 & 36.43 & 86.20 \\ \hline **Text+ Sound** & 58.59 & 95.11 & 30.82 & 83.17 \\ \hline **Text + visual** & 54.22 & 94.36 & 30.40 & 83.22 \\ \hline **Text + Resnet** & 58.22 & 95.11 & 36.32 & 86.18 \\ \hline **Visual+ Sound + Text** & 56.86 & 94.65 & 42.19 & 91.46 \\ \hline \end{tabular} \end{table} Table 1: Accuracy results on COGNIMUSE dataset along with the arousal and valence experienced emotion labels. \begin{table} \begin{tabular}{|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{**Arousal**} & \multicolumn{2}{c|}{**valence**} \\ \hline \hline **Model** & **Accuracy (\%)** & **Accuracy \(\pm\) 1 (\%)** & **Accuracy (\%)** & **Accuracy \(\pm\) 1 (\%)** \\ \hline **FC (RGB frame + OF + Audio)**[22] & 31.20 & 72.94 & 30.33 & 66.95 \\ \hline **LSTM (RGB frame + OF + Audio)**[22] & 30.80 & 71.69 & 22.54 & 57.63 \\ \hline **Malandrakis et al.**[23] & 24.00 & 57.00 & 24.00 & 64.00 \\ \hline \hline **Visual (Resnet + Places + I3D)** & 42.67 & 86.99 & 49.20 & 93.81 \\ \hline **Sound (SoundNet)** & 58.51 & 95.10 & **55.85** & 96.21 \\ \hline **Text (BERT)** & **58.56** & 95.10 & 32.45 & 83.99 \\ \hline **Visual+ Sound** & 58.51 & 95.10 & 54.85 & **97.45** \\ \hline **Text+ Sound** & 58.51 & 95.10 & 41.79 & 83.65 \\ \hline **Text + visual** & 58.51 & 94.94 & 31.53 & 84.21 \\ \hline **Text + Resnet** & 56.89 & 94.79 & 30.41 & 83.70 \\ \hline **Sound + Text + Resnet** & 54.42 & 94.07 & 37.48 & 87.23 \\ \hline **Visual+ Sound + Text** & 54.45 & **96.35** & 32.48 & 83.99 \\ \hline \end{tabular} \end{table} Table 2: Accuracy results on COGNIMUSE dataset along with the arousal and valence dimension on intended emotion label. ### Text In this paper, we leverage the multimodal properties of sound, video, and text as sources of supervision to predict emotion labels. By incorporating information from several modalities, we intend to increase the accuracy and robustness of emotion prediction. We use three modalities that are able to mutually supervise each other during the training process. Large-scale language models, such as BERT (Bidirectional Encoder Representations from Transformers) [6], have shown significant performance in several natural language processing (NLP) tasks, showcasing their efficiency at both the word and sentence levels. These models have significantly advanced the field by capturing rich contextual information and semantic representations, allowing them to earn state-of-the-art results in diverse NLP applications. 
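As a concrete illustration of the text branch used in this work, the sketch below shows one way subtitle lines could be embedded with a pre-trained BERT encoder via the Hugging Face transformers library; the checkpoint name and the [CLS]-pooling choice are assumptions and not necessarily the configuration used here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Pre-trained BERT encoder; "bert-base-uncased" is an assumed checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()

def subtitle_features(subtitle_lines):
    """Return one 768-dimensional embedding per subtitle line, taken from the
    [CLS] position of the last hidden layer."""
    batch = tokenizer(subtitle_lines, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**batch)
    return outputs.last_hidden_state[:, 0, :]  # shape (num_lines, 768)

# Example: embed the subtitle lines of one five-second segment.
features = subtitle_features(["I can't believe you did that.", "Please, calm down."])
```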
To extract bidirectional linguistic semantic data from the subtitles of each movie, we leverage the BERT (Bidirectional Encoder Representations from Transformers) model. Before using word2vec to embed each word from the subtitles, we apply a pre-processing step to remove English stop words [25]. In our paper, the BERT model enables us to capture the relationships between words, the meaning of sentences, and the overall structure of the subtitle text, which in turn contributes to our multimodal emotion anticipation model. ### Modality integration To capture the correlations between diverse modalities and their influence on elicited emotion, we use the synchronous nature of sound, text, and visual input. We utilize pairs of video and text (from the subtitles), as well as pairs of images and sound (from videos). Our proposed multimodal network, depicted in Figure 2, leverages fully connected layers to incorporate and process diverse modalities for emotion classification. During training, the weights of the fully connected layers in our network are learned and optimized to precisely predict emotions. Subsequently, the outputs of these fully connected layers are concatenated and passed through another two fully connected layers. Using our multimodal network, we can predict emotion arousal and valence labels while individuals watch movies. ## 3 Experiment To determine the performance of our model, we conduct tests using the COGNIMUSE dataset [7], which is composed of seven half-hour continuous movie clips. This dataset provides emotion annotations in terms of continuous arousal and valence ratings, ranging from -1 to 1. The primary focus of this work is to predict intended, expected and experienced emotions. In other words, we aim to understand and predict the emotions that individuals intend to convey, expect to feel, and actually experience while interacting with various multimedia content. ### Experimental setup To ensure a comprehensive comparison, we also consider the results of intended emotions, which reflect the intentions of the filmmakers. These intentions are assessed by the same specialist who evaluates the other emotion categories. Additionally, to address potential challenges in emotion categorization, we employ seven different bins for each set of emotions. This method allows for a more nuanced analysis and interpretation of the emotions expressed in the multimedia content. To evaluate the performance of our proposed models, we utilize a leave-one-out cross-validation method. This means that for each iteration, we train the model on all samples except one, and then test its performance on the left-out sample. This process is repeated for each sample in the dataset. To evaluate the accuracy of our emotion classification, we use two metrics, namely accuracy and accuracy\(\pm\)1. Accuracy measures the percentage of correctly classified emotions, while accuracy\(\pm\)1 considers emotions that are classified within one bin of the ground-truth emotion. In order to pre-process the data and remove noise, we apply Malandrakis' method [23]. After noise removal, we further apply the Savitzky-Golay filter [26] to smooth the data and decrease any remaining fluctuations. To guarantee consistency and comparability, we also rescale the preprocessed data to the range of -1 to 1. Valence and arousal are independently classified into seven distinct classes by the models. To train these models, we utilize the stochastic gradient descent (SGD) optimization algorithm.
The models are trained with a learning rate of 0.005, a weight decay of 0.005, and the softmax function with a temperature of T = 2. During training, we run the models for 50 epochs with a batch size of 128. To avoid overfitting and enhance efficiency, we utilize early stopping with a patience of 25 epochs. ### Results The results presented in Tables 1 and 2 showcase the accuracy of our proposed models for the 7-class classification of experienced emotion prediction and intended emotion prediction, respectively. We followed the test-training split and evaluation design presented in [22] to ensure consistency in our evaluation methodology. Each video clip in our dataset is divided into five-second segments, with synchronized sound and subtitles for each segment. The raw RGB video frames are then fed into ResNet-152, a model pre-trained on the ImageNet dataset, to extract features from each frame. These features are subsequently used as input to the final classification layer for emotion prediction. A fully connected layer is added on top of the retrieved features after applying max-pooling to reduce dimensions, as seen in Figure 2. We employ the same methodology to extract scene features from the Places network. I3D models are used to extract motion features from snippet frames, and the motion features are then reduced with average pooling before a fully connected layer is applied. The BERT model and SoundNet are utilized in a similar manner as the I3D model for extracting features from the text and sound modalities, respectively. In our study, we investigate how diverse modalities, such as visual, sound, and text, contribute to the classification of viewers' emotional states. We use the network with fully connected layers for this analysis. We maintain a consistent architecture throughout the experiments, while changing the input features based on the specific modality being considered. We examine diverse combinations of modalities, including audio, text, RGB frame features, Places features, and I3D features. Each modality is treated as a separate input to the architecture, allowing us to examine their individual contributions to the overall emotion recognition task. By systematically evaluating the performance of the model with different input features, we gain insights into the relative importance and effectiveness of each modality in capturing and predicting emotions. As shown in Table 1, text features provide a classification accuracy for arousal emotion labels that is higher than that of the other modalities (image and motion). The sound modality achieves high accuracy on valence emotion labels for the experienced emotion annotation. According to the results presented in Table 2, the text modality obtains higher accuracy when predicting the intended arousal emotion label, and the sound modality provides better accuracy in predicting the intended valence emotion label. The observed results, where text features perform well in anticipating intended arousal emotion labels and audio features perform better in predicting intended valence emotion labels, suggest that the influence of different modalities on emotions may vary. It is possible that text features, such as the semantic content derived from subtitles, play a more important role in conveying information related to arousal emotions. On the other hand, the audio features used, which capture the acoustic characteristics of the sound, may be more effective in capturing signals related to valence emotions.
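A minimal PyTorch sketch of the fusion head and training configuration described above (per-modality fully connected layers, concatenation, two further fully connected layers, SGD with learning rate 0.005 and weight decay 0.005, and a temperature of T = 2 applied before the softmax); the feature dimensionalities and hidden sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Per-modality fully connected layers, concatenation, then two further
    fully connected layers producing logits for the 7 emotion classes."""
    def __init__(self, dims, hidden=256, num_classes=7):
        super().__init__()
        self.branches = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
            for name, d in dims.items()
        })
        self.classifier = nn.Sequential(
            nn.Linear(hidden * len(dims), hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, feats):
        # Concatenate branch outputs in a fixed order (the order of `dims`).
        fused = torch.cat([branch(feats[name])
                           for name, branch in self.branches.items()], dim=-1)
        return self.classifier(fused)

# Assumed feature dimensionalities for the three branches.
model = FusionHead({"visual": 2048, "sound": 1024, "text": 768})
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, weight_decay=0.005)
criterion = nn.CrossEntropyLoss()
T = 2.0  # softmax temperature applied to the logits

def training_step(feats, labels):
    optimizer.zero_grad()
    loss = criterion(model(feats) / T, labels)  # temperature-scaled softmax via CE
    loss.backward()
    optimizer.step()
    return loss.item()
```

In line with the experiments above, only the set of input branches changes between modality combinations while the rest of the architecture is kept fixed.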
We compare our approach with the state-of-the-art methods of [23, 22] for emotion prediction in Tables 1 and 2. ## 4 Conclusion In this paper, we propose a novel deep convolutional network that leverages multimodal inputs, including sound, vision, and text, to learn representations for emotion identification. Emotion classification is evaluated using both experienced and intended emotion annotations from the extended COGNIMUSE dataset. We train multiple model components and evaluate their performance when using diverse input modalities and their combinations. The results of our experiments demonstrate significant improvements on both the experienced and intended emotion annotations, highlighting the efficacy of our approach in enhancing emotion recognition using multimodal information.
2310.02075
Learning Quantum Processes with Quantum Statistical Queries
Learning complex quantum processes is a central challenge in many areas of quantum computing and quantum machine learning, with applications in quantum benchmarking, cryptanalysis, and variational quantum algorithms. This paper introduces the first learning framework for studying quantum process learning within the Quantum Statistical Query (QSQ) model, providing the first formal definition of statistical queries to quantum processes (QPSQs). The framework allows us to propose an efficient QPSQ learner for arbitrary quantum processes accompanied by a provable performance guarantee. We also provide numerical simulations to demonstrate the efficacy of this algorithm. In our new framework, we prove exponential query complexity lower bounds for learning unitary 2-designs, and a doubly exponential lower bound for learning haar-random unitaries. The practical relevance of this framework is exemplified through application in cryptography, highlighting vulnerabilities of a large class of Classical-Readout Quantum Physical Unclonable Functions (CR-QPUFs), addressing an important open question in the field of quantum hardware security. This work marks a significant step towards understanding the learnability of quantum processes and shedding light on their security implications.
Chirag Wadhwa, Mina Doosti
2023-10-03T14:15:20Z
http://arxiv.org/abs/2310.02075v3
# Learning Quantum Processes with Quantum Statistical Queries ###### Abstract Learning complex quantum processes is a central challenge in many areas of quantum computing and quantum machine learning, with applications in quantum benchmarking, cryptanalysis, and variational quantum algorithms. This paper introduces the first learning framework for studying quantum process learning within the Quantum Statistical Query (QSQ) model, providing the first formal definition of statistical queries to quantum processes (QPSQs). The framework allows us to propose an efficient QPSQ learner for arbitrary quantum processes accompanied by a provable performance guarantee. We also provide numerical simulations to demonstrate the efficacy of this algorithm. The practical relevance of this framework is exemplified through application in cryptanalysis, highlighting vulnerabilities of Classical-Readout Quantum Physical Unclonable Functions (CR-QPUFs), addressing an important open question in the field of quantum hardware security. This work marks a significant step towards understanding the learnability of quantum processes and shedding light on their security implications. ## 1 Introduction In recent years, the field of machine learning (ML) has experienced remarkable growth, reshaping the landscape of artificial intelligence. This transformation has left an indelible mark on natural language processing, image processing, optimization, and numerous other scientific and engineering disciplines. It has also given birth to popular and captivating tools like ChatGPT and AlphaGo. Currently, another field of research that has flourished over the past decades alongside ML is the field of Quantum computing. Quantum computing provides a fundamentally different model of computation and information processing that has given rise to creative algorithms that use these special features to provide speedups for certain computational tasks [44, 78, 2, 70], and it has unveiled a multitude of concrete and potential applications across diverse domains, including simulating complex physical systems [12], cryptanalysis [78, 17], linear optimizations [34], chemistry [57, 19] and many others. Quite naturally, the convergence of ML and quantum computing has garnered considerable attention for research in the past few years. This intersection has piqued research interest due to the presence of intriguing theoretical challenges and the practical applicability of both fields. In the realm of Quantum Machine Learning (QML), a multitude of quantum algorithms have emerged in the quest to leverage quantum machines for practical ML tasks [76, 13, 43, 83, 59, 72, 26, 4]. However, achieving a profound understanding of the foundational principles underpinning these methods and techniques remains an ongoing pursuit, perhaps not surprisingly, as the same complexity applies to both of the pillars of QML: machine learning and quantum computing. Nonetheless, there have been efforts to shine a light on the power of quantum in performing learning tasks [50, 51, 43, 45, 76, 56]. Quantum learning theory constitutes the theoretical framework to study the advantages and limitations of quantum machine learning in the broad sense of it. The main goal is to quantify learning problems as well as design and analyse algorithms to solve them. In this context of _computational learning_, the _learner_ is a classical, quantum or hybrid algorithm, trying to learn an object (functions, quantum states, quantum processes, distributions, etc.) 
from a concept class, by having access to some examples of it. Among the different learning models in the classical learning theory literature, some have been extended to quantum and studied under the umbrella of quantum learning theory such as _Quantum PAC learning_[18, 8], _Quantum Statistical Query_ model [9, 10] and _Quantum agnostic learning_ model [8]. We refer to the following survey for an overview of the results [7]. However more broadly, this framework, in essence, encompasses a wide array of quantum learning results, ranging from learning functions encoded within quantum states [18, 11, 38], quantum state tomography [14, 66, 86], shadow tomography [48, 1] and learning diverse classes of probability distributions [24, 11, 62], to process tomography [61]. While most of the efforts in quantum learning theory have been focused on quantum states (both as examples and target objects), here in this work we shift the focus to quantum processes. Learning quantum processes is a fundamental problem that arises in many areas in physics [84, 39] and quantum computing, such as quantum benchmarking [77, 58, 47, 15], noise characterisation [42], error mitigation [79, 71], and variational quantum algorithms [76]. Furthermore, with the crucial role of quantum computing in cryptography, another area in which the problem of learning quantum processes, particularly unitaries, arises is cryptanalysis. In these scenarios, the quantum process of interest can manifest as a quantum oracle, providing a quantum implementation of a classical function [16, 52, 75, 23, 38], or as a physical device or hardware component disclosing an unknown underlying unitary, which serves as a cryptographic key or fingerprint. [6, 68, 69]. The primary challenge in learning complex quantum processes lies in its often resource-intensive nature, rendering conventional techniques such as process tomography [61], impractical, especially for the current and near-future NISQ (Noisy Intermediate-Scale Quantum) devices. Recent endeavours have explored diverse techniques and drawn inspiration from machine learning, classical learning theory, and classical shadow methods to devise algorithms and approaches for efficiently tackling specific instances of this challenge [46, 63, 33, 77, 58, 47, 25]. However, to the best of our knowledge, no unified framework in the discipline of quantum learning theory has been established in order to formally study the learnability of quantum processes. In this work, we provide a robust framework for the study of quantum processes in a specific learning model, namely the _Quantum Statistical Query (QSQ)_ model, for the first time. In a statistical query model (quantum or classical) the learner constructs a hypothesis not by accessing a sequence of labelled examples themselves, but instead by adaptively querying an oracle that provides an _estimate_ of the statistical properties of the labelled examples. In the quantum case, this statistical estimation is in fact the estimated expectation values of some observable, over multiple copies of a quantum state \(\rho\), by performing efficiently implementable noisy measurements on them. This ex tension to the quantum world comes quite naturally as this is often the useful information extracted from a quantum system or quantum states: by measuring the quantum system several times and estimating the expectation value of an observable, which corresponds to a physical quantity. 
This natural correspondence to the physics of the quantum experiment and the learning tasks designed based on them marks our main motivation for the choice of this model. The feasibility of this model in practice, as compared to quantum PAC learning, makes it a good candidate for studying learning algorithms in the NISQ era and its limitations. Aside from being physically well-motivated, the model however weaker than PAC learning, is rich and interesting to study many learning problems, as it has also been a framework to show separation between quantum and classical learners. It has been shown that several concept classes are efficiently learnable in the QSQ model, whereas they require exponentially many samples with classical statistical queries [9]. Quantum statistical queries also found applications in the classical verification of quantum learning [21] and quantum error mitigation [71, 10]. Establishing our framework and definitions for learning quantum processes within the quantum statistical query model enables us to develop efficient learning algorithms for learning general quantum processes under certain distribution of states and conditions on observable. Here, our statistical query learner comes equipped with provable performance guarantees. Furthermore, we explore the practical applications of our model and learning algorithm, venturing into the domain of cryptanalysis. In this domain, our results shed light on the security vulnerabilities of a class of quantum physical unclonable functions, filling a significant gap in the study of these primitives and the protocols employing them. ### Overview of the main results We provide a list of the main contributions of this paper. * **Framework for learning quantum processes in Quantum Statistical Query model (QPSQ):** We provide a formal definition that captures what it means to learn a quantum process using statistical queries. Specifically, we define the quantum statistical query oracle QPStat for a quantum process \(\mathcal{E}\) that produces a statistical estimate of the expectation value of an observable \(O\), after the application of the quantum process on an input state \(\rho\). We show how our oracle generalises the previously defined QStat oracles for states. * **Efficient QPSQ learner for learning arbitrary quantum processes:** We provide an algorithm that can efficiently learn arbitrary processes (under certain conditions on the observable and distribution) in the QPSQ model and hence we provide the first QPSQ learner within our framework. Our algorithm is inspired by the process-learning algorithm of Huang et al. in [46], which itself uses ideas from classical shadow tomography [48]. However, our algorithm, defined within the QPSQ model is applicable more generally and easily to a wider range of scenarios (as shown later in the application section) and provides a more physically-motivated version of the algorithm using similar classical ML methods. We also the query complexity of the algorithm as its provable performance guarantee. * **Numerical simulations of our learning algorithm:** We demonstrate the performance of our proposed algorithm through numerical simulations. We also provide methods for generating valid and simple synthetic statistical queries for our simulations. * **Application of the QPSQ framework and learner in cryptography:** We explore the applications of our model and learning algorithm in cryptanalysis. 
In this domain, we focus on a primitive from quantum hardware security, namely _Classical-Readout of Quantum PUF or CR-QPUF_. What makes this primitive a perfect case study for our model is the fact that its main security property relies on the assumption of the inherent difficulty of learning its quantum processes. This assumption has been shown to be violated for a very limited subclass of them with simple underlying unitary circuits using heuristic classical machine learning attacks [69]. However, by employing our efficient learners, we give compelling proof that CR-QPUFs with even highly complex underlying quantum circuits of exponential depth can also be learned in the QPSQ model. Notably, our results show the vulnerability of a large family of protocols relying on CR-QPUFs. ### Related works The quantum statistical query model was first introduced in [9], where the authors also show that parity functions, \(k\)-juntas and polynomial-sized DNF formulas are efficiently learnable in the QSQ model, in contrast to the classical setting where these problems are provably hard. They also introduce the notion of _private PAC learning_, related to differential privacy, and show its relationship to the QSQ model. The relation between quantum statistical queries and differential privacy has been further investigated in [5], studying in particular local differential privacy and QSQ. Recently, [10] has delved further into the QSQ model and the role of entanglement in learning in such models, providing QSQ lower bounds on learning quantum states. The paper also compares QSQ to noisy quantum PAC learning, establishing an exponential separation between the two models. An efficient algorithm for learning to predict the outputs of quantum processes was introduced in [46], which uses classical shadows [48] as a subroutine. The efficiency guarantee of the algorithm is given for any quantum process under a distribution of input states that is invariant under local Clifford transformations, and with an observable that satisfies certain efficiency and locality conditions. Other approaches for learning unknown quantum processes have been suggested, such as [33], which focuses on learning quantum processes without input control, or [20], which provides methods based on the Pauli transfer matrix for learning quantum processes and Hamiltonians, emphasising the importance of quantum memory. ### Organisation of the paper We provide all the necessary background and notation that we use in the paper in Section 2. In Section 3 we introduce our quantum process learning with quantum statistical query (QPSQ) model and we provide the formal definitions and relations to other definitions. In Section 4 we provide our learning algorithm for learning arbitrary processes. We provide our numerical simulations in Section 5 and finally, the applications are discussed in Section 6. ## 2 Preliminaries We start by introducing the notation we use in the paper as well as the essential background. ### Quantum Information We include some basic definitions of quantum computation and information in this section. For more details, we refer the reader to [64]. We will denote the \(d\times d\) identity matrix as \(I_{d}\) and we may omit the index \(d\) when the dimension is clear from the context. We use the bra-ket notation, where we denote a vector \(v\in\mathbb{C}^{N}\) using the ket notation \(\ket{v}\) and its adjoint using the bra notation \(\bra{v}\).
For \(u,v\in\mathbb{C}^{n}\), we will denote by \(\bra{u}v\) the standard Hermitian inner product \(u^{\dagger}v\). A quantum (pure) state is a normalized vector \(\ket{v}\), i.e. \(\ket{\bra{v}}\ket{=1}\). We will write \(\mathcal{M}_{N,N}\) to denote the set of linear operators from \(\mathbb{C}^{N}\) to \(\mathbb{C}^{N}\) and we define the set of quantum states as \(\mathcal{S}_{N}:=\{\rho\in\mathcal{M}_{N,N}:\rho\geq 0,\mathrm{Tr}[\rho]=1\}\). We denote by \(\mathcal{U}_{N}\) the set of \(N\)-dimensional unitary operators, \[\mathcal{U}_{N}:=\left\{U\in\mathcal{M}_{N,N}:UU^{\dagger}=U^{\dagger}U=I \right\}. \tag{1}\] We will now introduce a useful orthonormal basis for \(\mathcal{M}_{N,N}\) which is widely used in quantum information. **Definition 1** (Pauli operators).: The set of Pauli operators is given by \[X=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},Y=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix} \tag{2}\] The set \(\mathcal{P}_{1}=\{I,X,Y,Z\}\) forms an orthonormal basis for \(\mathcal{M}_{2,2}\) with respect to the Hilbert-Schmidt inner product. The tensor products of Pauli operators and the identity, i.e. the operators of the form \(P\in\{I,X,Y,Z\}^{\otimes n}:=\mathcal{P}_{n}\), are usually referred as _stabilizer operators_ or _Pauli strings_ over \(n\) qubits. **Definition 2** (Stabilizer states).: The eigenstates of the Pauli matrices are of special interest. We define this set as : \[stab_{1}=\{\ket{0},\ket{1},\ket{+},\ket{-},\ket{+y},\ket{-y}\}, \tag{3}\] where \(\ket{0}\) & \(\ket{1}\) are the eigenstates of \(Z\), \(\ket{+}\) & \(\ket{-}\) are the eigenstates of \(X\) and \(\ket{+y}\) & \(\ket{-y}\) are the eigenstates of \(Y\). **Definition 3** (Clifford group).: The Clifford group is the group of unitaries generated by the following 3 gates : \[H=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix},S=\begin{pmatrix}1&0\\ 0&i\end{pmatrix},CNOT=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix} \tag{4}\] We now define the Haar measure \(\mu_{N}\), which can be thought as the uniform probability distribution over all quantum states or over all unitary operators in the Hilbert space of dimension \(N\). For a comprehensive introduction to the Haar measure and its properties, we refer to [60]. **Definition 4** (Haar measure).: The Haar measure on the unitary group \(U(N)\) is the unique probability measure \(\mu_{N}\) that is both left and right invariant over the set \(\mathcal{U}_{N}\), i.e., for all integrable functions \(f\) and for all \(V\in\mathcal{U}_{N}\), we have: \[\int_{U(N)}f(U)d\mu_{N}(U)=\int_{U(N)}f(UV)d\mu_{N}(U)=\int_{U(N)}f(VU)d\mu_{N}( U). \tag{5}\] Given a state \(\ket{\phi}\in\mathbb{C}^{N}\), we denote the \(k\)-th moment of a Haar random state as \[\mathbb{E}_{\ket{\psi}\sim\mu_{N}}\left[\ket{\psi}\bra{\psi}^{\otimes k} \right]:=\mathbb{E}_{U\sim\mu_{N}}\left[U^{\otimes k}\ket{\phi}\bra{\phi}^{ \otimes k}U^{\dagger\otimes k}\right]. \tag{6}\] Note that the right invariance of the Haar measure implies that the definition of \(\mathbb{E}_{\ket{\psi}\sim\mu_{N}}\left[\ket{\psi}\bra{\psi}^{\otimes k}\right]\) does not depend on the choice of \(\ket{\phi}\). **Definition 5** (Quantum process).: A map \(\mathcal{E}\) from one quantum state to another is said to be completely positive if for any positive operator \(A,\mathcal{E}(A)\) is also a positive operator. \(\mathcal{E}\) is said to be trace-preserving if for any input density operator \(\rho,\textit{tr}(\mathcal{E}(\rho))=1=\textit{tr}(\rho)\). 
A quantum process \(\mathcal{E}\) is defined as a Completely Positive Trace-Preserving (CPTP) map from one quantum state to another. For a unitary \(U\), the associated map is: \[\mathcal{E}:\rho\to U\rho U^{\dagger} \tag{7}\] **Definition 6** (Povm).: A _Positive Operator-Valued Measure_ (POVM) is a quantum measurement described by a collection of positive operators \(\{E_{m}\}_{m}\), such that \(\sum_{m}E_{m}=I\) and the probability of obtaining measurement outcome \(m\) on a state \(\rho\) is given by \(p(m)=\textit{Tr}(E_{m}\rho)\). ### Classical Shadow Tomography Classical shadow tomography is the technique of using randomized measurements to learn many properties of quantum states [32, 48, 51]. It has recently been shown that classical shadow tomography can be used to predict the outcomes of arbitrary quantum processes [46]. In this section, we include relevant results on classical shadow tomography that will be used in the rest of the work. **Definition 7** (Randomized Pauli Measurement).: Given \(n>0\). A randomized Pauli measurement on an \(n\)-qubit state is given by a \(6^{n}\)-outcome POVM \[\mathcal{F}^{Pauli}\triangleq\left\{\frac{1}{3^{n}}\bigotimes_{i=1}^{n}\ket{ s_{i}}\bra{s_{i}}\right\}_{s_{1},\ldots,s_{n}\in\textit{stab}_{1}} \tag{8}\] which corresponds to measuring every qubit under a random Pauli basis \((X,Y,Z)\). The outcome of \(\mathcal{F}^{Pauli}\) is an \(n\)-qubit state \(\ket{\psi}=\bigotimes_{i=1}^{n}\ket{s_{i}}\), where \(\ket{s_{i}}\in\textit{stab}_{1}\) is a single-qubit stabilizer state. Next, we define classical shadows based on randomized Pauli measurements. Other measurements can also be used to define classical shadows. **Definition 8** (Classical shadow of a quantum state).: Given \(n,N>0\). Consider an \(n\)-qubit state \(\rho\). A size \(N\) classical shadow of \(S_{N}(\rho)\) of quantum state \(\rho\) is a random set given by \[S_{N}(\rho)\triangleq\left\{\ket{\psi_{l}}\right\}_{l=1}^{N}, \tag{9}\] where \(|\psi_{l}\rangle=\bigotimes_{i=1}^{n}|s_{l,i}\rangle\) is the outcome of the \(l\)-th randomized Pauli measurement on a single copy of \(\rho\). **Definition 9** (Classical Shadow Approximation of a quantum state).: Given the classical shadow \(S_{N}(\rho)\) of an \(n\)-qubit state \(\rho\). We can approximate \(\rho\) via \[\sigma_{N}(\rho)=\frac{1}{N}\sum_{l=1}^{N}\bigotimes_{i=1}^{n}(3|s_{l,i} \rangle\langle s_{l,i}|-I) \tag{10}\] **Definition 10** (Classical shadow of a quantum process).: Given an \(n\)-qubit CPTP map \(\mathcal{E}\). A size-\(N\) classical shadow \(S_{N}(\mathcal{E})\) of the quantum process \(\mathcal{E}\) is a random set given by \[S_{N}(\mathcal{E})\triangleq\Big{\{}|\psi_{l}^{(in)}\rangle,|\psi_{l}^{(out)} \rangle\Big{\}}_{l=1}^{N} \tag{11}\] where \(|\psi_{l}^{(in)}\rangle=\bigotimes_{i=1}^{n}|s_{l,i}^{(in)}\rangle\) is a random input state with \(|s_{l,i}^{(in)}\rangle\in\mathit{stab}_{1}\) sampled uniformly at random, and \(|\psi_{l}^{(out)}\rangle=\bigotimes_{i=1}^{n}|s_{l,i}^{(out)}\rangle\) is the outcome of performing a random Pauli measurement on \(\mathcal{E}(|\psi_{l}^{(in)}\rangle\langle\psi_{l}^{(in)}|)\). The authors in [46] recently proposed a machine learning algorithm that is able to learn the average output behaviour of any quantum process, under some restrictions. In the learning phase, the algorithm works with the classical shadow of a generic quantum process \(\mathcal{E}\) and a set of observables \(\{O_{i}\}\). 
In the prediction phase, the algorithm receives an input quantum state \(\rho\) sampled from the target distribution \(D\), and aims to predict \(\mathit{tr}(O_{i}\mathcal{E}(\rho))\) for all observables in the set. The algorithm comes with a rigorous performance guarantee on the average prediction error over \(D\), achieved with efficient time and sample complexity with respect to the number of qubits and error parameters. While the guarantee holds for any quantum process, there are certain restrictions on the observables \(\{O_{i}\}\) and the distribution \(D\). For the detailed restrictions and the exact complexity, we refer to [46]. ### Computational Learning Theory Computational learning theory studies what it means to _learn a function_. One of the most successful formal learning frameworks is undoubtedly the model of _Probably Approximately Correct_ (PAC) learning, which was introduced in [82]. In this model, we consider a class of target Boolean functions \(\mathcal{C}\subseteq\{f|f:\{0,1\}^{n}\rightarrow\{0,1\}\}\), usually called the _concept class_. For an arbitrary concept \(c\in\mathcal{C}\), a PAC learner receives samples of the form \(\{x,c(x)\}\), where, in general, \(x\) is sampled from an unknown probability distribution \(D:\{0,1\}^{n}\rightarrow[0,1]\). In the setting of noisy PAC learning, the bit \(c(x)\) of each sample may independently be incorrect with some probability. The learner aims to output, with high probability, a hypothesis function \(h\) with low error on average over the distribution \(D\). Another learning model of interest was introduced in [53] - the Statistical Query (SQ) model. In this model, a learner is more restricted in the way it can interact with the data. Rather than learning from potentially noisy samples directly, the algorithm learns using the statistical properties of the data, making it more robust to noise. In particular, an SQ learner receives as input estimates of the expectation values of some chosen functions within specified error tolerance. Quantum generalizations of both PAC and SQ learning have already been introduced and studied widely. The Quantum PAC (QPAC) model was introduced in [18], where the learner has access to a quantum computer and receives _quantum example states_ as input. The quantum example state for a concept \(c\) over \(n\) input bits with the target distribution \(D\) is the \(n+1\)-qubit state \(|\psi_{c}\rangle=\sum_{x}\sqrt{D(x)}|x,c(x)\rangle\). It has been shown in [8] that the sample complexity of quantum and classical PAC learning is the same. However, over a fixed uniform distribution, learning with quantum queries can provide exponential advantage over the classical learner [11, 38]. A quantum analogue of statistical queries was introduced in [9]. Here, the statistical query returns an approximation of the expectation value for an input measurement observable on quantum examples of the concept class to be learned. We include the quantum statistical query oracle defined in [9] below: **Definition 11** (Qstat, from [9]).: Let \(\mathcal{C}\subseteq\{c:\{0,1\}^{n}\rightarrow\{0,1\}\}\) be a concept class and \(D:\{0,1\}^{n}\rightarrow\{0,1\}\) be a distribution. 
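To make Definitions 7-9 concrete, the following minimal NumPy sketch simulates the single-qubit case: randomized Pauli measurements on a known state, followed by averaging the snapshots \(3|s\rangle\langle s|-I\), which converges to \(\rho\). It is an illustration under these assumptions, not the implementation used in [46] or [48].

```python
import numpy as np

# Single-qubit Pauli eigenbases; the six vectors are exactly the stabilizer
# states of Definition 2: |+>, |->, |+y>, |-y>, |0>, |1>.
BASES = {
    "X": [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)],
    "Y": [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)],
    "Z": [np.array([1, 0]), np.array([0, 1])],
}

def randomized_pauli_measurement(rho, rng):
    """Definition 7 for one qubit: pick X, Y or Z uniformly at random and
    sample an eigenstate according to the Born rule."""
    basis = BASES[rng.choice(list(BASES))]
    probs = np.array([np.real(v.conj() @ rho @ v) for v in basis])
    return basis[rng.choice(2, p=probs / probs.sum())]

def shadow_estimate(rho, num_snapshots, seed=0):
    """Definition 9 for one qubit: average the snapshots 3|s><s| - I."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((2, 2), dtype=complex)
    for _ in range(num_snapshots):
        s = randomized_pauli_measurement(rho, rng)
        acc += 3 * np.outer(s, s.conj()) - np.eye(2)
    return acc / num_snapshots

plus = np.array([1, 1]) / np.sqrt(2)             # |+>
rho = np.outer(plus, plus.conj())
print(np.round(shadow_estimate(rho, 20000), 3))  # approaches rho as N grows
```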
A quantum statistical query oracle \(\mathsf{Qstat}(O,\tau)\) for some \(c^{*}\in\mathcal{C}\) receives as inputs \(O,\tau\), where \(\tau\geq 0\) and \(O\in(\mathbb{C}^{2})^{\otimes n+1}\times(\mathbb{C}^{2})^{\otimes n+1},||O||\leq 1\), and returns a number \(\alpha\) satisfying \[|\alpha-\langle\psi_{c^{*}}|O|\psi_{c^{*}}\rangle|\leq\tau\] where \(|\psi_{c^{*}}\rangle=\sum_{x\in\{0,1\}^{n}}\sqrt{D(x)}|x,c^{*}(x)\rangle\). Note that in this QSQ model, while the learner can obtain an estimate of any measurement on the quantum examples, it is restricted only to classical computation. Interestingly, several concept classes such as parities, juntas, and DNF formulae are efficiently learnable in the QSQ model, whereas the classical statistical query model necessitates an exponentially larger number of samples. Additionally, the authors of [10] have established an exponential gap between QSQ learning and learning with quantum examples in the presence of classification noise. Quantum statistical queries have also found practical applications in classical verification of quantum learning, as detailed in [21]. Furthermore, they have been employed in the analysis of quantum error mitigation models [71, 10] and quantum neural networks [31]. Alternative variations of quantum statistical queries have also been explored in [45, 37, 65]. ## 3 Quantum Statistical Queries to Quantum Processes In this section, we propose a framework and definition for learning quantum processes through quantum statistical queries and discuss its importance and relevance to different problems. Previously studied quantum statistical queries (Definition 11) have considered queries to quantum examples associated with some classical function [9]. While this model is the first generalisation of statistical query learning into the quantum setting and this type of query is useful for the problem of learning certain classes of classical functions, they do not encompass the quantum processes, and as such a new framework is needed for studying the learnability of quantum processes through quantum statistical queries. Learning quantum processes from (often limited amount of) data is a crucial problem in physics and many areas of quantum computing such as error characterisation and error mitigation [79, 55, 61, 42, 50]. In many realistic and near-term scenarios, the only accessible data of the quantum process is through measured outcomes of such quantum channels, which are in fact nothing but statistical queries to such quantum processes. Hence studying the quantum process learnability via statistical queries is well motivated practically from the nature of quantum experiments and measurements. In what follows, we first give a formal definition for this model and then we also clarify the relationship between our definition and the previous definition of QSQ by showing that it encompasses the latter. **Definition 12** (Statistical Query to a Quantum Process (QPSQ)).: Let \(\mathcal{E}:\mathbb{C}^{d}\rightarrow\mathbb{C}^{d}\) be a quantum process acting on a \(d\)-dimensional Hilbert space. 
A QPSQ learning algorithm has access to a quantum statistical query oracle \(\mathsf{QPSStat}\) of the process \(\mathcal{E}\), which receives as input an observable \(O\in\mathbb{C}^{d}\times\mathbb{C}^{d}\) satisfying \(\left\|O\right\|\leq 1\), a tolerance parameter \(\tau\geq 0\), and using \(poly(1/\tau)\) copies of a quantum state \(\rho\in\mathcal{S}_{d}\), outputs a number \(\alpha\) satisfying \[\left|\alpha-\mathrm{Tr}\big{(}O\mathcal{E}(\rho)\big{)}\right|\leq\tau \tag{12}\] We denote the query as \(\alpha\leftarrow\mathsf{QPSStat}_{\mathcal{E}}(\rho,O,\tau)\). The output \(\alpha\) acts as an estimate of the expectation value of \(O\) on the state \(\rho\) after evolution under \(\mathcal{E}\) within absolute error \(\tau\). Our definition of QPSQs is justified in the setting where a learner has black-box access to a quantum process, with the ability to query the process with any quantum state. We note that our definition does not specify how the copies of the target quantum state \(\rho\) are being provided. The algorithm can provide multiple copies of the quantum state or can send the classical description of the quantum state to the oracle where they can be locally prepared, depending on the scenario and application. Obtaining the QPSQs for a such general class of quantum operations is then achieved by the most natural operation, which is, after evolution under this process, performing a measurement chosen by the learner. Finally, an estimate of the expectation value is returned to the learner. Again since most physical properties of a quantum system are extracted through such interactions with its associated quantum channel, the application of this model in physics is straightforward. However, we will show that this model can be used in various scenarios including quantum cryptanalysis and learning quantumly-encoded classical functions. As a practical use-case for cryptanalysis, in section 6 we demonstrate how a QPSQ learning algorithm performs as a learning attack on authentication using CR-QPUFs. We now discuss what it means for a QPSQ learning algorithm to be _efficient_. Any such learning algorithm aiming to learn some property of the process \(\mathcal{E}\) must be a quantum polynomial time (\(QPT\)) algorithm, making at most polynomially many queries to the oracle \(\mathsf{QPSStat}\). As the learner itself must provide the input states or their classical description to \(\mathsf{QPSStat}\), it must be possible to efficiently prepare the required copies of these states. This is an important point for the physical justification of this model. If the learner was able to query \(\mathsf{QPSStat}\) with arbitrary quantum states that require exponential quantum computation for preparation, the learning model would no longer be physically justified. In the following definition for an _efficient_ QPSQ learning algorithm, we only discuss the efficiency of the algorithm, not its correctness, allowing various notions of correctness depending on the desired property of \(\mathcal{E}\) to be learned up for consideration. **Definition 13** (Efficient QPSQ learner).: A QPSQ learning algorithm is called an _efficient QPSQ learner_ if and only if it makes at most \(\mathrm{poly}(\log(d))\) queries with tolerance at least \(1/\mathrm{poly}(\log(d)\) to the\(\mathsf{QPSStat}_{\mathcal{E}}\) oracle, and runs in \(\mathrm{poly}(\log(d))\) time. After formally defining our QPSQ model, we now talk about the relationship between our proposed model and other SQ models. 
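Before relating this oracle to earlier statistical query oracles, the following is a minimal NumPy sketch of how a QPStat response could be emulated for a single qubit: the expectation value is estimated by averaging simulated measurement outcomes over \(\mathcal{O}(\log(1/\delta)/\tau^{2})\) copies, which is one concrete way of realizing the "poly\((1/\tau)\) copies" of Definition 12. The unitary channel, the Hoeffding-style shot count, and the example observable are assumptions for illustration, not a prescribed implementation of the oracle.

```python
import numpy as np

def emulated_qpstat(rho, unitary, observable, tau, delta=0.05, seed=0):
    """Hypothetical single-qubit emulation of QPStat_E(rho, O, tau) for a
    unitary channel E: rho -> U rho U^dagger and an observable with
    eigenvalues in [-1, 1]: estimate Tr(O E(rho)) by averaging simulated
    measurement outcomes over O(log(1/delta)/tau^2) copies of rho."""
    rng = np.random.default_rng(seed)
    out = unitary @ rho @ unitary.conj().T
    # Measure in the eigenbasis of the observable (Born rule).
    eigvals, eigvecs = np.linalg.eigh(observable)
    probs = np.real(np.einsum("ij,jk,ki->i", eigvecs.conj().T, out, eigvecs))
    probs = np.clip(probs, 0.0, None)
    probs = probs / probs.sum()
    # Hoeffding: 2*ln(2/delta)/tau^2 shots keep the estimate within tau w.h.p.
    shots = int(np.ceil(2 * np.log(2 / delta) / tau**2))
    return rng.choice(eigvals, size=shots, p=probs).mean()

# Example: Hadamard channel applied to |0><0|, observable Z (true value 0).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
rho0 = np.diag([1.0, 0.0])
print(emulated_qpstat(rho0, H, Z, tau=0.05))
```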
It is easy to see that the \(\mathsf{QPSStat}\) oracle in Definition 12 generalizes the Qstat oracle from Definition 11. We start by considering the unitary \(U_{c}\) associated with a Boolean function \(c:\{0,1\}^{n}\rightarrow\{0,1\}\), \(U_{c}|x,y\rangle=|x,y\oplus c(x)\rangle,\;\forall x\in\{0,1\}^{n},y\in\{0,1\}\), and the quantum process \(\mathcal{E}_{c}:\rho\to U_{c}\rho U_{c}^{\dagger}\). Let \(|\psi_{D}\rangle=\sum_{x\in\{0,1\}^{n}}\sqrt{D(x)}|x,0\rangle\) be a superposition state associated with a distribution \(D\) over \(\{0,1\}^{n}\). Clearly, \(\mathcal{E}_{c}(|\psi_{D}\rangle\langle\psi_{D}|)=|\psi_{c}\rangle\langle\psi_ {c}|\). Thus, for any observable \(O\), we can see that \(\mathrm{Tr}\big{(}O\mathcal{E}_{c}(|\psi_{D}\rangle\langle\psi_{D}|)\big{)}= \langle\psi_{c}|O|\psi_{c}\rangle\), giving us the equivalence \[\mathsf{QPStat}_{\mathcal{E}_{c}}(|\psi_{D}\rangle\langle\psi_{D}|,O,\tau) \equiv\mathsf{Qstat}_{|\psi_{c}\rangle}(O,\tau) \tag{13}\] Along with the definition of \(\mathsf{Qstat}\), Arunachalam et. al.[9] presented algorithms for learning various concept classes in their QSQ model. The generalization of \(\mathsf{Qstat}\) by \(\mathsf{QPStat}\) implies that these algorithms also hold in our QPSQ learning model. Given \(\mathsf{QPStat}\) oracle access to the process \(\mathcal{E}_{c}\) associated with the target concept, and sufficient copies of the state \(|\psi_{D}\rangle\), the required output from \(\mathsf{Qstat}\) queries can be obtained using \(\mathsf{QPStat}\) queries, and the remainder of the learning algorithms proceed identically. Thus, any concept class efficiently learnable in the QSQ model can be learned efficiently given QPSQ access to the unitary encoding of the target function instead. ## 4 Learning Arbitrary Processes from QPSQs In this section, we introduce an algorithm for learning to predict the output of arbitrary quantum processes over a restricted class of observables. Our algorithm is very much inspired by the hybrid algorithm by Huang et al. in [46], which uses ideas from classical shadow tomography [48], to efficiently learn to predict the outcome of arbitrary quantum processes. In fact, our main observation was that the QPSQ model provides a more natural and general way of extracting data from a quantum process or a quantum experiment than the data extracted through the classical shadow queries, used in [46]. The classical machine learning part of our learning algorithm becomes very similar to the one in [46] with the difference being that instead of using the classical shadow approximation, we use the \(\mathsf{QPStat}\) output to estimate the measurement outcome. This allows us to prove a rigorous performance guarantee for our learning algorithm with a modified query complexity. We discuss the applications of this learning algorithm later in section 6. ``` for\(l=1\) to \(N\)do \(|\psi_{l}^{(in)}\rangle\leftarrow\bigotimes_{i=1}^{n}|s_{l,i}^{(in)}\rangle,|s _{l,i}^{(in)}\rangle\in stab_{1}\), chosen uniformly at random \(y_{l}(O)\leftarrow\mathsf{QPStat}_{\mathcal{E}}(|\psi_{l}^{(in)}\rangle \langle\psi_{l}^{(in)}|,O,\tau)\) endfor return\(S_{N}(\mathcal{E},O)=\{|\psi_{l}^{(in)}\rangle,y_{l}(O)\}_{l=1}^{N}\) ``` **Algorithm 1** Construct statistical query database for observable \(O\) We note that the dataset we construct is specific to the observable, while the dataset shown in [46] can be reused for any valid observable. Given the classical dataset, the learning and prediction methods are identical to those from [46]. 
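For concreteness, a small NumPy harness mirroring the data-collection step of Algorithm 1 could look as follows: it draws random stabilizer product input states and records the outputs of an emulated QPStat oracle (true expectation value plus bounded noise, in the spirit of the simulations of Section 5). The choice of channel, the noise model, and the dense state-vector representation are illustrative assumptions, not part of the algorithm itself.

```python
import numpy as np

# The six single-qubit stabilizer states of Definition 2, as state vectors.
STAB_1 = [np.array(v, dtype=complex) / np.linalg.norm(v)
          for v in ([1, 0], [0, 1], [1, 1], [1, -1], [1, 1j], [1, -1j])]

def random_stabilizer_product_state(n, rng):
    """Tensor product of n uniformly random single-qubit stabilizer states."""
    psi = np.array([1.0 + 0j])
    for _ in range(n):
        psi = np.kron(psi, STAB_1[rng.integers(len(STAB_1))])
    return psi

def emulated_qpstat(psi, unitary, observable, tau, rng):
    """Emulated oracle output: the true value Tr(O U|psi><psi|U^dagger) plus
    normal noise truncated to [-tau, tau], so the tolerance always holds."""
    phi = unitary @ psi
    true_value = np.real(phi.conj() @ observable @ phi)
    return true_value + np.clip(rng.normal(0.0, tau / 2), -tau, tau)

def build_dataset(n, N, unitary, observable, tau, seed=0):
    """Algorithm 1: N pairs of (random stabilizer product input, query output)."""
    rng = np.random.default_rng(seed)
    return [(psi, emulated_qpstat(psi, unitary, observable, tau, rng))
            for psi in (random_stabilizer_product_state(n, rng) for _ in range(N))]
```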
We include them here for completeness. ``` Learning: for all\(P\in\mathcal{P}_{n},|P|\leq k=\Theta(log(1/\epsilon))\)do \(x_{P}^{\prime\prime}(O)\leftarrow\frac{1}{N}\sum_{l=1}^{N}y_{l}(O)tr(P|\psi_{l}^ {in}\rangle\langle\psi_{l}^{in}|)\) if\((\frac{1}{3})^{P}>2\tilde{\epsilon}\) and \(|x_{P}^{\prime\prime}(O)|>2.3^{|P|/2}\sqrt{\tilde{\epsilon}}\sum_{Q:a_{Q}\neq 0}|a_{Q}|\)then \(\hat{\alpha}_{P}(O)\gets 3^{|P|}x_{P}^{\prime\prime}(O)\) else \(\hat{\alpha}_{P}(O)\gets 0\) endif endfor Prediction for target state \(\rho\): \(h(\rho)\leftarrow\sum_{P:|P|\leq k}\hat{\alpha}_{P}(O)tr(P\rho)\) return\(h(\rho)\) ``` **Algorithm 2** Learning to predict arbitrary quantum processes using statistical queries We have the following rigorous performance guarantee for Algorithm 2 - **Theorem 1** (Learning an unknown quantum process using statistical queries).: _Given \(n,\epsilon,\tilde{\epsilon},\delta,\tau>0\) and \(\tau<\tilde{\epsilon}\). Consider any unknown \(n\)-qubit quantum process \(\mathcal{E}\) and a known \(n\)-qubit observable \(O\) given as a sum of few-body ( \(\leq\kappa=\mathcal{O}(1)\)) observables, where each qubit is acted on by \(\mathcal{O}(1)\) of the few-body observables. Given a training dataset \(S_{N}(\mathcal{E},O)\) obtained by performing \(N\) statistical queries with tolerance \(\tau\) as specified in Algorithm 1, with_ \[N=\Omega\left(\frac{log(n^{k+\kappa}/\delta)}{(\tilde{\epsilon}-\tau)^{2}}\right) \tag{14}\] _With probability \(\geq 1-\delta\), Algorithm 2 learns a function \(h(\rho)\), such that for any distribution \(\mathcal{D}\) over \(n\)-qubit states invariant under single-qubit Clifford gates,_ \[\underset{\rho\sim\mathcal{D}}{\mathbb{E}}|h(\rho)-tr(O\mathcal{E}(\rho))|^{ 2}\leq\epsilon+\tilde{\epsilon}\max(\left\|O^{\prime}\right\|^{2},1), \tag{15}\] _where \(O^{\prime}\) is the low-degree truncation ( of degree \(k=\lceil log_{1.5}(1/\epsilon)\rceil\)) of the observable \(O\) after the Heisenberg evolution under \(\mathcal{E}\). The training and prediction time of \(h(\rho)\) are bounded above by \(\mathcal{O}(kn^{k}N)\)_ Proof.: Our method of using statistical queries instead of classical shadow data slightly modifies the number of queries needed in the guarantee from [46]. We only show the part of the proof that varies from theirs, and refer to their work for the remainder. Let \(\mathcal{D}^{0}\) be the uniform distribution over the tensor product of \(n\) single-qubit stabilizer states. Then, we define the coefficient \(x_{P}(O)\) for all Paulis \(P\) and observables \(O\). \[x_{P}(O)=\underset{\rho\sim\mathcal{D}^{0}}{\mathbb{E}}tr(P\rho)tr(O \mathcal{E}(\rho)) \tag{16}\] As shown in [46], these coefficients can be used to represent \(O^{\prime}\), the low-degree truncation (of degree \(k\)) of the observable \(O\) after Heisenberg evolution under \(\mathcal{E}\) as defined in the theorem. \[O^{\prime}=\sum_{P\in\{I,X,Y,Z\}^{\otimes n}:|P|\leq k}3^{|P|}x_{P}(O)P \tag{17}\] We now begin the proof by considering a Pauli observable \(Q\in\mathcal{P}_{n}\), with \(|Q|\leq\kappa=\mathcal{O}(1)\). Let \(\rho_{l}=|\psi_{l}^{in}\rangle\langle\psi_{l}^{in}|\). 
We define the random variables \(x_{P}^{\prime}(Q)\) and \(x_{P}^{\prime\prime}(Q)\) below: \[x_{P}^{\prime}(Q)=\frac{1}{N}\sum_{l=1}^{N}tr(P\rho_{l})tr(Q\mathcal{E}(\rho_{ l})) \tag{18}\] \[x_{P}^{\prime\prime}(Q)=\frac{1}{N}\sum_{l=1}^{N}tr(P\rho_{l})y_{l}(Q) \tag{19}\] where \(y_{l}(Q)\) is the output of the statistical query \(\mathsf{QPSatat}_{\mathcal{E}}(\rho_{l},Q,\tau)\), as defined in Algorithm 1. Consider the number of samples: \[N=\Omega\left(\frac{log(n^{k+\kappa}/\delta)}{(\tilde{\epsilon}-\tau)^{2}}\right) \tag{20}\] Since \(P,Q\) are Paulis, \[tr(P\rho_{l})tr(Q\mathcal{E}(\rho_{l}))\in[-1,1] \tag{21}\] Also, over the choice of sampled input states, \[x_{P}(Q)=\mathbb{E}[x_{P}^{\prime}(Q)] \tag{22}\] For \(\tau<\tilde{\epsilon}\), using Hoeffding's inequality, \[Pr(|x_{P}^{\prime}(Q)-x_{P}(Q)|>\tilde{\epsilon}-\tau) \leq 2exp\left(\frac{2N^{2}(\tilde{\epsilon}-\tau)^{2}}{\sum_{i=1 }^{N}(2)^{2}}\right)\] \[=2\ exp\left(\frac{-N(\tilde{\epsilon}-\tau)^{2}}{2}\right)\] \[\leq exp(-log(n^{k+\kappa}/\delta))\] \[=\frac{\delta}{n^{k+\kappa}}\] By taking the Union bound over all \(P,Q\in\mathcal{P}_{n},|P|\leq k,|Q|\leq\kappa\), we get \[\Pr\bigl{[}\forall P,Q\in\mathcal{P}_{n}:|x_{P}^{\prime}(Q)-x_{P}(Q)|<\tilde {\epsilon}-\tau\bigr{]}\geq 1-\delta \tag{23}\] Since \(tr(P\rho_{l})\in\{-1,0,1\}\), \[|tr(P\rho_{l})tr(Q\mathcal{E}(\rho_{l}))-tr(P\rho_{l})y_{l}(Q)|\leq\tau\] By averaging over \(l=1\) to \(N\), we get \[|x_{P}^{\prime\prime}(Q)-x_{P}^{\prime}(Q)|\leq\tau \tag{24}\] Using triangle inequality, we get with probability at least \(1-\delta\), \[|x_{P}^{\prime\prime}(Q)-x_{P}(Q)|\leq\tilde{\epsilon},\ \ \ \forall P,Q\in\mathcal{P}_{n},|P| \leq k,|Q|\leq\kappa \tag{25}\] Analogous to our definition of \(x_{P}^{\prime\prime}(Q)\), the proof in [46] begins by defining a variable \(\hat{x}_{P}(Q)\). Their proof uses a bound similar to equation 25 with \(\hat{x}_{P}(Q)\) instead. We refer to their work for the rest of the proof as it proceeds identically to theirs after this bound. By setting \(\tau\leq C\tilde{\epsilon}\) for some constant \(C<1\), we recover the same asymptotic complexity as [46]. Thus, using statistical queries with tolerance linear in one of the algorithm hyperparameters, we can perform the learning task with an efficient number of such queries and efficient computational time. In certain practical cases, the following additional assumption about the output \(\alpha\) of the oracle \(\mathsf{QPSstat}_{\tilde{\epsilon}}\) may hold true: \[\mathbb{E}[\alpha]=Tr(O\mathcal{E}(\rho_{in})) \tag{26}\] **Corollary 1**.: _Under the assumption on the output of \(\mathsf{QPSstat}\) shown in equation 26, the number of queries to \(\mathsf{QPSstat}\) of tolerance \(\tau\) to achieve the error bound of the rigorous guarantee of Theorem 1 is given by_ \[N=\Omega\left(\frac{\tau^{2}log(n^{k+\kappa}/\delta)}{\tilde{\epsilon}^{2}}\right) \tag{27}\] Proof.: Under this assumption, we can eliminate the intermediate quantity \(x_{P}^{\prime}(Q)\) in the previous proof, and directly apply Hoeffding's inequality to obtain the bound on \(|x_{P}^{\prime\prime}(Q)-x_{P}(Q)|\). 
Since \(Tr(P\rho_{l})\in\{-1,0,1\},\forall\rho_{l}\) \[tr(P\rho_{l})y_{l}(Q)\in[tr(P\rho_{l})tr(Q\mathcal{E}(\rho_{l}))-\tau,tr(P\rho_{l})tr(Q\mathcal{E}(\rho_{l}))+\tau],\ \forall l=1,2,\ldots,N \tag{28}\] Using Hoeffding's inequality for fixed \(P,Q\), \[\begin{split} Pr(|x_{P}^{\prime\prime}(Q)-x_{P}(Q)|>\tilde{\epsilon})&\leq 2\ exp\left(\frac{-2N^{2}\tilde{\epsilon}^{2}}{\sum_{i=1}^{N}(2\tau)^{2}}\right)\\ &=2\ exp\left(\frac{-N\tilde{\epsilon}^{2}}{2\tau^{2}}\right)\\ &\leq exp(-log(n^{k+\kappa}/\delta))\\ &=\frac{\delta}{n^{k+\kappa}}\end{split} \tag{29}\] By taking the union bound over all \(P,Q\in\mathcal{P}_{n},|P|\leq k,|Q|\leq\kappa\), we get \[Pr(|x_{P}^{\prime\prime}(Q)-x_{P}(Q)|>\tilde{\epsilon})\leq\delta \tag{30}\] Thus, we can guarantee with probability at least \(1-\delta\), \[|x_{P}^{\prime\prime}(Q)-x_{P}(Q)|\leq\tilde{\epsilon},\ \ \forall P,Q\in\mathcal{P}_{n},|P|\leq k,|Q|\leq\kappa \tag{31}\] ## 5 Numerical Simulations In this section, we demonstrate the performance of the learning algorithm presented in this work through simulations. Our simulations apply the proposed algorithm to learning concrete quantum processes. The code for our simulations is available in a public GitHub repository 1. Before presenting our simulation results, we give a remark on our approach for providing a good and realistic simulation of data obtained via statistical queries. In order to construct the output of the statistical query oracles in our simulations, we assume the oracle _QPSStat_ uses a method to estimate the expectation value of an observable, such as the ones shown in [48, 41, 49, 85]. In our simulation, in order to emulate the behaviour of these methods, we compute the true expectation value and output the result after adding a normally distributed error to it. The error is sampled from a normal distribution such that it is within the specified tolerance with high probability. We note that our learning model puts no assumptions on the error in the output of the queries, and in theory, this error can come from any arbitrary distribution as long as it is within the tolerance with high probability. However, we will argue that this simple method of generating the sample data already captures these scenarios well enough for the purpose of the simulation. We also compare this method with the use of classical shadows for evaluating the outcome of an observable [48] in Figure 1. We see that for the same target tolerance and success probability, the classical shadow method produces less error than that generated using a normal distribution. In fact, the normally distributed error attains the target success probability exactly, while any practical method would produce the same error or less, as it would come with a bound that may not be tight. Thus, our emulated oracle would produce greater deviations than a practical implementation, implying that the real-life performance of the algorithm would only be similar to or better than the simulations. We use these emulated oracles for the simulation of the learning algorithms. Footnote 1: [https://github.com/chirag-w/qpsq-learning](https://github.com/chirag-w/qpsq-learning) ### Learning Arbitrary Processes In Figure 2, we show the simulated performance of Algorithm 2 in learning 10 Haar-random unitaries over 6 qubits for a range of tolerances. We consider \(O=Z\otimes I\otimes\cdots\otimes I\), the Pauli-Z observable on the first qubit.
We consider three distributions of target states, namely the uniform distributions over the computational basis states, the stabilizer product states, and Haar-random states. We can see from Figure 2 that a lower QPSQ tolerance results in a lower prediction error for the same number of queries. We also see that the algorithm achieves similar performance when predicting the outcome on computational basis and stabilizer product states, even though the uniform distribution over the computational basis states is not locally flat and thus outside the performance guarantee. On the other hand, the distribution over Haar-random states is within the guarantee, and the algorithm performs best on this distribution.

Figure 1: Comparison between simulated errors generated from a normal distribution and those generated by using classical shadow tomography to evaluate the EV of the Pauli-Z observable on random single-qubit stabilizer states after evolution under a fixed Haar-random unitary. We fix a tolerance value \(\tau=0.2\), and the probability of the deviation lying outside the tolerance, \(\delta=0.0455\).

Figure 2: Average performance of the learning algorithm on 10 Haar-random 6-qubit unitaries, in predicting the outcome of \(Z_{1}\) on three target distributions.

## 6 Application of QPSQ in Cryptanalysis of CR-QPUFs

In this section, we present applications of our QPSQ model in the realm of cryptanalysis. Specifically, we focus on a particular class of PUFs known as Classical Readout Quantum Physically Unclonable Functions (CR-QPUFs). Our objective is to investigate how our proposed algorithm can serve as an effective strategy for attacking their security. We have chosen this specific cryptographic primitive as our case study due to the inherent compatibility between the conceptual and formal framework of CR-QPUFs and the QPSQ framework. Our results showcase a concrete application of a particular learning model and its associated algorithms in the field of cryptography, thereby establishing a connection between the domains of quantum learning theory and quantum cryptography. In the next subsection, we will delve into our new attacks and their results; we will see that these outcomes not only address an ongoing open question regarding the security of different classes of quantum physically unclonable functions but also bridge the existing gap in the security analysis of CR-QPUFs, by demonstrating the unlikelihood of achieving the desired level of security for this specific class against efficient quantum adversaries.

Physical Unclonable Functions (PUFs) are hardware devices designed to resist cloning or replication, making them valuable for cryptographic tasks like authentication, identification, and fingerprinting [73, 22, 30, 28]. These devices have historically been realised in the classical realm using specific electrical circuits or optical materials [67, 40, 36, 80]. However, many of these implementations remain susceptible to various attacks, such as side-channel and machine-learning attacks [74, 35, 81, 54]. Given the inherent unclonability offered by quantum mechanics, the pursuit of physical unclonability in the quantum domain seems more appealing and natural. The concept of a Quantum PUF (QPUF) is formally introduced and analyzed in [6]. A QPUF is a quantum process that resists easy replication and, consequently, is challenging to learn. Furthermore, it can be interacted with using general quantum states, yielding quantum outputs.
Unlike classical PUFs, QPUFs can offer provable security guarantees. Nonetheless, developing practical hardware for a complete QPUF, and effectively using it, demands substantial resources like large-scale quantum processes and quantum memories that remain elusive in current quantum technology. As a result, a range of PUF variations necessitating different levels of quantum capability have been proposed and explored. Among these are the Classical Readout Quantum Physically Unclonable Functions (CR-QPUFs), examined in [68, 69]. CR-QPUFs represent a middle ground between fully quantum PUFs and classical PUFs. Their objective is to achieve the unlearnability characteristic of QPUFs while relying only on classical communication and storage. However, [69] introduces a classical machine-learning attack directed at the initial proposal for CR-QPUFs and a slightly more sophisticated quantum circuit of the same type. This expanded construction includes layers of Hadamard gates and single-qubit rotations, coupled with the inherent randomness stemming from noise and hardware imperfections. The experiment involving the application of basic regression algorithms to understand this construction, conducted on real IBM Q quantum machines as outlined in [69], indicates that this construction is learnable and hence does not provide any security. Nevertheless, because this construction entails a relatively simple and uncorrelated quantum circuit, it raises a significant unresolved question in this domain: _Does the susceptibility of CR-QPUFs to learning attacks stem from the simplicity of the quantum process itself, or does its learnability arise from a more fundamental aspect --perhaps in the way CR-QPUFs have been characterised?_ In this section, we attempt to provide an almost definitive and theoretical answer to this question, made possible through the application of our QPSQ framework and the algorithms we have devised. We apply these tools to various iterations of CR-QPUFs. Through this in-depth case study, we conclusively demonstrate that no matter the complexity or depth of the quantum circuit associated with CR-QPUFs, they remain susceptible to learning (within the relevant learning model we will elaborate upon), hence we cannot hope to achieve a secure protocol relying on them. The selection of CR-QPUFs as an ideal subject for our model comes from multiple reasons. Notably, these PUFs have been formally defined within the statistical query model, as noted in [69]. Our investigation reveals that the susceptibility of this type of PUF fundamentally arises from the fact that a large class of quantum processes can be efficiently learned in the QPSQ model. This spectrum includes processes such as Haar random unitaries which are random and extremely complicated to construct when scaling with the system size. Consequently, one cannot hope to evade the learning attacks. This finding, counterintuitive and surprising, highlights a critical fact regarding the security of the cryptographic schemes surrounding hardware assumptions, like physical unclonability: Security does not inherently rely on process complexity alone or the presumption that the quantum process cannot be fully and efficiently learnt by tomography. Instead, the crucial factors are the verification method and the nature of the data accessible to the adversary, which bears an important message for the design of secure protocols and subroutines based on hardware and physical assumptions. 
Before we dive into the result, we give a brief overview of the desired security property associated with the physically unclonable function and their main application, namely authentication, and then we present a structure for the authentication protocols based on CR-QPUFs, which allows us to better demonstrate our cryptanalysis using QPSQs and show the effectiveness of our attacks. ### Security property and attack model As discussed in [6], the main cryptographic notion associated with PUFs in general, and particularly for the use-case of authentication and identification, is _unforgeability_. Essentially, unforgeability means that an adversary having access to an efficient-size database of input-output pairs (also called challenge-response pairs or CRPs in the context of PUF) of a specific function of interest, should not be able to reproduce the output of the function on a _new_ input. This database in many cases is obtained via the adversary querying an oracle that realises the black-box access to the function. The unforgeability in cryptography is often defined as a formal game between a challenger and an adversary, and the forgery attacks are considered as algorithms (performed by the adversary) that win this game with non-negligible probability. Unforgeability is also extended to the quantum world generally in two directions: It can also be defined for quantum processes (instead of classical functions) [29] and for quantum adversaries having access to quantum oracles of a classical function [16, 3, 29]. As there are many levels and flavours of unforgeability that involve a quantum adversary, and the technical discussions about different formal definitions of it are outside the scope of this paper, we avoid analysing the security through formal unforgeability games and instead, we demonstrate our attacks using the security of the protocol. However, we note that the security inherently relies on this cryptographic notion which has a close connection to the notion of learnability, especially when considering quantum processes. Nevertheless, the conceptual notion of unforgeability, together with the specific properties of the CR-QPUFs guide us to characterise our attack model. For our purpose, we consider two honest parties, a verifier \(\mathcal{V}\), and a prover \(\mathcal{P}\), communicating through a quantum or classical channel (de pending on the challenge type). The purpose of the protocol is for the honest prover to prove its identity to the verifier with the promise that no quantum adversary can falsely identify themselves as the honest prover. A quantum adversary here is a quantum polynomial time (QPT) algorithm, that sits on the communication channel and can run arbitrary efficient quantum processes. The prover possesses a CR-QPUF device denoted as \(\mathcal{C}\), which they will use for identification, and which is associated with the quantum CPTP map \(\mathcal{E}\). The verifier on the other hand has a database of challenge-response pairs (CRPs) of \(\mathcal{C}\), which is obtained by having direct access to \(\mathcal{C}\) in the setup phase and recording the queries and their respective outputs. In this scenario, it is often considered that the device is later sent to the prover physically and from that point throughout the protocol will be in the possession of the prover. 
For a more realistic and conclusive attack model, in addition to having access to the communication channel, the adversary has also access to a polynomial-size database of CRPs of their own, which contains partial information on the process. Here, the adversarial models are often categorised into different classes depending on the level of access assumed for the adversary to obtain such data. A _Weak Adversary_ has only access to a randomly selected set of CRPs. This model often reduces to the cases where the adversary can only obtain the data by recording the communications on the channel over a certain period of time, which is why this class of adversary is also referred to as _network adversaries_. A stronger security model is one where the adversary can have oracle access to the device, i.e. issuing their desired queries to the CR-QPUFs. This adversary is referred to as _Adaptive Adversary_. Although the adaptive adversarial model seems very strong, it is still realistic and often the desired security level when it comes to identification protocols and PUF-based schemes. A reason for this is that, as mentioned earlier, the device (in this case CR-QPUF) needs to be physically transferred at least once, which gives an adversary the chance to directly query and interact with it. As a result, in our study, we also consider the adaptive adversaries. In our model, a QPT adversary \(\mathcal{A}\) can collect polynomially-many statistical queries, by issuing any desired state as input. This dataset is then used by the adversary to learn \(\mathcal{C}\) in the QPSQ model in order to provide a verifiable outcome for a new challenge state during the challenge phase of the protocol, and hence break the security. ### The general structure of a CR-QPUF-based authentication protocol Given the adversarial model in the previous section, we now define a general authentication protocol for CR-QPUFs and discuss how their security (or soundness) is defined. Later we go over specific instances of such protocols for different choices of CR-QPUFs, distributions of the challenge states and observables. For the CR-QPUFs, we work with a definition similar to [69] based on statistical queries. However, in order to use our learning attacks, we need to adapt the definition to our framework. We will discuss how our framework easily and naturally generalises to this case. However, for now, we denote the CR-QPUFs as \(\mathcal{C}\) and abstractly define it as a process, associated with the completely positive trace preserving (CPTP) map \(\mathcal{E}\) over the \(n-\)qubit state space, able to produce statistical queries for any challenge in the challenge set, given an observable \(O\) and a tolerance parameter \(\tau\). The protocol is run between a verifier \(\mathcal{V}\) and a prover \(\mathcal{P}\) as follows: We also note that for a physical device such as QPUF or CR-QPUF, the statistical query oracle QPStat abstractly models a natural and physical interaction with the device, which is querying it with the given challenge and measuring the output quantum states on a desired observable and over several copies to estimate the expectation value of the observable. In other words, the oracle is the physical device itself and not a separate entity or implementation. In the context of these authentication protocols utilizing CR-QPUF, we encounter two main factors governing the complexity of the underlying components. Firstly, there is the complexity of the channel representing CR-QPUF, which we have discussed in depth. 
Secondly, there is the choice of the observable. In our work, we assume the protocol is carried out using observables that are efficiently estimable, taking into account practical scenarios where both the verifier and prover can effectively measure and estimate the CR-QPUF's outcome, regardless of the underlying circuit's complexity. As such, and given that we aim to provide attacks with provable guarantees, we assume that the observable \(O\), selected during the setup phase, is an efficient observable, i.e. we assume that \(O\) has a polynomially bounded number of terms in its Pauli representation as well as a bounded number of qubits each term acts non-trivially on. This is a physically well-motivated assumption, as demonstrated in current state-of-the-art research on estimating the expectation value of an observable [85]. In [85], the authors provided a framework which unifies a number of the most advanced and commonly studied methods, such as those in [48, 49]. While this assumption covers a wide class of observables, there are some non-local observables that can be measured efficiently using more specific techniques, such as the one in [27]. Nevertheless, under this assumption, we are able to formally demonstrate the vulnerability of a very large class of CR-QPUF authentication protocols. However, that does not imply the security of the cases that might be excluded due to our assumption on the observable, and heuristic attacks might still be applicable to scenarios in which the protocol uses a complicated and highly non-local observable. The _correctness_ or _completeness_ of the protocol, which is defined as the success probability of an honest prover in the absence of any adversary or noise over the channel, should be 1. The _soundness_ or _security_ of the protocol, ensures that the success probability of any adversary (depending on the adversarial model) in passing the authentication should be negligible in the security parameter. For protocols defined as above, the completeness is straightforward, hence we only define and discuss the soundness. **Definition 14** (Soundness (security) of the CR-QPUF-based Authentication Protocol).: We say the CR-QPUF-based authentication protocol defined as 1 is secure if the success probability of any QPT adaptive adversary \(\mathcal{A}\) in producing an output \(\tilde{y}\) for any \(x\) sampled at random from a database \(D\), over a distribution \(\mathcal{D}\), that passes the verification i.e. satisfies the condition \(|y-\tilde{y}|\leq 2\tau\), is at most negligible in the security parameter. ### CR-QPUFs in QPSQ framework While we have been discussing the QPSStat oracles abstractly so far, the CR-QPUF device must, in practice, be able to take a quantum state, an observable and a tolerance parameter \(\tau\), and output with high-probability a \(\tau\)-estimate of the expectation value. When a query is made to \(\mathsf{QPStat}_{\mathcal{E}}\), the device would apply \(\mathcal{E}\) to multiple copies of the state, estimate the expectation value of the observable and respond with that value as output. There are multiple methods for this estimation, such as those shown in [48, 41, 49, 85], usually requiring measurements on \(poly(1/\tau)\) copies of the state for statistical estimation. In a real implementation of the protocol, the device would receive copies of the input state, either through a quantum communication channel or prepared by the device given a classical description as input. 
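The shot-based estimation sketched above can be made quantitative: if single-shot outcomes of \(O\) are bounded in \([-1,1]\), Hoeffding's inequality implies that averaging \(M=\lceil 2\ln(2/\delta)/\tau^{2}\rceil\) shots yields a \(\tau\)-accurate estimate with probability at least \(1-\delta\), consistent with the poly\((1/\tau)\) scaling quoted above. The helper below is a generic illustration under this assumption and is not taken from any of the cited estimation protocols.

```python
import math
import numpy as np

def shots_needed(tau: float, delta: float = 0.05) -> int:
    """Shots M such that, by Hoeffding's inequality for outcomes in [-1, 1],
    the empirical mean is within +/- tau of Tr(O E(rho)) w.p. >= 1 - delta."""
    return math.ceil(2.0 * math.log(2.0 / delta) / tau**2)

def estimate_expectation(sample_outcome, tau, delta=0.05, rng=None):
    """Average M single-shot outcomes; `sample_outcome(rng)` models one
    projective measurement of O on the output state E(rho)."""
    rng = rng or np.random.default_rng()
    m = shots_needed(tau, delta)
    return sum(sample_outcome(rng) for _ in range(m)) / m

# Toy example: a +/-1-valued outcome with true expectation value 0.3.
toy_shot = lambda rng: 1.0 if rng.random() < 0.65 else -1.0
print(shots_needed(tau=0.1), estimate_expectation(toy_shot, tau=0.1))
```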
We note that there are some differences between our QPSQ model and the formal model of CR-QPUFs defined in [69]. However, we will clarify that the two models are aligned and hence our results are complimentary to their results. For this, we first recall the definition of CR-QPUF in [69]. The main difference between the two models is that in [69], the challenges have been defined in the form of descriptions of unitaries instead of quantum states. Starting with a fixed state initialized as \(|0\rangle\langle 0|\), the input unitary \(U_{in}\) is applied on the noisy hardware, followed by repeated measurements in the computational basis. Finally, the mean of some statistical function is computed over the measurement results. The idea behind this kind of construction is that the unique noise fingerprint of the device may result in unforgeability. We include their definition of statistical queries for CR-QPUFs below: **Definition 15** (adapted from [69]).: For a challenge unitary \(U_{in}\), the response \(\hat{R}_{out}\) is the mean of a set of i.i.d. samples from running \(U_{in}\) on the noisy quantum device identified by \(qPVF_{id}\) and measuring in the computational basis. Corresponding to the uniquely noisy device \(id\) and unitary challenge \(U_{in}\), we have the quantum process \(\Lambda_{in}^{id}\). This gives rise to the quantum state \(\rho_{in}^{id}=\Lambda_{in}^{id}(|0\rangle\langle 0|)\). Let the associated _Born distribution_ be \(\mathcal{P}_{U_{in},id}(x)=tr(M_{x}\rho_{in}^{id}M_{x}^{\dagger})\), where \(M_{x}\) is the measurement operator corresponding to outcome \(x\). The output of the statistical query oracle \(QEval\) associated with some efficiently computable function \(\phi:\{0,1\}^{n}\rightarrow\{0,1\}^{n}\) is then given by \[\hat{R}_{out}\gets QEval(qPVF_{id},U_{in}),\ |\mathbb{E}_{x\sim\mathcal{P}_{U_{ in},id}}[\phi(x)]-\hat{R}_{out}|\leq\tau \tag{32}\] We will show that the same output distribution can be obtained from querying \(\mathsf{QPSStat}\) for an unknown noise channel \(\tilde{\Lambda}^{id}\) (described below) with the input states \(\rho_{in}=U_{in}|0\rangle\langle 0|U_{in}^{\dagger}\), the observable \(O_{\phi}=\sum_{x}\phi(x)|x\rangle\langle x|\), and the same tolerance \(\tau\). We start by noting that Definition 15 considers a noise channel which depends on the fixed unitary input. We can model such gate-dependent noise channels as interleaved layers of unitary gates and noise. Assume the unitary \(U_{in}\) is decomposed of layers of n-qubit unitary operations denoted as \(U_{in}=U_{1}U_{2},\ldots,U_{L}\). Then without loss of generality \(\Lambda_{in}^{id}\) can be modeled as \(\Lambda_{in}^{id}(\rho)=\Lambda_{L}(U_{L}\ldots\Lambda_{2}(U_{2}\Lambda_{1}(U _{1}\rho U_{1}^{\dagger})U_{2}^{\dagger})\ldots U_{L}^{\dagger})\). We show that is possible to rewrite this channel in the following form: \[\Lambda_{in}^{id}(\rho)=\tilde{\Lambda}_{L}\ldots\tilde{\Lambda}_{2}\tilde{ \Lambda}_{1}(U_{L}\ldots U_{2}U_{1}\rho U_{1}^{\dagger}U_{2}^{\dagger}\ldots U _{L}^{\dagger})=\tilde{\Lambda}^{id}(U_{in}\rho U_{in}^{\dagger}) \tag{33}\] The original channel and the rearranged one have been depicted in Figure 3. In general, the intermediate noise channels (\(\Lambda_{i}\)) do not commute with the next unitary layers \(U_{i+1}\). 
However, we can write each \(\Lambda_{i}\) in the Kraus decomposition and carry the non-commuting operators on to the next layer, resulting in a new noise layer \(\tilde{\Lambda}_{i}\) as follows: \[\Lambda_{i}(\rho)=\sum_{j}K_{j}\rho K_{j}^{\dagger},\quad\tilde{\Lambda}_{i}(\rho)=\sum_{j}\tilde{K}_{j}\rho\tilde{K}_{j}^{\dagger} \tag{34}\] where \(K_{j}\) and \(\tilde{K}_{j}\) are the Kraus operators of \(\Lambda_{i}\) and \(\tilde{\Lambda}_{i}\), respectively, and the Kraus operators are related to each other by \(\tilde{K}_{j}=U_{i+1}K_{j}U_{i+1}^{\dagger}\). Furthermore, we note that the noise channel \(\Lambda_{in}^{id}\) is an unknown channel and so is the rewritten channel \(\tilde{\Lambda}^{id}\). Hence, by issuing the input state \(\rho_{in}=U_{in}|0\rangle\langle 0|U_{in}^{\dagger}\) to the oracle \(\mathsf{QPSStat}_{\tilde{\Lambda}^{id}}\) with observable \(O_{\phi}\), we obtain a similar output to that obtained from \(QEval\), showing the equivalence between the two SQ models. \[\mathsf{QPSStat}_{\tilde{\Lambda}^{id}}(\rho_{in},O_{\phi},\tau)\equiv QEval(qPVF_{id},U_{in}) \tag{35}\]

Figure 3: The original gate-dependent noise channel \(\Lambda_{in}^{id}\) (interleaved layers of unitaries and noise) and the rearranged channel \(\tilde{\Lambda}^{id}\) acting after \(U_{in}\), as described in the text.

### Cryptanalysis results

We are now ready to analyse the security of the authentication protocol as described in Protocol 1, in our QPSQ framework, with an arbitrary and potentially high-depth underlying quantum circuit for the CR-QPUF. To show the extreme case of our result, we can assume the quantum process \(\mathcal{E}\) consists of a Haar-random circuit over the n-qubit Hilbert space. We can also consider any arbitrary noise model on top of it to model the hardware-specific imperfections of the CR-QPUF. However, our result holds for any arbitrary process. We use the learning Algorithms 1 and 2 for our specific attack strategy. We construct the attack as follows.

**Algorithm 3** Attack on Protocol 1 in QPSQ with observable \(O\), tolerance \(\tau\)

```
Setting hyperparameters:
    \(\epsilon\leftarrow\frac{\tau^{2}}{2}\)
    \(\tilde{\epsilon}\leftarrow\frac{\tau^{2}}{2}\)
    Set \(N\) according to Equation 14
for \(i=1\) to \(N\) do
    \(\rho_{i}\sim stab_{1}^{\otimes n}\)
    Issue challenge \(\rho_{i}\) to \(\mathcal{C}\)
    Receive response \(y_{i}\)
end for
\(S_{N}\leftarrow\left\{(\rho_{i},y_{i})\right\}_{i=1}^{N}\)
Learn \(h\) according to Algorithm 2
Forgery: Given challenge \(x\) from \(\mathcal{V}\), respond with \(h(\rho(x))\)
```

We show that, due to the provable guarantees of Algorithm 2, this is an efficient and effective attack on CR-QPUF-based protocols. Although the following result is valid for any challenge distribution \(\mathcal{D}\) which satisfies the guarantees of our proposed algorithm (i.e., local flatness), in order to give more intuition and practical perspective to the result, we also study two specific distributions over the choice of the challenge. Let \(\mathcal{D}_{Haar}\) denote the Haar-random distribution over the \(n\)-qubit Hilbert space and let \(\mathcal{D}_{stab}\) denote the uniform distribution over tensor-product Pauli stabiliser states.
In the first case the challenge states \(\rho(x)\) are Haar-random states indexed by \(x\) and in the second case, the challenge states are in the form of \(\rho(x)=\bigotimes_{i=1}^{n}\left|\psi_{x^{i}}\right\rangle\left\langle\psi_{ x^{i}}\right|\), where \(x\in\{0,1,2,3,4,5\}^{n}\), and we have \(\{\left|\psi_{0}\right\rangle=\left|0\right\rangle,\left|\psi_{1}\right\rangle =\left|1\right\rangle,\left|\psi_{2}\right\rangle=\left|+\right\rangle,\left| \psi_{3}\right\rangle=\left|-\right\rangle,\left|\psi_{4}\right\rangle=\left|+ i\right\rangle,\left|\psi_{5}\right\rangle=\left|-i\right\rangle\}\). These two specific selections of distributions for challenge states give rise to two very distinct instances of the authentication protocol. In the first case where the challenges are selected from a Haar-random distribution, the challenge state is communicated through a quantum channel, in the form of multiple copies of the state \(\rho(x)\), for the prover to be able to produce the response \(y_{x}\). Intuitively we expect this to increase the security of the protocol since the adversary is unlikely to gain any information about the challenge state itself. However, this extra hiding comes with the price of generating and communicating \(n\)-qubit Haar-random states, which is often very resource-extensive. On the other hand, studying this case (especially when considering also a Haar-random unitary for \(\mathcal{E}\)) would be interesting because it would allow the comparison between a QPUF and CR-QPUF with the same level of underlying resources. We note that as shown in [6] a Haar-random unitary would satisfy the requirements of a QPUF and hence has been proven to be secure. This will highlight the importance of the type of challenge and the verification process in the security of these hardware-based protocols. Our discussed result is formalized through the following theorem. **Theorem 2**.: _Protocol 1 cannot satisfy soundness, for any general choice of CR-QPUF, any choice of observable \(O\) given as the sum of few-body observables, and under any choice of challenge state chosen uniformly from any locally-flat distribution (including \(\mathcal{D}_{Haar}\) and \(\mathcal{D}_{stab}\)) as there exists an efficient QPSQ learner than produces a prediction \(\tilde{y}\) for a challenge \(x\), which passes the verification with non-negligible probability._ Proof.: Consider the attack described in Algorithm 3. From equation 14, we can see that the adversary makes \(poly(1/\tau)\) statistical queries of tolerance \(\tau\), and the algorithm runs in \(poly(1/\tau)\) time. Algorithm 3 is thus an efficient QPSQ learning algorithm. Fitting the parameters \(\epsilon\) and \(\tilde{\epsilon}\) into equation 15, we see that on average, for \(\rho\) drawn from any locally flat distribution, and \(O\) satisfying the previously stated condition with high probability, \[|h(\rho)-Tr(O\mathcal{E}(\rho))|\leq\tau \tag{36}\] For any query \(x\) issued by \(\mathcal{V}\), the associated \(y\) stored in \(\mathcal{D}\) was received as the output of a statistical query, implying that \[|y-Tr(O\mathcal{E}(\rho(x)))|\leq\tau \tag{37}\] By triangle inequality, the difference between the adversary's prediction \(h(\rho(x))\) and the stored \(y\) is bounded by \(2\tau\). Thus, using Algorithm 3, an adversary is able to, on average over the set of challenges, efficiently pass the authentication protocol 1 with high probability, without access to \(\mathcal{C}\), breaking the soundness of the protocol. 
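A toy numerical check of the triangle-inequality argument in the proof can be written in a few lines. The sketch below uses stand-in functions (a placeholder for \(Tr(O\mathcal{E}(\rho(x)))\) and a hypothesis assumed to be \(\tau\)-accurate, as the learner guarantees on average); it is purely illustrative and not a simulation of an actual CR-QPUF.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 0.1

# Stand-ins (illustrative only): the true map x -> Tr(O E(rho(x))) and a
# learned hypothesis h that is tau-accurate, as guaranteed in the text.
true_value = lambda x: np.sin(x)
h = lambda x: true_value(x) + rng.uniform(-tau, tau)

trials, passed = 10_000, 0
for _ in range(trials):
    x = rng.uniform(0.0, 2.0 * np.pi)
    y = true_value(x) + rng.uniform(-tau, tau)  # verifier's stored SQ response
    y_tilde = h(x)                              # adversary's forged response
    # Verification rule of Protocol 1: accept iff |y - y_tilde| <= 2*tau.
    passed += abs(y - y_tilde) <= 2.0 * tau
print(passed / trials)  # -> 1.0, i.e. the forgery always passes
```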
## 7 Conclusion and Future Works We have formalised a quantum statistical query model for quantum processes as a physically and experimentally well-motivated learning model. This model will enable us to study the efficiency of learning algorithms, as well as the potential quantum advantages for learning problems. Our framework encompasses a natural notion of learning a quantum process, which considers all the crucial components: the process itself, the input state queries, and the observable. One of the main advantages of our QPSQ model is that due to its generality, it allows for the formal study of various algorithms for quantum process learning which may have been designed within different disciplines and approaches. Our arbitrary process learning algorithm imposes no restrictions on the process itself but guarantees performance only for local observables and over locally flat distributions. An exciting avenue for future research is to expand this study further and seek or design algorithms with fewer restrictions on quantum process learning. For Algorithm 2, our observations indicate that, in some applications, overcoming the locally flat distribution restriction would open up a wide range of possibilities, even if it meant restricting the family of predictable quantum processes. Conversely, in other applications, the ability to learn the process with non-local observables would be of paramount importance. The redesign of our algorithm with these alternative sets of assumptions represents an exciting area for future work, with implications for the applications of the equivalent shadow-based algorithm presented in [46]. In terms of applications, we believe there is much to explore in both proposed directions: cryptanalysis and learning theory. First, in the case of the cryptanalysis of CR-QPUFs, our attack formally breaks the security of a large class of them, for a wide range of observables. However, as discussed in section 6.2, there exists a class of efficiently estimable observables that are non-local, falling outside the formal guarantee of our attack. Therefore, while our work addresses a significant portion of the security analysis for these primitives, a narrow gap remains. It remains an open question whether it's possible to design an efficient authentication protocol based on CR-QPUFs, using such non-local observables while maintaining resistance to learning attacks. Nonetheless, it's worth noting that, given their definition within the statistical query model and the results presented in this paper, even if such secure protocols exist, proving their security against the powerful learning algorithms for QPSQ would be a challenging task. Another interesting future direction is the examination of cryptographic primitives and schemes beyond CR-QPUFs through the lens of learning quantum processes with quantum statistical queries. Investigating the cryptographic implications of the efficient learners in our model as adversarial strategies presents an intriguing direction for further study. In the realm of learning theory, our newly proposed QPSQ model provides a natural framework to study the learnability of quantum processes using statistical queries. Potential research directions using this framework include not only the study of classical SQ learning results in a quantum setting but also the adaptation of classical techniques to this setting. 
One such direction would be to adapt classical techniques on establishing lower bounds for learning concept classes, to learning processes in the QPSQ setting. Another compelling question in this context is whether we can show a formal separation between QPSQ and classical learners, indicating a potential path towards finding quantum advantage in learning problems. Furthermore, it would be intriguing to observe the implementation of our learner on actual hardware or its application to data acquired from real physical experiments. **Acknowledgements:** The authors thank Armando Angrisani for his valuable inputs and discussions, especially the discussions towards establishing the definition of QPSQ, and for sharing related results with us during the project. We also thank Elham Kashefi, Dominik Leichtle, Yao Ma, and Sean Thrasher for interesting discussions and comments at different stages of this project. The authors acknowledge the support of the Quantum Advantage Pathfinder (QAP), with grant reference EP/X026167/1 and the UK Engineering and Physical Sciences Research Council.
2304.00925
A polarimetrically oriented X-ray stare at the accreting pulsar EXO 2030+375
Accreting X-ray pulsars (XRPs) are presumably ideal targets for polarization measurements, as their high magnetic field strength is expected to polarize the emission up to a polarization degree of ~80%. However, such expectations are being challenged by recent observations of XRPs with the Imaging X-ray Polarimeter Explorer (IXPE). Here we report on the results of yet another XRP, EXO 2030+375, observed with IXPE and contemporarily monitored with Insight-HXMT and SRG/ART-XC. In line with recent results obtained with IXPE for similar sources, analysis of the EXO 2030+375 data returns a low polarization degree of 0%-3% in the phase-averaged study and variation in the range 2%-7% in the phase-resolved study. Using the rotating vector model we constrain the geometry of the system and obtain a value for the magnetic obliquity of ~$60^{\circ}$. Considering also the estimated pulsar inclination of ~$130^{\circ}$, this indicates that the magnetic axis swings close to the observer line of sight. Our joint polarimetric, spectral and timing analysis hint to a complex accreting geometry where magnetic multipoles with asymmetric topology and gravitational light bending significantly affect the observed source behavior.
Christian Malacaria, Jeremy Heyl, Victor Doroshenko, Sergey S. Tsygankov, Juri Poutanen, Sofia V. Forsblom, Fiamma Capitanio, Alessandro Di Marco, Yujia Du, Lorenzo Ducci, Fabio La Monaca, Alexander A. Lutovinov, Herman L. Marshall, Ilya A. Mereminskiy, Sergey V. Molkov, Mason Ng, Pierre-Olivier Petrucci, Andrea Santangelo, Andrey E. Shtykovsky, Valery F. Suleimanov, Ivan Agudo, Lucio A. Antonelli, Matteo Bachetti, Luca Baldini, Wayne H. Baumgartner, Ronaldo Bellazzini, Stefano Bianchi, Stephen D. Bongiorno, Raffaella Bonino, Alessandro Brez, Niccolo Bucciantini, Simone Castellano, Elisabetta Cavazzuti, Chien-Ting Chen, Stefano Ciprini, Enrico Costa, Alessandra De Rosa, Ettore Del Monte, Laura Di Gesu, Niccolo Di Lalla, Immacolata Donnarumma, Michal Dovciak, Steven R. Ehlert, Teruaki Enoto, Yuri Evangelista, Sergio Fabiani, Riccardo Ferrazzoli, Javier A. Garcia, Shuichi Gunji, Kiyoshi Hayashida, Wataru Iwakiri, Svetlana G. Jorstad, Philip Kaaret, Vladimir Karas, Fabian Kislat, Takao Kitaguchi, Jeffery J. Kolodziejczak1, Henric Krawczynski, Luca Latronico, Ioannis Liodakis, Simone Maldera, Alberto Manfreda, Frederic Marin, Andrea Marinucci, Alan P. Marscher, Francesco Massaro, Giorgio Matt, Ikuyuki Mitsuishi, Tsunefumi Mizuno, Fabio Muleri, Michela Negro, Chi-Yung Ng, Stephen L. O'Dell, Nicola Omodei, Chiara Oppedisano, Alessandro Papitto, George G. Pavlov, Abel L. Peirson, Matteo Perri, Melissa Pesce-Rollins, Maura Pilia, Andrea Possenti, Simonetta Puccetti, Brian D. Ramsey, John Rankin, Ajay Ratheesh, Oliver J. Roberts, Roger W. Romani, Carmelo Sgro, Patrick Slane, Paolo Soffitta, Gloria Spandre, Douglas A. Swartz, Toru Tamagawa, Fabrizio Tavecchio, Roberto Taverna, Yuzuru Tawara, Allyn F. Tennant, Nicholas E. Thomas, Francesco Tombesi, Alessio Trois, Roberto Turolla, Jacco Vink, Martin C. Weisskopf, Kinwah Wu, Fei Xie, Silvia Zane
2023-04-03T12:29:17Z
http://arxiv.org/abs/2304.00925v2
# A polarimetrically oriented X-ray state at the accreting pulsar EXO 2030+375 ###### Abstract Accreting X-ray pulsars (XRPs) are presumed to be ideal targets for polarization measurements, as their high magnetic field strength is expected to polarize the emission up to a polarization degree of \(\sim\)80%. However, such expectations are being challenged by recent observations of XRPs with the Imaging X-ray Polarimeter Explorer (_IXPE_). Here, we report on the results of yet another XRP, namely, EXO 2030+375, observed with _IXPE_ and contemporarily monitored with _Insight-HXMT_ and _SRG_/ART-XC. In line with recent results obtained with _IXPE_ for similar sources, an analysis of the EXO 2030+375 data returns a low polarization degree of 0%-3% in the phase-averaged study and a variation in the range of 2%-7% in the phase-resolved study. Using the rotating vector model, we constrained the geometry of the system and obtained a value of \(\sim\)60\({}^{o}\) for the magnetic obliquity. When considering the estimated pulsar inclination of \(\sim\)130\({}^{\circ}\), this also indicates that the magnetic axis swings close to the observer's line of sight. Our joint polarimetric, spectral, and timing analyses hint toward a complex accreting geometry, whereby magnetic multipoles with an asymmetric topology and gravitational light bending significantly affect the behavior of the observed source. Key Words.: magnetic fields - polarization - pulsars: individual: EXO 2030+375 - stars: neutron - X-rays: binaries 1 Footnote 1: Deceased. ## 1 Introduction Accreting X-ray pulsars (XRPs) are binary systems consisting of a neutron star (NS) and a donor companion star (see Mushtukov & Tsygankov 2022, for a recent review). In these systems, the NS can accrete matter supplied by the companion either via stellar wind or Roche-lobe overflow, thereby producing emission in the X-ray domain. The NS can be strongly magnetized, with a dipolar magnetic field strength on the order of 10\({}^{12}\) G. This leads to highly anisotropic accretion, where the matter is funneled by the magnetic field to the magnetic poles, giving rise to pulsating X-ray emission. Studying these systems is crucial for understanding the effects related to the interaction of X-ray radiation with strongly magnetized plasma. In fact, the emission from XRPs can be expected to be strongly polarized, up to a polarization degree (PD) of 80% due to magnetized plasma and vacuum birefringence (Gnedin et al. 1978; Pavlov and Shibanov 1979; Meszaros et al. 1988; Caiazzo and Heyl 2021b, a). However, recent observations of XRPs have revealed a polarization that is far lower than expected (Doroshenko et al. 2022; Tsygankov et al. 2022; Forsblom et al. 2023), with a phase-averaged PD of about 5%-6% and ranging from 5% to 15% in the phase-resolved analysis. EXO 2030+375 is an XRP discovered with the _EXOSAT_ observatory (Parmar et al. 1989), which also detected pulsations at about 42 s. The orbital period of 46.02 days was derived from the Type I outburst periodicity by Wilson et al. (2008). These authors also obtained an orbital solution consisting of a rather eccentric (eccentricity of \(e\sim 0.41\)) and wide (semi-major axis of \(a_{\rm x}\sin\,i=248\pm 2\) lt-s) orbit. Besides being the most regular and prolific Type I outburst XRP, EXO 2030+375 also has shown sporadic Type II (or giant) outbursts (Parmar et al. 1989; Corbet and Levine 2006; Thalhammer et al. 2021). 
The source spectrum showed a hint of the cyclotron resonant scattering feature (CRSF) at 36 keV (Reig and Coe 1998) and 63 keV (Klochkov et al. 2008), however, this has not been securely confirmed in other works. More recently, the source spin period was measured to be around 41.2 s (Thalhammer et al. 2021), after the source underwent a significant spin-up episode following the Type II outburst, as monitored by _Fermi_/GBM.1 The distance to the source is \(2.4^{+0.5}_{-0.4}\) kpc, as given in the _Gaia_ Data Release 3 (Bailer-Jones et al. 2021). Footnote 1: [https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/exo2030.html](https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/exo2030.html) Here, we present the results of a multi-observatory campaign on EXO 2030+375. The observations by the _Imaging X-ray Polarimeter Explorer_ (_IXPE_) were supplemented by contemporaneous observations with _Insight-HXMT_ and _Spectrum-Roentgen-Gamma_/ART-XC at the peak of a Type I outburst in 2022. ## 2 Observations and data reduction ### Ixpe _IXPE_(Weisskopf et al. 2022) is a NASA small explorer mission in collaboration with the Italian Space Agency (ASI), launched on 2021 December 9. It features three identical Mirror Module Assembly (MMAs), each comprising of a grazing incidence telescope and a polarization-sensitive Detector Unit (DU) at its focus (Baldini et al. 2021; Soffitta et al. 2021). The DUs consist of gas-pixel detectors (GPD) filled with dimethyl ether, whose interaction with X-ray photons produces photoelectrons that are ejected in a direction that is distributed as \(\cos^{2}\varphi\), where \(\varphi\) is the polarization direction of the incident radiation (Bellazzini et al. 2003). _IXPE_ provides imaging polarimetry over a nominal energy band of 2-8 keV, within a field of view of about 12.9 arcmin\({}^{2}\) for each MMA and with an energy-dependent polarization sensitivity expressed by a modulation factor (i.e., the amplitude of the instrumental response to 100% polarized radiation) peaking at \(\mu\sim\)50%-60% at 8 keV. _IXPE_ observed EXO 2030+375 over the period 2022 November 23-27 (ObsID 02250201) for a total exposure of about 181 ks. A _Swift_/BAT (Gehrels et al. 2004; Krimm et al. 2013) light curve of the relevant outburst with _IXPE_ and other pointed observations is shown in Fig. 1. _IXPE_ data have been reduced using the ixpobssim software package (Baldini et al. 2022), version 30.0.0\({}^{2}\), and using the CALDB version 20221020. Source events were extracted from a 60\({}^{\prime\prime}\) radius circle centered on the brightest pixel, while background events are negligible given the relatively high source count rate (Di Marco et al. 2023). The v12 version of the weighted response files (Di Marco et al. 2022) was used to produce and analyze spectral products. Based on Silvestri and IXPE collaboration (2023), we added a systematic error of 2% to the _IXPE_ spectra. ### Srg/art-Xc The Mikhail Pavlinsky ART-XC telescope (Pavlinsky et al. 2021) carried out two consecutive observations (ObsIDs: 12210071001, 12210071002) of EXO 2030+375, from 2022 November 25-26 (MJD 59908.87-59909.62 and 59909.71-59909.83), simultaneously with _IXPE_, with a total net exposure of 75 ks. ART-XC is a grazing incidence-focusing X-ray telescope on board the _SRG_ observatory (Sunyaev et al. 2021). 
The telescope includes seven independent modules and provides imaging, timing, and spectroscopy in the 4-30 keV energy range, with a total effective area of \(\sim\) 450 cm\({}^{2}\) at 6 keV, angular resolution of 45\({}^{\prime\prime}\), energy resolution of 1.4 keV at 6 keV, and timing resolution of 23 \(\mu\)s. The ART-XC data were processed with the software artproducts v1.0 and the CALDB version 20230228. We limited the ART-XC energy band to the 6.5-25 keV energy range, where the instrument calibration is better known. Following standard procedures, we merged data from both observations, rebinned the spectrum to match the energy resolution of the detectors, and added a systematic error of 2% to it.

Figure 1: _Swift_/BAT (15–50 keV) daily average light curve of EXO 2030+375 (black dots with gray error bars). Times of each continuous and pointed observations used in this work are marked by horizontal colored lines and vertical arrows, as detailed in the legend.

### Insight-HXMT

The Hard X-ray Modulation Telescope (_HXMT_, also dubbed _Insight-HXMT_) excels in its broad energy band (1-250 keV) and its large effective area in the hard X-ray energy band (Zhang et al. 2020). EXO 2030+375 was observed by _HXMT_ from 2022 November 18 (MJD 59901) to November 27 (MJD 59910). In this work, we only analyze quasi-simultaneous observations with _IXPE_ from 2022 November 23 (MJD 59905) to November 27 (MJD 59910). The resulting total exposure times are 42 ks, 71 ks, and 67 ks for the detectors of the three payloads on board _HXMT_, LE (1-15 keV), ME (5-30 keV), and HE (20-250 keV), respectively. The detectors were used to generate the events in good time intervals (GTIs). The time resolutions of the HE, ME, and LE instruments are \(\sim\)25 \(\mu\)s, \(\sim\)280 \(\mu\)s, and \(\sim\)1 ms, respectively. Data from _HXMT_ were considered in the range 2-70 keV, with the exclusion of 21-24 keV data due to the presence of an Ag feature (Li et al. 2020). The Insight-HXMT Data Analysis software3 (HXMTDAS) v2.05 and HXMTCALDB v2.05 were used to analyze the data. We screened events for the three payloads in HXMTDAS using the legtigen, megtigen, and hegtigen tasks according to the following criteria for the selection of GTIs: (1) pointing offset angle \(<\) 0.1\({}^{\circ}\); (2) the elevation angle \(>\)10\({}^{\circ}\); (3) the geomagnetic cut-off rigidity \(>\)8 GeV; (4) the time before and after the South Atlantic Anomaly passage \(>\)300 s; (5) for LE observations, pointing direction above bright Earth \(>\)30\({}^{\circ}\). We selected events from the small fields of view (FoVs) for LE and ME observations, and from both small and large FoVs for HE observations due to the limitation of the background calibration. The instrumental background is estimated by blocking the collimators of some detectors completely. The background model is developed by taking the correlations of the count rates between the blind and other detectors. The background is generated with lebkgmap, mebkgmap, and hebkgmap implemented in HXMTDAS, respectively. We restricted the energy band for spectral analysis to 1-10, 10-30, and 30-70 keV for LE, ME, and HE, respectively, as these ranges suffer smaller calibration uncertainties given the available observational background. Following the official team recommendations, we added a systematic error of 1% to LE and ME spectra, and 3% to the HE spectrum.
Footnote 3: [http://hxmtweb.ihep.ac.cn/](http://hxmtweb.ihep.ac.cn/) ## 3 Data analysis and results Polarimetric parameters were derived following the approach based on the formalism by Kislat et al. (2015), as implemented in the pcube software algorithm and through spectro-polarimetric analysis available in xspec(Strohmayer 2017). The spectra were fitted simultaneously in xspec allowing for a cross-calibration constant to account for calibration uncertainties of different DUs with respect to other detectors and for intrinsic source variability. The source spectra were rebinned to have at least 30 counts per energy channel in order to adopt the \(\chi^{2}\) fit statistic, given the non-Poissonian nature of the source spectra. The adopted test statistic was the \(\chi^{2}\). Spectral data were analyzed with xspec version 12.13.0b (Arnaud 1996) available with heasoft v6.31. ### Timing analysis Barycentric correction was applied to the events using the barycorr tool for _IXPE_ and ART-XC, and the HXMTDAS task hxbarry for _HXMT_. DE421 Solar system ephemeris and the SIMBAD (Wenger et al. 2000) ICRS coordinates of the source were employed to this aim. Binary demodulation also was performed, employing the orbital solution from Fu et al. (2023). The final estimate of the spin period \(P_{\rm s}\) = 41.1187(1) s was then obtained using the phase connection technique (Deeter et al. 1981) and _HXMT_/LE events. The obtained spin period was used to fold the events from all employed instruments and obtain corresponding pulse profiles. For completeness, we also extracted the _IXPE_ light curve in the 2-8 keV energy band summed over the three DUs and rebinned at 300 s. The resulting light curve shows a steady count rate of about 5 cnt s\({}^{-1}\) over the whole _IXPE_ observation. ### Phase-averaged analysis #### 3.2.1 Phase-averaged polarimetric analysis Polarization quantities were initially derived following the model-independent approach described in Kislat et al. (2015) and Baldini et al. (2022). Normalized Stokes parameters, \(Q/I\) and \(U/I\), were extracted using the pcube algorithm within xspecbssim and then used to obtain the PD and PA. Figure 2 shows those parameters for the full 2-8 keV energy band, while Fig. 3 shows the same in different energy bands. Both plots show that the normalized Stokes parameters are consistent with zero, which implies that the PD is also consistent with zero and is lower than \(\sim\)3% at 99% c.l. Figure 2: Pulse phase-averaged normalized Stokes parameters \(U/I\) (y-axis) and \(Q/I\) (x-axis) over the 2–8 keV energy range. The 1\(\sigma\), 2\(\sigma\), and 3\(\sigma\) contours are plotted as concentric circles around the nominal value (continuous and dashed lines, respectively). The gray dotted circle represents loci of constant 1% PD, while radial lines are labeled for specific electric vector position angles (that is, the polarization angle, PA) with respect to north. The phase-averaged PD upper limit is about 2% at 99% c.l. #### 3.2.2 Phase-averaged spectro-polarimetric analysis To perform the spectro-polarimetric analysis, we first limited the study to _IXPE_ data only. The \(I\), \(Q\), and \(U\) spectra from the source were extracted using the xpbin algorithm for each DU. Given the narrow energy range of _IXPE_ data, the spectra can be fitted with a simpler model than that required for broader-band analysis. We therefore employed a simple absorbed power-law model for the _IXPE_-only analysis. 
A constant polarization component (energy-independent PD and PA) was also added to the model in xspec. The final form of the model was thus const\(\times\)tbabs(powerlaw\(\times\)polconst). This model returns a fit-statistic \(\chi^{2}\)/dof=1372/1334 (see Table 1 and Fig. 4). Errors are calculated through MCMC simulations using the Goodman-Weare algorithm of length \(2\times 10^{5}\) with 20 walkers and \(10^{4}\) burn-in steps. Best-fit results are shown in Table 1. The analysis reveals a PD of \(1.2\pm 0.6\)% at the 90% c.l. \begin{table} \begin{tabular}{l c} \hline \hline Parameter & Value \\ \hline \(C_{\rm DU1}\) (fixed) & 1 \\ \(C_{\rm DU2}\) & \(0.963\pm 0.003\) \\ \(C_{\rm DU3}\) & \(0.928\pm 0.003\) \\ \(N_{\rm H}\) [\(10^{22}\,{\rm cm}^{-2}\)] & \(1.93\pm 0.06\) \\ \(\Gamma\) & \(1.29\pm 0.01\) \\ Norm\({}^{a}\) & \(0.315\pm 0.006\) \\ PD [\%] & \(1.2\pm 0.4\) \\ PA (deg] & \(39\pm 9\) \\ Flux\({}_{\rm 2-10\,keV}\)\({}^{b}\) & \(2.47\pm 0.05\) \\ \(\chi^{2}\)/d.o.f. & 1372/1334 \\ \hline \end{tabular} 1 \end{table} Table 1: Best-fit parameters of the phase-averaged _IXPE_ data on EXO 2030\(\times\)375 obtained from spectro-polarimetric analysis using model const\(\times\)tbabs(powerlaw\(\times\)polconst) in the 2–8 keV energy band. Figure 4: EXO 2030\(\times\)375 spectral energy distribution of the phase-averaged Stokes parameters \(I\), \(Q\), and \(U\) as observed by _IXPE_ – panels (a), (b), and (c), respectively. Continuous lines in the top panels represent the best-fit model const\(\times\)tbabs(powerlaw\(\times\)polconst) reported in Table 1. Bottom panels show the residuals. Different colors represent different detectors – black for DU1, red for DU2, and green for DU3. Data have been rebinned and re-normalized for plotting purpose. Figure 3: Same details as in Fig. 2 for energy-dependent normalized Stokes parameters. Gray dotted circles represents loci of constant 1% and 2% PD. Blue, orange, and green circles represent the 2–4, 4–6 and 6–8 keV energy bands, respectively. To test a possible energy-dependence of the polarization properties in EXO 2030+375, different polarization model components were also tested, namely pollin and polpow in xspec, corresponding to a linear and a power-law dependence with energy, respectively, of the PD and PA. However, these models did not further reduce the \(\chi^{2}\) value, nor returned significantly different polarimetric quantities and were, therefore, not explored further. Finally, we simultaneously fitted _IXPE_, HXMT, and ART-XC spectra. In principle, with a broadband spectrum available, polarimetric results suffer less contamination from a possibly incorrect spectral model derived by the restricted _IXPE_ energy band. Following previous works (Klochkov et al., 2008; Epili et al., 2017; Furst et al., 2018; Tamang et al., 2022), we adopted an absorbed power-law model with high-energy cutoff and an iron K\(\alpha\) line. To this, we added a constant polarization component as above. As the iron line is produced by fluorescence, it is not expected to be polarized. We verified this by adding a separate polconst component for the continuum and for the iron line. This resulted in a best-fit model whose PD value for the iron line was pegged at its lower limit. Therefore, we left that component unaffected by polarization. The final model expression is thus const\(\times\)tbabs(powerlaw\(\times\)highecut\(\times\)polconst+gauss). 
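As a minimal numerical illustration (not the actual xspec implementation) of what the constant-polarization component encodes, the sketch below maps a power-law Stokes \(I\) spectrum onto \(Q\) and \(U\) assuming the polconst convention \(Q/I=\mathrm{PD}\cos 2\mathrm{PA}\) and \(U/I=\mathrm{PD}\sin 2\mathrm{PA}\); absorption, the high-energy cutoff, the iron line, and cross-calibration constants are omitted, and the default values are the phase-averaged estimates of Table 1.

```python
import numpy as np

def stokes_model(energy_kev, norm=0.315, gamma=1.29, pd=0.012, pa_deg=39.0):
    """Power-law Stokes I spectrum with an energy-independent (constant)
    polarization: Q = PD*cos(2*PA)*I and U = PD*sin(2*PA)*I."""
    i_spec = norm * np.asarray(energy_kev) ** (-gamma)  # photons/keV/cm^2/s
    pa = np.deg2rad(pa_deg)
    return i_spec, pd * np.cos(2 * pa) * i_spec, pd * np.sin(2 * pa) * i_spec

# Evaluate over the 2-8 keV IXPE band.
energies = np.linspace(2.0, 8.0, 7)
i_mod, q_mod, u_mod = stokes_model(energies)
print(np.hypot(q_mod, u_mod) / i_mod)  # recovers PD = 0.012 at every energy
```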
For the fitting procedure, _IXPE_ and _SRG_/ART-XC spectral parameters were tied to those from _HXMT_/LE, leaving a cross-calibration constant free for each instrument. For the photoelectric absorption from neutral interstellar matter, we employed the tbabs model from Wilms et al. (2000) and relative wilm abundances. The Galactic column density in the direction of the source is about \(8.8\times 10^{21}\) cm\({}^{-2}\)(HI4PI Collaboration et al., 2016). Despite the more elaborate model (with respect to the _IXPE_-only analysis) and the broad 2-70 keV energy band, we were still able to verify that the obtained best-fit values of the PD and PA are in agreement with those reported in Sect. 3.2.1 within 1\(\sigma\). The broadband spectral results are shown in Fig. 5 and reported in Table 2. ### Phase-resolved (spectro-)polarimetric analysis To perform a phase-resolved polarization analysis of _IXPE_ data, we selected seven phase bins to sample different flux levels shown by the pulse profile (see Fig. 6). The phase bins were extracted with respect to \(T_{0}=59906.82181991\) MJD. For the polarimetric analysis, we followed the Kislat et al. (2015) formalism as outlined in Sect. 3.2.1. The results, shown in Fig. 6, exhibit only a moderate variability of the Stokes parameters as a function of the pulse phase. To perform the spectro-polarimetric analysis of the phase-resolved spectra, we used the same model as we did for the phase-averaged analysis in Sect. 3.2.2. Phase-resolved spectra were rebinned analogously to the phase-averaged analysis. For _IXPE_, cross-normalization constants were kept fixed at their correspondent phase-averaged value (see Table 1). The resulting best-fit values are reported in Table 3 and shown in Fig. 6. The analysis reveals significant detection of polarization up to about 7%. Both the PD and PA show pronounced variation with spin phase. ### Phase-resolved spectral analysis Taking advantage of the long, broadband _HXMT_ observations, we also performed a pulse phase-resolved spectral analysis of the _HXMT_ data. For this, 11 phase bins were \begin{table} \begin{tabular}{l c} \hline \hline Parameter & Value \\ \hline \(N_{\rm H}\) [\(10^{22}\) cm\({}^{-2}\)] & \(1.94\pm 0.03\) \\ \(\Gamma\) & \(1.289\pm 0.006\) \\ Norm\({}_{\rm r}\)a & \(0.310\pm 0.004\) \\ \(E_{\rm cut}\) [keV] & \(5.8\pm 0.1\) \\ \(E_{\rm fold}\) [keV] & \(23.5\pm 0.3\) \\ \(E_{\rm K\alpha}\) [keV] & \(6.56\pm 0.02\) \\ \(\sigma_{\rm K\alpha}\) [keV] & \(0.24\pm 0.03\) \\ norm\({}_{\rm K\alpha}\) [ph cm\({}^{-2}\) s\({}^{-1}\)] & \(0.0025\pm 0.0002\) \\ PD [\%] & \(1.2\pm 0.2\) \\ PA [deg] & \(39\pm 8\) \\ \(C_{\rm DU1}\) (fixed) & 1 \\ \(C_{\rm DU2}\) & \(0.963\pm 0.001\) \\ \(C_{\rm DU3}\) & \(0.928\pm 0.001\) \\ \(C_{\rm LE}\) & \(1.400\pm 0.004\) \\ \(C_{\rm ME}\) & \(1.343\pm 0.004\) \\ \(C_{\rm HE}\) & \(1.255\pm 0.002\) \\ \(C_{\rm ART-XC}\) & \(1.395\pm 0.004\) \\ Flux\({}_{\rm 1-70\,keV}\)b & \(4.44\pm 0.01\) \\ \(\chi^{2}\)/d.o.f. & \(2448/2592\) \\ \hline \end{tabular} 1 \end{table} Table 2: Best-fit parameters of the phase-averaged broadband spectrum of EXO 2030+375 as observed by _IXPE_, HXMT and ART-XC and obtained from spectro-polarimetric analysis using the model const\(\times\)tbabs(powerlaw\(\times\)highecut\(\times\)polconst+gauss) in the 2–70 keV energy band. Figure 5: Phase-averaged broadband spectrum of EXO 2030+375. _Top:_ EXO 2030+375 unfolded \(EF_{E}\) spectrum as observed by _IXPE_, _HXMT_, and ART-XC. 
For plotting purpose, data from the three _IXPE_ DUs are combined and re-normalized, and all spectra are rebinned. _IXPE_ data are in red, _HXMT_, LE, and HE in black, blue and green, respectively, ART-XC data are in cyan. _Bottom:_ Residuals of the best-fit model (also see Table 2). chosen to provide similar statistics of spectra in each bin. However, given the limited statistics with respect to phase-averaged analysis, we limited _HXMT_/HE data to 50 keV. To model the phase-resolved spectra, we used the same model employed for the phase-averaged spectrum (see Sect. 3.2.2). The best-fit results are reported in Fig. 7. The analysis shows strong variability of the spectral parameters with pulse phase. We notice that the observed parameter variations might at least partly due to artificial correlations of degenerate parameters. We tested this through the calculation of contour plots for different pairs of parameters and verified that although the parameters show some intrinsic correlations, their variability is still significant. Although part of this variability is known to be model-dependent (Klochkov et al., 2008; Hemphill et al., 2014), it is nonetheless useful to test luminosity-dependence of the parameters variability with pulse phase (see Sect. 4.3). ## 4 Discussion ### Polarization degree: Expectations versus observations Our analysis shows a low polarization for the X-ray radiation from EXO 2030+375, with the phase-averaged PD in the 0%-3% range and the phase-resolved PD values in the range of 2%-7%. High values for the PD were expected from theoretical models of accreting XRPs (Caiazzo & Heyl, 2021, 2021, 2021), and references therein). This is due to both plasma and vacuum birefringence, which modify the opacity in the magnetic field depending on polarization of photons. Thus, emitted photons get polarized in two normal modes, namely: ordinary (O) and extraordinary (X), representing oscillations of the electric field parallel and perpendicular to the plane formed by the local magnetic field and the photon momentum, respectively. Recently, however, those models have been challenged by _IXPE_ observations of several accreting XRPs, namely: Her X-1 (Doroshenko et al., 2022), Cen X-3 (Tsygankov et al., 2022), 4U 1626\(-\)67 (Marshall et al., 2022), Vela X-1 (Forsblom et al., 2023), and GRO J1008\(-\)57 (Tsygankov et al., 2023). In fact, all those sources show a far lower PD than expected. The observed relatively low polarization was interpreted in terms of a "vacuum resonance" occurring where the contributions from plasma and vacuum are equal (Lai & Ho, 2002). Passing through the resonance, ordinary and extraordinary polarization modes of X-ray photons would convert to each other, with a net effect of depolarizing the radiation. This process takes place at a plasma density \(\rho_{V}\approx 10^{-4}\,B_{12}^{2}\,E_{\rm keV}^{2}\) g cm\({}^{-3}\), where \(B_{12}\) is the magnetic field strength in units of \(10^{12}\) G and \(E_{\rm keV}\) is the photon energy in keV. Doroshenko et al. (2022) found that a transition layer of about 3 g cm\({}^{-2}\) (corresponding to a Thomson optical depth of about unity) would depolarize the observed radiation consistently with the measured polarimetric quantities - if the vacuum resonance is located in the overheated atmospheric layer, which happens in the subcritical (or low-) accretion regime. 
With the 2-10 keV flux of \(2.5\times 10^{-9}\) erg cm\({}^{-2}\) s\({}^{-1}\) (see Table 1) and at a distance of 2.4 kpc, the observed source luminosity is \(2\times 10^{36}\) erg s\({}^{-1}\). This luminosity value is comparable to the low luminosity state of Cen X-3 (Tsygankov et al., 2022) and to the bright state of GRO J1008\(-\)57 (Tsygankov et al., 2023), as observed by _IXPE_. The former also shows no significant polarization in the phase-averaged analysis, while the latter shows significant polarization of about 4%. Therefore, it is possible that some other mechanisms beyond those linked to the accretion luminosity are responsible for the observed polarization degree. One qualitative interpretation of the observed low PD is pointed by the complex pulse profile of EXO 2030+375 (see Figs. 6 and 7). Such a complexity may derive from a complex magnetic field geometry where different hot spots simultaneously contribute to the observed emission at different pulse phases. The observed low PD might therefore be interpreted as due to mixing of emission from several parts of NS surface observed at different angles. Another interpretation can be linked to the relation between the magnetic field geometry, in particular, the magnetic obliquity and the observer's line of sight. If the magnetic dipole is nearly aligned with the rotation axis and the observer looks from the side (as seems to be the case for Her X-1 and Cen X-3), the changes in the PA with the pulsar phase are rather small and the average polarization is significant. On the other hand, for a highly inclined dipole (especially when observed at small inclinations), the variations of the dipole position angle (that is reflected in the PA) are large, resulting in a strongly reduced average polarization. This interpretation is in line with the results obtained for the system geometry in EXO 2030+375 and further discussed in the next section. ### Geometry of the system The polarimetric quantity PA can be exploited to constrain the geometry of the system by fitting the unbinned polari \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Phase & \(N_{\rm H}\) & \(\Gamma\) & Norm\({}^{a}\) & PD & PA & \(\chi^{2}\)/d.o.f. \\ & (\(10^{22}\) cm\({}^{-2}\)) & & & (\%) & (deg) & \\ \hline 0.00–0.18 & \(2.9\pm 0.1\) & \(1.42\pm 0.02\) & \(4.9\pm 0.2\) & \(3.0\pm 1.1\) & \(45\pm 9\) & 991/1037 \\ 0.18–0.26 & \(3.2\pm 0.2\) & \(1.53\pm 0.04\) & \(2.3\pm 0.1\) & \(2.1\pm 1.5\) & \(81\pm 21\) & 1015/1098 \\ 0.26–0.35 & \(3.4\pm 0.1\) & \(1.44\pm 0.03\) & \(3.2\pm 0.1\) & \(6.2\pm 1.3\) & \(62\pm 6\) & 938/974 \\ 0.35–0.44 & \(2.3\pm 0.1\) & \(1.07\pm 0.03\) & \(1.4\pm 0.1\) & \(4.8\pm 1.5\) & \(38\pm 11\) & 754/788 \\ 0.44–0.63 & \(3.34\pm 0.09\) & \(1.17\pm 0.02\) & \(6.4\pm 0.2\) & \(6.3\pm 0.7\) & \(-6.5\pm 3.3\) & 878/905 \\ 0.63–0.72 & \(2.5\pm 0.1\) & \(1.22\pm 0.03\) & \(2.4\pm 0.1\) & \(2.9\pm 1.1\) & \(41\pm 11\) & 1009/1048 \\ 0.72–1.00 & \(3.06\pm 0.07\) & \(1.30\pm 0.02\) & \(10.9\pm 0.2\) & \(2.7\pm 0.6\) & \(89\pm 5\) & 1103/1222 \\ \hline \end{tabular} **Notes.** All reported errors are at 68% confidence level. \({}^{(a)}\) Normalization of the power law in units of \(10^{-2}\) photon keV\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\) at 1 keV as obtained from DU1. \end{table} Table 3: Best-fit results of the spectro-polarimetric analysis of the phase-resolved _IXPE_ data of EXO 2030+375 using the const\(\times\)tbabs(powerlaw\(\times\)polconst) model in the 2–8 keV energy band. 
metric measurements from individual photoelectric angles with the rotating-vector model (RVM, Radhakrishnan & Cooke 1969; Poutanen 2020). If radiation escapes in the O-mode, the RVM describes the PA as follows: \[\tan(\mathrm{PA}-\chi_{\mathrm{p}})\!=\!\frac{-\sin\theta\ \sin(\phi-\phi_{0})}{ \sin i_{\mathrm{p}}\cos\theta\!-\!\cos i_{\mathrm{p}}\sin\theta\cos(\phi\!-\! \phi_{0})}, \tag{1}\] where \(i_{\mathrm{p}}\) is the pulsar inclination (the angle between the pulsar spin vector and the line of sight), \(\chi_{\mathrm{p}}\) is the position angle (measured from north to east) of the pulsar spin, \(\theta\) is the magnetic obliquity (the angle between the magnetic dipole and the spin axis), \(\phi\) is the pulse phase, and \(\phi_{0}\) is the phase when the northern magnetic pole is closest to the observer. The other pole makes its closest approach half a period later. Using the RVM fit to the unbinned Stokes parameters on a photon-by-photon basis (Gonzalez-Caniulef et al. 2023) and running Markov chain Monte Carlo (MCMC) simulations, we obtained estimates of the pulsar inclination, namely: \(i_{\mathrm{p}}=129^{+9}_{-7}\) deg, along with the co-latitude of the magnetic pole (or magnetic obliquity), \(\theta=59^{+5}_{-6}\) deg, and the position angle of the pulsar spin, \(\chi_{\mathrm{p}}=\chi_{\mathrm{p,O}}=-30\pm 5\) deg (see Fig. 8). With the pulsar inclination and magnetic obliquity angles being almost supplementary, \(i_{\mathrm{p}}+\theta\approx 180^{\circ}\), the southern magnetic pole swings close to the observer line of sight at each pulsar rotation at half a period from phase \(\phi_{0}/(2\pi)\), that is: at \(\phi=0.11^{+0.02}_{-0.01}\). Interestingly, in the case of EXO 2030+375 the RVM suggests a relatively high magnetic obliquity. Other XRPs (e.g., Her X-1, Cen X-3) show \(\theta\approx 15^{\circ}\), while a value of \(\theta\approx 60^{\circ}\) observed from EXO 2030+375 is closer to the orthogonal rotator GRO J1008\(-\)57 (Tsygankov et al. 2023). These results indicate that EXO 2030+375 stands in between the bimodal distribution peaking at \(0^{\circ}\) and \(90^{\circ}\) of the magnetic obliquity expected for isolated NSs (Dall'Osso & Perna 2017; Lander & Jones 2018), although such results do not necessarily apply to accreting XRPs (Biryukov & Abolmasov 2021). The orbital inclination can be obtained from the orbital parameters measured by Wilson et al. (2008). For the optical companion stellar mass in the range 17-20 \(M_{\odot}\), cor Figure 6: Phase-resolved results of EXO 2030+375 in the 2–8 keV range, combining data from all _IXPE_ DUs. From top to bottom, we show the pulse profile, normalized Stokes parameters \(g\) and \(u\) based on the polarimetric analysis, and the PD and PA, as obtained from the spectro-polarimetric analysis. The blue line in the PA panel corresponds to the best-fit rotating vector model (see Sect. 4.2). Figure 7: Best-fit parameters for the broadband (\(2-\)50 keV) phase-resolved spectra of EXO 2030+375 as observed by _HXMT_. Panels from top to bottom show the pulse profile as observed by _HXMT_/LE in the 2–10 keV energy band; the column density, \(N_{\mathrm{H}}\); the power-law photon index \(\Gamma\); the cutoff energy; and the folding energy (both in keV). responding to B0V spectral class (Coe et al. 1988), and assuming a NS mass of 1.4 \(M_{\odot}\), the inclination is in the range 49\({}^{\circ}\)-55\({}^{\circ}\)(see also Laplace et al. 2017). 
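To make Eq. (1) concrete, the sketch below simply evaluates the RVM position angle over one rotation at the best-fit angles quoted above; it is an illustration of the formula only, not the unbinned photon-by-photon fit of Gonzalez-Caniulef et al. (2023) used for the actual parameter estimation:

```
# Rotating-vector model of Eq. (1): position angle versus pulse phase (O-mode)
rvm_pa <- function(phi, i_p, theta, chi_p, phi0) {
  num <- -sin(theta) * sin(phi - phi0)
  den <- sin(i_p) * cos(theta) - cos(i_p) * sin(theta) * cos(phi - phi0)
  chi_p + atan2(num, den)      # radians; PA is defined modulo pi
}
deg  <- pi / 180
phi  <- 2 * pi * seq(0, 1, by = 0.01)   # pulse phase 0..1
phi0 <- 2 * pi * (0.11 - 0.5)           # northern-pole phase: the southern pole passes closest at phase 0.11
pa_deg <- rvm_pa(phi, i_p = 129 * deg, theta = 59 * deg,
                 chi_p = -30 * deg, phi0 = phi0) / deg
```

Plotted against phase, pa_deg should reproduce, up to the rounding of the quoted angles, the smooth swing overplotted on the PA panel of Fig. 6.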
This orbital inclination range is consistent with the pulsar inclination derived through the RVM fit, because the sense of rotation cannot be determined from the X-ray pulse arrival times (i.e., solutions in the range \(i_{\rm orb}=\)125\({}^{\circ}\)-131\({}^{\circ}\) are equally probable).

### _HXMT_ phase-resolved spectral results

Spectral parameters are expected to show pulse phase-dependence due to the highly anisotropic accretion geometry in XRPs. We therefore performed phase-resolved spectroscopy of the _HXMT_ data of EXO 2030+375 (see Fig. 7). Phase-resolved spectroscopy of EXO 2030+375 was also performed in earlier works (Klochkov et al. 2008; Naik & Jaisawal 2015; Tamang et al. 2022). However, although the main continuum model used in past works is similar to the one adopted here, several important differences prevent a direct comparison. In fact, XRP spectra are known to be luminosity-dependent (Mushtukov & Tsygankov 2022) and, as a consequence, different spectral components can be adopted to fit the data collected at different luminosity levels. For EXO 2030+375, the main continuum model was modified in different works with additional components such as a Gaussian absorption line around 10 keV (Klochkov et al. 2008) or a partial covering component (Naik & Jaisawal 2015; Tamang et al. 2022). Therefore, only a qualitative comparison can be made between the results obtained here and those from previous works.

Figure 8: Corner plot of the posterior distribution for the RVM parameters for the pulsar geometry obtained using the PA values. Parameters are the PD of radiation escaping from the magnetic pole, pulsar inclination, \(i_{\rm p}\); magnetic obliquity, \(\theta\); position angle, \(\chi_{\rm p}\); and the phase, \(\phi_{\rm 0}\). 2D contours correspond to 68%, 95% and 99% confidence levels. The histograms show the normalized 1D distribution for a given parameter derived from the posterior samples. The mean value and 1\(\sigma\) confidence levels for the derived parameters are reported above the corresponding histogram.

For example, observations of EXO 2030+375 carried out by Klochkov et al. (2008) show that the power-law photon index reaches a minimum around the main pulse profile peak (corresponding to the broad main peak at \(\phi\)\(\sim\)0.8 in Fig. 7). A similar trend has also emerged from the phase-resolved results observed in Her X-1 (Vasco et al. 2013). Here, in contrast, we observe a maximum value of the photon index around the same peak (see Fig. 7). This is likely a consequence of the luminosity difference, as the above-mentioned works derive their results in the high-luminosity accretion regime (\(10^{37-38}\) erg s\({}^{-1}\), that is near or above the critical luminosity), whereas in the present work the source has been observed at sub-critical luminosity (\(\sim 4\times 10^{36}\) erg s\({}^{-1}\)). Such a difference is reflected in two main aspects. On the one side, the accretion structure beaming pattern is expected to drastically change at different regimes. In fact, the EXO 2030+375 pulse profile observed in the high-luminosity regime (see, e.g., the 2-9 keV panel in Fig. 2 of Klochkov et al. 2008) exhibits substantial differences from what is observed in the present work (see top panel in Fig. 7). This can lead to opposite observational signatures if the observer looks through the optically deep walls of the accretion column in the super-critical regime or through the optically thin hot-spots in the sub-critical regime (Mushtukov et al. 2015; Becker & Wolff 2022).
On the other hand, opposite luminosity-dependences of spectral parameters have been observed in different accretion regimes in XRPs, depending on whether a gas-mediated or a radiation-dominated shock is responsible for the infalling plasma deceleration (Klochkov et al. 2011, and references therein). Although such behavior has generally been observed in the pulse-averaged analysis (see, e.g., Muller et al. 2013; Reig & Nespoli 2013; Malacaria et al. 2015; Diez et al. 2022), pulse-to-pulse spectroscopy hints at the possibility that similar trends are at work on shorter timescales (Klochkov et al. 2011; Vybornov et al. 2017; Muller et al. 2013) and that even phase-resolved spectroscopy exhibits a pulse-phase dependence of most parameters on luminosity (Lutovinov et al. 2016). The system geometry derived in Sect. 4.2 shows that the southern magnetic pole swings close to the observer line of sight at phase 0.1 (that is, half a period from \(\phi_{0}\)). As the observation is carried out at sub-critical accretion, an accretion column with emitting walls contributing at neighbor phases is not expected. Thus, the main pulse profile peak at phase 0.8 is perhaps due to light bending from pencil beam emission at the magnetic poles. Such emission is generated in an optically thin environment at the hot-spot and it is therefore intrinsically soft, leading to a maximum of the photon index. However, this scenario would likely produce a symmetrical behavior of the spectral parameters dependence around the phase \(\phi_{0}\), which is not observed here. This result, together with the highly-structured pulse profile, hints to a more complex NS configuration, such as a multipolar or asymmetric topology of the magnetic field. This kind of magnetic field configuration has also been recently proposed for other XRPs (Postnov et al. 2013; Tsygankov et al. 2017; Israel et al. 2017; Monkkonen et al. 2022). ## 5 Summary Our main results can be summarized as follows: * EXO 2030+375 was observed in November 2022 by _IXPE_, _HXMT_ and ART-XC at the peak of a low-luminosity Type I outburst. * Only a low polarization degree of 0%-3% has been found in the phase-averaged analysis, while the phase-resolved analysis reveals a PD in the range of 2%-7%. * The observed low PD can be explained in terms of an overheated NS atmosphere scenario, with additional depolarizing mechanisms possibly at work in EXO 2030+375. We propose that mixing of emission from several parts of the NS surface observed at different angles, on one hand, and variations of the dipole position angle resulting in changes in the PA on the other, would lead to further depolarization. * By means of the rotating vector model, we constrained the geometry of the accreting pulsar. The pulsar inclination is \(\sim\)130\({}^{\circ}\), almost supplementary to the magnetic obliquity angle, at \(\sim\)60\({}^{\circ}\)(that is, their sum is \(\sim\)180\({}^{\circ}\)). The obtained pulsar geometry implies that the magnetic axis swings close to the observer line of sight and the system obliquity stands between orthogonal and aligned rotators. * The spectral phase-resolved analysis shows evidence that the pulse phase dependence of spectral parameters is different for different luminosities. This possibly reflects changes in the accretion structure at different accretion regimes, accompanied by beam pattern changes. 
* Polarimetric, spectral, and timing analyses all hint toward a complex accretion geometry, where magnetic multipoles with asymmetric topology and gravitational light bending have significant effects on the resulting spectral and timing behavior of EXO 2030+375. Our analysis of EXO 2030+375 characterized the X-ray polarimetric and spectral properties of the source at the sub-critical accretion regime. Additional future observations at different luminosities would help discerning the various mechanisms at work that shape the X-ray emission properties. ###### Acknowledgements. The Imaging X-ray Polarimetry Explorer (IXPE) is a joint US and Italian mission. The US contribution is supported by the National Aeronautics and Space Administration (NASA) and led and managed by its Marshall Space Flight Center (MSFC) with industry partner Ball Aerospace (contract NNMI15AA18C). The Italian contribution is supported by the Italian Space Agency (Agenia Spaziale Italiana, ASI) through contract ASI-OHBI-2017-12-10, agreements ASI-INAF-2017-12-H0 and ASI-INFN-2017-13-H0, and its Space Science Data Center (SSDC) with agreements ASI-INAF-2022-14-HH.0 and ASI-INFN 2021-43-HH.0, and by the Istituto Nazionale di Astrofisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) in Italy. This research used data products provided by the IXPE Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. We acknowledge extensive use of the NASA Abstract Database Service (ADS). This research was supported by the International Space Science Institute (ISISI) in Bern, through ISSI International Team project #495. JH acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery Grant, the Canadian Space Agency through the co-investigator grant program, and computational resources and services provided by Compute Canada, Advanced Research Computing at the University of British Columbia, and the SciServer science platform (www.sciserver.org). We also acknowledge support from the Academy of Finland grants 333112, 349144, 349373, and 349906 (SST, JP), the German Academic Exchange Service (DAAAD) travel grant 57525212 (VD, VFS), the Vaisala Foundation (SST), the Russian Science Foundation grant 19-12-00423 (AAL, IAM, SVM, AES), the French National Centre for Scientific Research (CNRS), and the French National Centre for Space Studies (CNES) (POP). We thank Lingda Kong and Youli Tuo for their helpful assistance in the HXMT data analysis.
2304.07092
Obfuscation of Discrete Data
Data obfuscation deals with the problem of masking a data-set in such a way that the utility of the data is maximized while minimizing the risk of the disclosure of sensitive information. To protect data we address some ways that may as well retain its statistical uses to some extent. One such way is to mask a data with additive noise and revert to certain desired parameters of the original distribution from the knowledge of the noise distribution and masked data. In this project, we discuss the estimation of any desired quantile and range of a quantitative data set masked with additive noise.
Saswata Naha, Sayantan Roy, Arkaprava Sanki, Diptanil Santra
2023-04-14T12:25:01Z
http://arxiv.org/abs/2304.07092v1
# Obfuscation of Discrete data ###### Abstract Data obfuscation deals with the problem of masking a data-set in such a way that the utility of the data is maximized while minimizing the risk of the disclosure of sensitive information. To protect data we address some ways that may as well retain its statistical uses to some extent. One such way is to mask a data with additive noise and revert to certain desired parameters of the original distribution from the knowledge of the noise distribution and masked data. In this project, we discuss the estimation of any desired quantile and range of a quantitative data set masked with additive noise. ## Introduction Privacy protection and data security have recently received a substantial amount of attention due to the increasing need to protect various sensitive information like economic data and medical data. In case of continuous data, a very famous example is publication of marks of students. An Institute would like to publish the performance of its students without disclosing the actual marks. Whereas, for discrete case, data on economic class study can be used, where it is of national interest to know about the economic status of the country, not disclosing the sensitive information of income of any individual. We are interested in the obfuscation of discrete case only and the desired estimations from such a data. ## Basic Problem Our main goal through this project is to construct a way of obfuscation of discrete data using some additive noise in such a way that the probability of estimating the actual data given the obfuscated data is quite low. However, given the nature of the noise added to the original data, we can estimate certain population statistics with low error. Suppose, \(X\) be the data vector containing the individual values of the population. We want to publish the distribution of \(X\), \(p_{i}=\mathbf{P}(X=x_{i})\,\forall\,x_{i}\in\) range of \(X\). For additive model, we add noise vector \(Y\) with \(q_{j}=\mathbf{P}(Y=y_{j})\,\forall\,y_{j}\in\) range of \(Y\) depending on the range of \(X\) and get the masked data \(Z=X+Y\). Our aim is to adjust \(Z\) accordingly to publish the obfuscated data \(Z^{\prime}\) such that \(\mathbf{P}(X\mid Z^{\prime})\approx 0\). As for the next part of the project we want to estimate certain statistical properties of the true data viz. **Median** or other quantiles and **Range**. For this, we would like to estimate the actual probabilities \(p_{0},p_{1},\cdots\). Note that, \(\mathbf{P}(Z=z_{k})=\sum_{i=0}^{k}\mathbf{P}(X=x_{i})\mathbf{P}(Y=z_{k}-x_{i}) =\sum_{i=0}^{k}p_{i}q_{j}\). Estimating \(\mathbf{P}(Z=z_{k})\) from the obfuscated data, we can try to solve for \(p_{i}\)'s from the given data of \(q_{j}\)'s. ## Dataset We have generated some poisson datasets and added some noise to them. Then truncated that in such a way that the probability of estimating the actual data from the obfuscated data is quite low. At first, 10 lakhs data from poisson with parameter generated from exponential (2.5). As we have mentioned before that economic data are more sensitive in nature and instead of publishing the actual data we want to obfuscate it before publishing it. This kind of data usually take positive values and they are positively skewed. We have generated the data in such a way that it can be compared with an economic data. Here in the data set, the variable in the X-axis denotes different economic class. 
The whole population is divided into 12 economic classes where \(0^{th}\) class denote the lowest income group and \(12^{th}\) class denote the highest income group. The variable in Y-axis denotes the frequency of each class. Then we obfuscate it in the following way: We add an additive noise from unif(0,10). Then we collect the household size data set from the website of "Census of India" and have used the data for the whole Indian region. The data is shown in the diagram below - Here X-axis denotes the household size and Y-axis denotes the frequency corresponding to each household size. Then we similarly add an additive noise from unif(1,10). ## Goodness of Obfuscation In this section, we want to show that using this obfuscation procedure, probability of getting back the actual data from the published obfuscated data is quite low. Let X,Y,Z be the values of the actual data, added noise and values of obfuscated data respectively. Now, we have to calculate \(P(X|Z)\). We know, \[\mathbf{P}(X=x|Z=z)=\frac{\mathbf{P}(X=x,Z=z)}{\mathbf{P}(Z=z)}\] To calculate the probabilities in the numerator and denominator separately, first we construct a matrix with the values of actual data (X) in the rows and the corresponding error values (Y) in the columns. So the \((i,j)^{th}\) element of the matrix represents number of individuals in class **i** has taken error values **j**. So the size of the matrix is \(12\times 10\). For our calculation, as the values of X have been fixed as x, the row wise sum of the matrix is fixed for all rows. Also note that, the sum of the off-diagonals of the matrix represents the values of Z. Fixing these values according to our data, we get the total number of possible cases, from which we can calculate the exact probability. Next, for the probability in the denominator we take all possible cases for X and Y such that the data in our case is valid in real situation. The computations are very cumbersome\({}^{*}\), however, the actual conditional probability turns out to be of the order \(\mathcal{O}(10^{-10^{6}})\). So we can conclude that, since this probability in itself is very small, if we further mask the obfuscated data, the probability of getting the true data conditioned on that will be even smaller. So this method of obfuscation can be said to be satisfactory. [* Cumbersome in the sense that, we have made an algorithm to count the number of matrices which satisfy the required conditions. In this way we got the proportion of matrices satisfying the conditions which will be close to the required probability.] ## Different approaches to estimate quantiles Quantiles are robust statistics and hence is not much affected by the presence of outlying data-points. To do the estimation of quantiles we first estimate the c.d.f. of X from the obfuscated data using different approaches. ### First Approach First we try the most common method to estimate quantiles that is, using **least square** method from the complete obfuscated data. Moreover, we know that the least square method gives an unbiased estimate of parameter. So we shall use this in the following way: We know that for a least square model, \[\tilde{Y}=X\tilde{\beta}+\tilde{\epsilon},\,\text{where,}\,\,\tilde{\epsilon} \sim\mathcal{N}(\tilde{0},\sigma^{2}I),\] We get the solution as \(\hat{\beta}=(X^{\prime}X)^{-1}X^{\prime}\tilde{Y}\). 
However, as the components of \(\hat{\beta}\) are probabilities and add up to 1, implementing the constraint of \(\tilde{\beta}^{\prime}\tilde{1}=1\), we get the solution as \[\hat{\beta}=(X^{\prime}X)^{-1}(X^{\prime}\tilde{Y}-\frac{\tilde{1}^{\prime}(X ^{\prime}X)^{-1}X^{\prime}\tilde{Y}-1}{\tilde{1}^{\prime}(X^{\prime}X)^{-1} \tilde{1}}).\] The result for our data set is like the following - \(\ ### Second Approach Since the first approach failed, we shall try MLE though it is not unbiased. But we know that MLE maximizes the likelihood (i.e, probability of the value of the parameter is the maximum according to the sample) and consistent. Also MLE belongs to the parameter space. Therefore, our next attempt is using **MLE** with truncation. Suppose, \(\tilde{p}\) and \(\tilde{r}\) be the probability distributions of the actual data and the obfuscated data respectively. Maximizing the likelihood w.r.t \(\tilde{r}\) we get the MLE estimates as \(r_{i}=\dfrac{n_{i}}{\Sigma n_{j}}\;\forall\;i\). So, solving w.r.t \(\tilde{p}\) we get- \[p_{0}=\dfrac{n_{0}}{\sum n_{i}}*11,\,p_{j}=\dfrac{n_{j}-n_{j-1}}{\sum n_{i}}*11 \text{ for j = 1,2,...,11 and }p_{12}=1-\sum p_{j}.\] Similarly as the above, we can also estimate \(\tilde{p}\) as - \[p_{12}=\dfrac{n_{22}}{\sum n_{i}}*11,\,p_{j}=\dfrac{n_{j+11}-n_{j+10}}{\sum n_{i }}*11\text{ for j = 1,2,...,11 and }p_{0}=1-\sum p_{j}\] Using R, both of the estimates are calculated, the results are shown below - > p1 > p2 [,1] [1,] [1,1] [1,1] [1,1] [1,1] [1,1] [2,1] 0.245828 [2,1] 0.270787 [3,1] 0.267086 [4,1] 0.193842 [4,1] 0.194051 [5,1] 0.105688 [6,1] 0.049566 [7,1] 0.020677 [8,1] 0.008503 [9,1] 0.001782 [10,1] 0.000363 [11,1] 0.00003345 [12,1] 0.000033 [13,1] 0.118515 [13,1] 0.000011 However, in this case as well, some of the estimates turn negative and one way to deal in such situation is combine the above to estimates to get a better estimate. A general technique in this combination is depending on data. Take that part of first estimate where the height of the columns of obfuscated data are increasing and that part of second estimate where the height of columns are decreasing. For our data set, the combined estimate is as following - > p3 [,1] [1,1] 0.110374 [2,1] 0.245828 [3,1] 0.270787 [4,1] 0.193842 [5,1] 0.105688 [6,1] 0.049566 [7,1] 0.016864 [8,1] 0.004840 [9,1] 0.001782 [10,1] 0.0000363 [11,1] 0.000033 [12,1] 0.0000022 [13,1] 0.000011 [Necessary R codes are given in Appendix section.] Now, there is two issues regarding this method - First of all, this method is on non-truncated obfuscated data, where there is a data leak in the extreme values. Secondly, the technique of the combination as mentioned above, failed to work if there are ups and downs in frequency all over the range. To avoid the first issue, we shall work on the truncated obfuscated data and for the second issue, we have to somehow extend our MLE approach to Constrained MLE. ### Third Approach For the constrained MLE we need to include the constraints \(p_{i}\geq 0\ \forall\ i\) i.e \(r_{1}\leq r_{2}\leq r_{3}\leq...\leq r_{n}\). For that we are transforming the unconstrained vector \((u_{1},u_{2},...,u_{n})\) to constrained vector \((r_{1},r_{2},...,r_{n})\) by using nested logistic transformation - \[r_{k}=T_{k}(u)=\frac{1}{1+\sum_{i=k}^{n}e^{-u_{i}}}\ \forall\ k=1,2,...,n\] Now by replacing \((r_{1},r_{2},...,r_{n})\) in our likelihood function and differentiating it w.r.t. 
each \(u_{i}\), we get - \[\frac{\delta l}{\delta u_{k}}=-n_{0}\frac{\sum_{j=1}^{k}\frac{e^{-u_{k}}}{(1+ \sum_{i=j}^{n}e^{-u_{i}})^{2}}}{1-\sum_{j=1}^{n}\frac{1}{1+\sum_{i=j}^{n}e^{-u_ {i}}}}+\sum_{j=1}^{k}\frac{n_{j}.e^{-u_{k}}}{1+\sum_{i=j}^{n}e^{-u_{i}}}=0\ \forall\ k=1,2,...,n\] To solve these equations we tried Sequential Quadratic Programming method for which we need to find the gradient and hessian matrix by differentiating the system of equations w.r.t \(u_{i}\)'s once and twice respectively. But it was very difficult to calculate. So we have tried another method. ### Fourth approach We have seen that obtaining the probabilities analytically was difficult and problematic. So we shall now try numerical method. In this method also we shall try to use constrained MLE by calculating the \(p_{i}\)'s in iterative procedure. The steps are given below: **Step 1:** Define the likelihood function as per the obfuscation procedure. **Step 2:** Initialize \(p_{i}\)'s such that \(p_{i}\geq 0\ \forall\ i=1,2,...,n\) and \(\sum p_{i}=1\) **Step 3:** To determine the values of \(p_{i}\)'s we want the precision of the values of \(p_{i}\)'s upto third digit (say). In each step of the iteration we fix the sum and so we update \(p_{i}\) by \(sum\times\frac{1}{1000}\) and \(p_{i+1}\) by \(sum\times\frac{999}{1000}\) and check whether the value of the likelihood function is greater or not. In this way we shall update \(p_{i}\) and \(p_{i+1}\) by \(sum\times\frac{j}{1000}\) and \(p_{i+1}\) by \(sum\times\frac{(1000-j)}{1000}\)\(\forall\ j=0,1,2,...,1000\) and compare with the likelihood value. In this way we shall get the values of \(p_{i}\) and \(p_{i+1}\) for which likelihood function will be maximum. We do the same thing for different values of i and repeat the whole process k times. Where k is sufficiently large. **Correctness of the above process:** In the process we are keeping the sum fixed and vary the 'j' to see for which proportion of'sum' should be \(p_{i}\). So the sum of the \(p_{i}\)'s are always be 1 and they all are positive. [Thus we avoided the problems occurred in the previous methods.] In each step we are comparing the value of likelihood function in current step with that of the previous step. And as we are taking k to be sufficiently large (i.e we are repeating the iterative process sufficient no. of times), the process will return the correct constrained MLE values. ### Implementation 1. We first implement this procedure to the generated data set which we have used in previous approaches. Here the log likelihood function is \(L=n[1]\times log(x[1])+n[2]\times log(x[1]+x[2])+\cdots+n[11]\times log(x[1]+x[ 2]+x[3]+\cdots+x[11])+n[12]\times log(x[2]+x[3]+\cdots+x[12])+\cdots+n[23] \times log(x[13])\) where, \(x[i]\) be the probaility of \(i^{th}\) class and \(n[i]\) be the observed count in that class. [Necessary R codes are given in Appendix section.] The results are as following - > p [1] 1.112308e-01 2.460090e-01 2.697332e-01 1.939751e-01 1.081224e-01 [6] 4.567903e-01 1.823512e-02 4.820237e-03 1.764786e-03 3.644619e-04 [11] 3.292022e-05 2.189045e-05 1.108924e-05 > table(U)/sum(table(U)) U 0 1 2 3 4 5 6 7 0.112400 0.244873 0.268106 0.195803 0.107910 0.046550 0.017189 0.005198 8 9 10 11 12 0.001498 0.000372 0.000079 0.000017 0.000005 The vector 'p' gives the estimate of \(p_{i}\)'s using the iterative method and 'table(u)/sum(table(u))' shows the proportions of values from actual data. We can observe that the estimation is quite well i.e the values in the two vectors are close enough. 
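For reference, the update described in Steps 1-3 can be written compactly as below. This is only a sketch of the search, assuming a log-likelihood function L(x, n) of the kind listed in the Appendix, with x the vector of class probabilities and n the observed counts of the obfuscated values:

```
# Coordinate-wise constrained MLE: sum(p)=1 and p>=0 are preserved because only the
# mass of each adjacent pair (p[j], p[j+1]) is redistributed over a grid of 'num' splits.
constrained_mle <- function(L, n, K, num = 1000, sweeps = 50) {
  p    <- rep(1, K) / K                  # Step 2: feasible starting point
  best <- L(p, n)
  for (k in 1:sweeps) {                  # Step 3: repeat the sweeps sufficiently often
    for (j in 1:(K - 1)) {
      s <- p[j] + p[j + 1]               # the pair mass is kept fixed
      for (i in 0:num) {
        p1 <- p
        p1[j]     <- s * i / num
        p1[j + 1] <- s * (1 - i / num)
        if (L(p1, n) > best) { p <- p1; best <- L(p1, n) }
      }
    }
  }
  p
}
```

With the likelihood of Implementation 1 and the observed obfuscated counts, constrained_mle(L, n, K = 13) should return a vector of the kind reported above as p.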
That's why quantiles can be estimated with high accuracy by taking the cumulative sums of the vector p. 2. To check the consistency we generated a data set in the same way as the previous and implemented the procedure for that data set. The results are shown below: > p [1] 3.838245e-02 1.263491e-01 2.059402e-01 2.194853e-01 1.812095e-01 [6] 1.180739e-01 6.191681e-02 3.006158e-02 1.188655e-02 4.477023e-03 [11] 1.680210e-03 3.311785e-04 1.432893e-04 3.291323e-05 0.000000e+00 [16] 2.197540e-05 > table(U)/sum(table(U)) U 0 1 2 3 4 5 6 7 0.038949 0.125894 0.205321 0.220638 0.180763 0.117134 0.063574 0.029280 8 9 10 11 12 13 14 15 0.012061 0.004366 0.001413 0.000455 0.000114 0.00030 0.000004 0.000002 16 0.000002 Here also the vector 'p' gives the estimate of \(p_{i}\)'s and 'table(u)/sum(table(u))' shows the proportions of values from actual data. We can observe that in first few components the estimation is quite well but in last part the difference between the estimated values and the actual values are not that small and also there is a 0(zero) in \(15^{th}\) component. To describe this observation we go through the diagram of obfuscated data. We can notice that there is no observation for the value 24 which is affecting the likelihood function and consequently giving a poor estimate near to corresponding value in actual data. So for such cases we suggest to merge the class with any of the neighbour class and modify the likelihood function accordingly. 3. Now we shall try to implement the procedure for the household size data set. Before implementing, we need to construct the raw data set from the summarize data. The main issue for construction is that there are some classes in the data set that contain a class of values and also the last class is unbounded. So we need to think about some techniques about breaking the classes in discrete points in order to construct the raw data set that we need. We have considered poisson distribution with parameter average household size (i.e 4.8) as the best fit for the data. We break those classes into discrete values according to the proportion of expected frequencies corresponding to each value. After breaking the data set is like following: > table(data) data 1 2 3 4 5 6 7 8 9 10 11 101943 241341 343813 568300 462313 309346 188353 113012 60273 28931 30788 12 13 14 15 16 17 18 19 20 21 22 12315 4547 1559 12190 3656 1032 275 69 16 3 1 Here, the likelihood function is - \(L=n[1]\times log(x[1])+n[2]\times log(x[1]+x[2])+\cdots+n[10]\times log(x[1 ]+x[2]+\cdots+x[10])+n[11]\times log(x[2]+x[3]+\cdots+x[11])+\cdots+n[22] \times log(x[13]+x[14]+\cdots+x[22])+\cdots+n[29]\times log(x[20]+x[21]+x[22])\) where, \(x[i]\) be the probaility of \(i^{th}\) class and \(n[i]\) be the observed count in that class. 
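(As an aside, the class-splitting step described above — distributing the count of a grouped census class over its member sizes in proportion to the expected Poisson(4.8) frequencies — can be sketched as follows; the grouped class and its total count used here are purely illustrative.)

```
# Split the count of a grouped class over its member sizes, proportionally to Poisson(4.8)
split_class <- function(sizes, total, lambda = 4.8) {
  w <- dpois(sizes, lambda)
  round(total * w / sum(w))              # counts assigned to each individual size
}
split_class(sizes = 6:8, total = 1000)   # e.g. a class "6-8" reported with a single count
```

With the raw data reconstructed in this way and the likelihood above, the estimation proceeds exactly as before.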
The results are as following: > p [1] 4.075434e-02 9.749157e-02 1.380606e-01 2.298257e-01 1.855334e-01 1.228633e-01 7.559650e-02 [8] 4.711241e-02 2.400597e-02 1.083759e-02 1.295609e-02 4.880236e-03 1.700121e-03 7.594322e-04 [15] 4.865951e-03 1.525657e-03 3.773305e-04 1.244388e-04 3.227236e-05 4.029507e-06 0.000000e+00 [22] 0.000000e+00 > n/sum(m) [1] 4.103860e-02 9.715524e-02 1.380606e-01 2.287772e-01 1.861107e-01 1.245316e-01 7.582417e-02 [8] 4.549455e-02 2.426375e-02 1.164655e-02 1.239415e-02 4.957578e-03 1.830459e-03 6.275975e-04 [15] 4.907257e-03 1.471775e-03 4.154462e-04 1.107051e-04 2.777693e-05 6.441027e-06 1.207693e-06 [22] 4.025642e-07 Similarly as the generated data for this household size data set also this procedure gives a quite good estimate of the distribution as we can see in the result. Here we are using the obfuscated data set for our estimation but to maintain the data privacy at extreme values we need to do truncation. So our next aim is to implement our procedure for the truncated data set. We shall use our household size data set and will truncate the obfuscated data at 22. The truncated obfuscated data set will be as following: In this case the log likelihood function is -\(L=n[1]\times log(x[1])+n[2]\times log(x[1]+x[2])+\cdots+n[10]\times log(x[1]+x[2]+ \cdots+x[10])+n[11]\times log(x[2]+x[3]+\cdots+x[11])+\cdots+n[21]\times log(x[1 2]+x[14]+\cdots+x[21])+n[22]\times log(10-10\times(x[1]+\cdots+x[12])-9\times x[1 3]-\cdots-2\times x[20]-x[21])\) where, \(x[i]\) be the probability of \(i^{th}\) class and \(n[i]\) be the observed count in that class. The result are shown below - > p [1] 4.075108e-02 9.685235e-02 1.382262e-01 2.325940e-01 1.849859e-01 [6] 1.219021e-01 7.558322e-02 4.703516e-02 2.397664e-02 1.082255e-02 [11] 1.300026e-02 4.212354e-04 3.934297e-03 6.935544e-03 2.925312e-03 [16] 5.059034e-05 0.000000e+00 0.000000e+00 0.000000e+00 [21] 0.000000e+00 0.000000e+00 0.000000e+00 > n/sum(m) [1] 4.103860e-02 9.715524e-02 1.384068e-01 2.287772e-01 1.861107e-01 [6] 1.245316e-01 7.582417e-02 4.549458e-02 2.426375e-02 1.164658e-02 [11] 1.239415e-02 4.957578e-03 1.830459e-03 6.275975e-04 4.907257e-03 [16] 1.471775e-03 4.154462e-04 1.107051e-04 2.777693e-05 6.441027e-06 [21] 1.207693e-06 4.025642e-07 As we can see from the result, the first few estimated probabilities are very close to that of actual data but the estimation gets worse at the end. This is because the fact that we truncated the observations at 22 and hence considering all the observations that have value more than or equal 22 as same quantity. Although if we focus on estimation of quantiles, we proceed to check the cumulative sum of each vector and the result are as following - > cumsum(p) [1] 0.04075108 0.13760344 0.27582961 0.50842362 0.65341311 0.81531520 [7] 0.89089842 0.93793358 0.96191022 0.97273276 0.98573302 0.98615426 [13] 0.99008855 0.99702410 0.99994941 1.00000000 1.00000000 1.00000000 [19] 1.00000000 1.00000000 1.00000000 > cumsum(n/sum(n)) [1] 0.0410386 0.1381938 0.2766006 0.5053779 0.6914885 0.8160201 0.8918443 [8] 0.9373389 0.9616026 0.9732492 0.9856434 0.9906009 0.9924314 0.9930590 [15] 0.9979662 0.9994380 0.9998535 0.999642 0.9999919 0.9999984 0.9999996 [22] 1.0000000 As we can observe upto \(99^{th}\) percentile, the estimation is quite accurate. Only the last quantile can't be predicted due to the same reason of truncation. Therefore, our procedure works fine if the data need to be published after truncation. 
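Once an estimate of the class probabilities is available, any desired quantile follows directly from its cumulative sums; a minimal sketch (p is the estimated probability vector from the procedure above and values are the corresponding class labels):

```
# Quantile of the original distribution, read off from the estimated probabilities
est_quantile <- function(p, values, q) {
  values[which(cumsum(p) >= q)[1]]   # smallest value at which the estimated c.d.f. reaches q
}
# e.g. median and 99th percentile for the household-size data:
# est_quantile(p, values = 1:22, q = 0.5);  est_quantile(p, values = 1:22, q = 0.99)
```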
Point to be noted here, in the whole procedure we described, needed the likelihood function to be perfectly defined. Now to define the likelihood function, it is necessary to have a bit of knowledge of the range and if the range is published again we have compromise some data privacy. To maintain the privacy, we suggest to publish a wider range and we have checked that our procedure can be implemented with a knowledge of wider range also. **Implementation:** Here we consider the without truncation case where the actual range was (1,22), instead here we are publishing a wider range (1,25). Therefore, for this case, log likelihood function is - \(L=n[1]\times log(x[1])+n[2]\times log(x[1]+x[2])+\cdots+n[10]\times log(x[1] +x[2]+\cdots+x[10])+n[11]\times log(x[2]+x[3]+\cdots+x[11])+\cdots+n[22] \times log(x[13]+x[14]+\cdots+x[22])+\cdots+n[29]\times log(x[20]+x[21]+x[22]+ \cdots+x[25])\) where, \(x[i]\) be the probaility of \(i^{th}\) class and \(n[i]\) be the observed count in that class. > p2 [1] 4.092541e-02 9.687059e-02 1.383680e-01 2.296320e-01 1.848580e-01 1.255999e-01 7.500836e-02 [8] 4.536684e-02 2.493819e-02 1.146792e-02 1.283319e-02 4.867760e-03 1.911850e-03 2.459943e-04 [15] 5.064380e-03 1.484763e-03 3.879129e-04 1.204920e-04 2.824641e-05 1.206162e-05 8.041082e-06 [22] 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.0000000e+00 0.000000e+00 0.0000000e+00 ] data2/sum(data2) [1] 4.103858e-02 9.715520e-02 1.384067e-01 2.287771e-01 1.861106e-01 1.245316e-01 7.582414e-02 [8] 4.549456e-02 2.426374e-02 1.164658e-02 1.239414e-02 4.957576e-03 1.830459e-03 6.275973e-04 [15] 4.907255e-03 1.47217e-03 4.154461e-04 1.107051e-04 2.77/692e-05 6.441024e-06 1.207692e-06 [22] 4.025640e-07 So we can see from the result that even if we declare a wider range, the estimated probability are close to the actual value and the probabilities corresponding to values 23-25 which are not in the actual data coming to be zero as expected. ## Goodness of estimates To test for the goodness of the estimates we compute the individual variance and MSEs of each component of the estimate. Bootstrapping from the predicted data we numerically generate multiple copies of the raw obfuscated data and numerically calculate the variances and MSEs. The computed vectors are as seen below: > mse [1] 1.153214e-03 1.950502e-03 2.483242e-03 2.110927e-03 1.580662e-03 1.103193e-03 5.785101e-04 3.158853e-04 [9] 1.616583e-04 6.869356e-05 2.579026e-05 1.913528e-05 1.059432e-05 > vari[1] 1.153212e-03 1.950502e-03 2.483238e-03 2.110925e-03 1.580661e-03 1.103192e-03 5.785098e-04 3.158852e-04 [9] 1.616583e-04 6.869356e-05 2.579026e-05 1.913528e-05 1.059432e-05 We note that the variances are of the order \(\mathcal{O}(10^{-3})\) and the corresponding MSEs are very close to them. Hence, we can say that the estimates are of low bias and variance and so it is consistent. ## Approaches for estimating Range Since our actual goal is to hide the information about the raw data. So we want to see whether the estimate of range of the actual data can be estimated sufficiently close to the real value. **Using Law of large number5:** Footnote 5: Nandi Mridul & Roy Bimal [2021] Estimation of extremum of obfuscated data and its error of estimation We first try to estimate the range from the data set that we generated from poisson distribution and added an additive noise Discrete Uniform (0,10) to it. Suppose'W' is the obfuscated data. We shall add the same noise 999 times to the obfuscated data. 
So we have 'W1' (say), which is the data obtained after obfuscating the actual data 1000 times. If we consider 1000 as a large number, we know that the maximum element of the actual data should be close to (the maximum element of 'W1' - 5*1000) [using the law of large numbers]. But we know that the maximum should be 12 according to our data, and we observe the results to be between 450 and 500. The results in R are given below:
> set.seed(13)
> mu=rexp(1,2.5)*13
> mu
[1] 2.187353
> O=c(rep(1,23))
> U=rpois(1000000,mu)
> V=sample(0:10,1000000,TRUE)
> W=U+V
> W1=W
> for(i in 1:999)
+ {
+ W1=W1+sample(0:10,1000000,TRUE)
+ }
> m=max(W1)-5000
> m
[1] 477
We have also observed that if we increase the number of added noises, the resulting maximum value is even farther from what we expect. ### Another approach We know the minimum household size is 1. So here we are trying to estimate the maximum of the household size from the obfuscated data. **Method of finding maximum:** **Step 1:** Estimate the quantiles from the obfuscated data by the previously mentioned method (Fourth approach to estimate quantiles). **Step 2:** The maximum is estimated as the value at the \(100^{th}\) percentile. It may happen that the \(100^{th}\) percentile is not unique. In that case, the maximum will be estimated from the minimum value that is at the \(100^{th}\) percentile. **Implementation:** **Case 1:** (Without Truncation)
> cumsum(p)
[1] 0.04094531 0.13786301 0.27628805 0.50581810 0.6905195 0.81624070 0.89135206 0.93663602
[9] 0.96155976 0.97303164 0.98586836 0.99073746 0.99264984 0.99289421 0.99795769 0.99944331
[17] 0.9998311 0.99997991 0.99999194 1.00000000 1.00000000
> cumsum(data2/sum(data2))
[1] 0.04103858 0.13819378 0.27660052 0.50537765 0.69148823 0.81601979 0.89184393 0.93733850
[9] 0.96160224 0.97324882 0.98564296 0.99060053 0.99243099 0.99305859 0.99796584 0.99943802
[17] 0.99985347 0.99996417 0.99999195 0.99999839 0.99999960 1.00000000
By the procedure described above we check the \(100^{th}\) percentile using our estimated probabilities, and we can see that the \(100^{th}\) percentile occurs first at value 21. [The maximum value of the actual data is 22. So there is a small error in estimation.]
**Case 2:** (Truncated)
> cumsum(p1)
[1] 0.04092044 0.13731780 0.27546390 0.50752239 0.69208596 0.81668734 0.89144817 0.93668867
[9] 0.96158646 0.97303786 0.98587162 0.98753797 0.98962847 0.99621439 1.00000000 1.00000000
[17] 1.00000000 1.00000000 1.00000000 1.00000000 1.00000000 1.00000000
We can see that in this case the \(100^{th}\) percentile occurs first at value \(15\), which is far from the actual value of the maximum. This result was expected because we have truncated the obfuscated data at \(22\); that is why the values close to the end are given less priority. So, estimating the maximum value from the truncated obfuscated data is not as easy as in the previous case.

## Conclusion

**1.** The method that we have used to obfuscate our data, i.e. the additive white noise model, is a good way of masking a discrete data set, i.e. the probability of estimating the actual values of the data (e.g. household size) from the obfuscated data is very low.
**2.** Given the real-life data set related to household size, we have plotted the actual probability along with the estimated probability for different household sizes. We can notice that the actual probabilities and the estimated values of the probabilities are very close to each other (the plots almost coincide with each other).
**3.** To estimate the probabilities from the obfuscated data we tried least squares, MLE and constrained MLE methods. But there were some problems (e.g. estimates outside the parameter space), which were solved in the fourth approach (numerical method). Also, we have seen that the result obtained using iteration in this approach was very good.
**4.** While estimating the range of the raw data set, we first tried the method using the LLN. But in that case the estimated values were very far from the actual value. Then we tried another approach to estimate the maximum value of the raw data set. In this method, we see that quantiles up to the \(99^{th}\) percentile can be estimated quite accurately; however, the estimate for the \(100^{th}\) percentile varies quite a lot from its real value. So there is further scope of study in this regard.

## Appendix:

R codes for First Approach :
P=matrix(0,23,13)
o=matrix(1,13,1)
for(i in 1:13) for(j in i:(i+10)) P[j,i]=1
Q=t(P)%*%P/121
C=solve(Q)
R=t(P)%*%r/11      # r: observed relative frequencies of the obfuscated data
n=t(o)%*%C%*%R-1
d=t(o)%*%C%*%o
Beta=C%*%(R-(n/d)[1]*o)

R codes for the second approach :
p1=matrix(0,13,1)
p2=matrix(0,13,1)
p1[1]=r[1]*11
for(i in 2:12) p1[i]=(r[i]-r[i-1])*11
p1[13]=1-sum(p1[1:12])
p2[13]=r[23]*11
for(i in 2:12) p2[i]=(r[i+10]-r[i+11])*11
p2[1]=1-sum(p2[2:13])
p3=matrix(0,13,1)
p3[1:6]=p1[1:6]
p3[8:13]=p2[8:13]
p3[7]=1-sum(p3)

R codes for Fourth approach :
Code for the generated data :
# Log-likelihood for the generated data (13 classes, noise uniform on 0:10), written
# compactly; it is equivalent to the term-by-term sum displayed in the text.
# x: class probabilities (length 13), n: observed counts of the 23 obfuscated values 0..22.
L=function(x,n){
  ll=0
  for(k in 1:23) ll=ll+n[k]*log(sum(x[max(1,k-10):min(13,k)]))
  ll
}
# Iterative search of Steps 1-3; Z denotes the vector of observed counts of the obfuscated values.
p=rep(1,13)/13
l=L(p,Z)
num=1000
for(k in 1:50){
  for(j in 1:12){
    p1=p
    s=sum(p1[j],p1[j+1])
    for(i in 1:num){
      p1[j+1]=s*(1-i/num)
      p1[j]=s*(i/num)
      if(L(p1,Z)>l){
        p=p1
        l=L(p1,Z)
      }
    }
  }
}
Code for the non-truncated household size data :
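A compact sketch for this case, consistent with the likelihood displayed in Implementation 3 (22 household classes, noise uniform on 1:10, so n[k] is the count of obfuscated value k+1); the function and variable names here are illustrative rather than the original listing:
# Log-likelihood for the non-truncated household data
L2=function(x,n){
  ll=0
  for(k in 1:length(n)) ll=ll+n[k]*log(sum(x[max(1,k-9):min(22,k)]))
  ll
}
# The same coordinate search as above is then run with p=rep(1,22)/22 and L2 in place of L.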
Code for the truncated household size data :
# Log-likelihood for the household data truncated at 22: counts n[1..21] correspond to the
# obfuscated values 2..22, and n[22] collects all obfuscated values beyond the truncation point;
# equivalent to the term-by-term expression given in the text.
L=function(x,n){
  ll=0
  s=0
  for(k in 1:21){
    w=sum(x[max(1,k-9):k])
    s=s+w
    ll=ll+n[k]*log(w)
  }
  ll+n[22]*log(10-s)
}
# Z1: vector of the 22 observed counts of the truncated obfuscated data.
p=rep(1,22)/22
l=L(p,Z1)
num=1000
for(k in 1:50){
  for(j in 1:21){
    p1=p
    s=sum(p1[j],p1[j+1])
    for(i in 1:num){
      p1[j+1]=s*(1-i/num)
      p1[j]=s*(i/num)
      if(L(p1,Z1)>l){
        p=p1
        l=L(p1,Z1)
      }
    }
  }
}
Code for the non-truncated household size data with wider range given:
```
L3=function(x,n) {
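# L3: log-likelihood when a deliberately wider support (household classes 1-25) is declared,
# so x has length 25 and n[k] is the count of obfuscated value k+1; the probabilities of
# classes 23-25, which are absent from the true data, are expected to be estimated as ~0.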
n[1]*log(x[1])+
n[2]*log(x[1]+x[2])+
n[3]*log(x[3]+x[1]+x[2])+
n[4]*log(x[1]+x[2]+x[3]+x[4]+x[5])+
n[5]*log(x[1]+x[2]+x[3]+x[4]+x[5]+x[6])+
n[7]*log(x[1]+x[2]+x[3]+x[4]+x[5]+x[6]+x[7])+
n[8]*log(x[1]+x[2]+x[3]+x[4]+x[5]+x[6]+x[7]+x[8])+
n[9]*log(x[9]+x[1]+x[2]+x[3]+x[4]+x[5]+x[6]+x[7]+x[8])+
n[10]*log(x[10]+x[9]+x[1]+x[2]+x[3]+x[4]+x[5]+x[6]+x[7]+x[8])+
n[11]*log(x[11]+x[10]+x[9]+x[2]+x[3]+x[4]+x[5]+x[6]+x[7]+x[8])+
n[12]*log(x[12]+x[11]+x[10]+x[9]+x[3]+x[4]+x[5]+x[6]+x[7]+x[8])+
n[13]*log(x[13]+x[12]+x[11]+x[10]+x[9]+x[4]+x[5]+x[6]+x[7]+x[8])+
n[14]*log(x[14]+x[13]+x[12]+x[11]+x[10]+x[9]+x[8]+x[7]+x[6])+
n[15]*log(x[15]+x[14]+x[13]+x[12]+x[11]+x[10]+x[9]+x[8]+x[7]+x[6])+
n[16]*log(x[16]+x[15]+x[14]+x[13]+x[12]+x[11]+x[10]+x[9]+x[8]+x[7])+
n[17]*log(x[17]+x[16]+x[15]+x[14]+x[13]+x[12]+x[11]+x[10]+x[9]+x[8]+x[7]+x[6])+
n[18]*log(x[18]+x[17]+x[16]+x[15]+x[14]+x[13]+x[12]+x[11]+x[10]+x[9]+x[8]+x[7])+
n[19]*log(x[19]+x[18]+x[17]+x[16]+x[15]+x[14]+x[13]+x[12]+x[11]+x[10]+x[9]+x[8]+x[7])+
n[20]*log(x[20]+x[19]+x[18]+x[17]+x[16]+x[15]+x[14]+x[13]+x[12]+x[11])+
n[21]*log(x[21]+x[20]+x[19]+x[18]+x[17]+x[16]+x[15]+x[14]+x[13]+x[12]+x[11])+
n[22]*log(x[22]+x[21]+x[20]+x[19]+x[18]+x[17]+x[16]+x[15]+x[14]+x[13]+x[12]+x[11])+
n[23]*log(x[23]+x[22]+x[21]+x[20]+x[19]+x[18]+x[17]+x[16]+x[15]+x[14]+x[13])+
n[24]*log(x[24]+x[23]+x[22]+x[21]+x[20]+x[19]+x[18]+x[17]+x[16]+x[15]+x[14]+x[13])+
n[25]*log(x[25]+x[24]+x[23]+x[22]+x[20]+x[19]+x[18]+x[17]+x[16]+x[15]+x[14])+
n[26]*log(x[25]+x[24]+x[23]+x[22]+x[21]+x[20]+x[19]+x[18]+x[17]+x[16])+
n[27]*log(x[25]+x[24]+x[23]+x[22]+x[21]+x[20]+x[19]+x[18]+x[17]+x[16])+
n[28]*log(x[25]+x[24]+x[23]+x[22]+x[21]+x[20]+x[19]+x[18])+
n[29]*log(x[25]+x[24]+x[23]+x[22]+x[21]+x[20])+
n[30]*log(x[25]+x[24]+x[23]+x[22]+x[21])
}
```

## Acknowledgements We thank Prof. Bimal K. Roy, Prof. Nachiketa Chottopadhyay and Prof. Prabal Choudhury for their kind advice and support in this project. This project was also inspired by previous work by Prof. Bimal K. Roy et al.
2305.08088
Make Prompt-based Black-Box Tuning Colorful: Boosting Model Generalization from Three Orthogonal Perspectives
Large language models (LLMs) have shown increasing power on various natural language processing (NLP) tasks. However, tuning these models for downstream tasks usually incurs exorbitant costs or is unavailable due to commercial considerations. Recently, black-box tuning has been proposed to address this problem by optimizing task-specific prompts without accessing the gradients and hidden representations. However, most existing works have yet to fully exploit the potential of gradient-free optimization under the scenario of few-shot learning. In this paper, we describe BBT-RGB, a suite of straightforward and complementary techniques for enhancing the efficiency and performance of black-box optimization. Specifically, our method includes three plug-and-play components: (1) a two-stage derivative-free optimization strategy that facilitates fast convergence and mitigates overfitting; (2) automatic verbalizer construction with its novel usage under few-shot settings; (3) a better prompt initialization policy based on instruction search and auto-selected demonstration. Extensive experiments across various tasks on natural language understanding and inference demonstrate the effectiveness of our method. Our codes are publicly available at https://github.com/QiushiSun/BBT-RGB.
Qiushi Sun, Chengcheng Han, Nuo Chen, Renyu Zhu, Jingyang Gong, Xiang Li, Ming Gao
2023-05-14T07:33:59Z
http://arxiv.org/abs/2305.08088v2
Make Prompt-based Black-Box Tuning Colorful: Boosting Model Generalization from Three Orthogonal Perspectives ###### Abstract Large language models (LLMs) have shown increasing power on various natural language processing (NLP) tasks. However, tuning these models for downstream tasks usually needs exorbitant costs or is unavailable due to commercial considerations. Recently, black-box tuning has been proposed to address this problem by optimizing task-specific prompts without accessing the gradients and hidden representations. However, most existing works have yet fully exploited the potential of gradient-free optimization under the scenario of few-shot learning. In this paper, we describe BBT-RGB, a suite of straightforward and complementary techniques for enhancing the efficiency and performance of black-box optimization. Specifically, our method includes three plug-and-play components: (1) Two-stage derivative-free optimization strategy that facilitates fast convergence and mitigates overfitting; (2) Automatic verbalizer construction with its novel usage under few-shot settings; (3) Better prompt initialization policy based on instruction search and auto-selected demonstration. Extensive experiments across various tasks on natural language understanding and inference demonstrate the effectiveness of our method. Our codes are publicly available at [https://github.com/QiushiSun/BBT-RGB](https://github.com/QiushiSun/BBT-RGB). ## 1 Introduction Transformer-based Language models (Vaswani et al., 2017) have achieved remarkable improvements among various NLP tasks (Qiu et al., 2020; Lin et al., 2022) in recent years. These models are mainly first pre-trained on a large-scale unsupervised corpus and then fine-tuned on a specific downstream task. However, this paradigm of pre-train and fine-tune face challenges in the era of Large Language Models (LLMs) (Brown et al., 2020; Ouyang et al., 2022; Chowdhery et al., 2022; Zhang et al., 2022; Scao et al., 2022; Bai et al., 2022; Touvron et al., 2023; OpenAI, 2023). The ever-growing model size leads to a non-stop increase in the cost of tuning, and deploying separate copies of LLMs in real applications becomes exorbitantly expensive. Though recent research on Parameter-Efficient Tuning (Li and Liang, 2021; Lester et al., 2021, _inter alia_) alleviates the problem by tuning a small percentage of parameters while keeping the backbone frozen, the second problem arises: _most LLMs are released as a service, and users can only access them through Black-Box APIs_. This implies that the aforementioned tuning strategies become less viable owing to the inaccessibility of parameters and gradients, thereby causing a dilemma for downstream applications. Sun et al. (2022) describe this scenario as Language Model-as-a-Service (LMaaS): Users are unable to tune the model parameters but can accomplish the tasks of interest by finding appropriate prompts with limited examples. Then, Black-Box Tuning (BBT) is proposed as a framework for derivative-free optimization under few-shot settings. Recently, BBTv2 (Sun et al., 2022) has been presented as an improved version that prepends prompts to hidden states of models instead of only injecting prompt tokens in the input layer. However, the potential of black-box optimization is still not fully exploited. Previous tuning methods are prone to overfit / fall into local optimum under the scenario of few-shot learning. 
This phenomenon is triggered by both the characteristics of the Derivative-free optimization (DFO) algorithm and the unavailability of pre-trained prompts under few-shot settings. In this paper, we present BBT-RGB, a suite of straightforward, complementary and pluggable techniques that further explore the possibility of black-box tuning. We take one step forward in black-box tuning from the following three aspects 1) Employing a two-stage DFO strategy for the attenuation of overfitting. 2) Utilizing multiple auto-selected verbalizers to exploit the context further. 3) Combining manual prompt with new search approach for task instructions improvement. Extensive experiments across various NLP downstream tasks demonstrate the superiority of our method. Besides, BBT-RGB can significantly outperform current gradient-based Parameter-Efficient tuning methods (Houlsby et al., 2019; Ben Zaken et al., 2022; Hu et al., 2022; Liu et al., 2022) under the scenario of few-shot learning. Moreover, by employing our method, optimization1 under the derivative-free framework can reach comparative performance to full fine-tuning while preserving much fewer tunable parameters. Footnote 1: We follow btv2 (Sun et al., 2022) to use random projection matrices to transform prompt parameters into low-dimensional subspaces. Our main contributions can be summarized as follows: * We propose a two-stage derivative-free optimization strategy that enables stable convergence of training tunable prompts while effectively mitigating the issue of overfitting. * To further exploit the LLM's output, we propose a verbalizer selection process to derive multiple appropriate candidates. Moreover, instruction with demonstration is adopted for robust prompt initialization. * As is shown in Figure 1, a wide range of NLP tasks are covered to verify the effectiveness of our approach. Additionally, we further discuss the benefits BBT-RGB brings to black-box optimization. ## 2 Preliminaries ### Large Language Models and APIs Large language models (LLMs) (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020) have revolutionized the NLP landscape in the past few years. Given some examples of tasks as input, LLMs can be "prompted" to conduct a wide range of NLP tasks. These huge models are usually released as a service (Brown et al., 2020; Chen et al., 2021; Ouyang et al., 2022), which allows users to interact with the models deployed on the cloud servers through APIs. Unlike some popular open-source LMs (Devlin et al., 2019; Liu et al., 2019) that can be directly utilized by researchers, access to the parameters and gradients of LLMs is restricted due to commercial, ethical, and security concerns. ### Prompt-based Learning Prompt-based learning (Liu et al., 2023) transforms an NLP downstream task into a masked language modeling (MLM) task and narrows the discrepancy between pre-training and fine-tuning. Based on the prompt format, prompt-based learning can be categorized into discrete prompts and continuous prompts. Discrete prompts can be designed manually (Brown et al., 2020; Schick et al., 2020) or generated automatically (Gao et al., 2021). Continuous prompts are designed as a sequence of vectors (Qin and Eisner, 2021; Lester et al., 2021) that are usually prepended to the input and optimized by gradients. Recently, Sun et al. (2022) propose BBT for optimizing prompts under gradient-free settings, as is shown in section 3. We mainly focus on the optimization of continuous prompts under the black-box settings in this paper. 
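To make this cloze-style reformulation concrete, the short sketch below shows how a single classification example can be wrapped into an MLM prompt and scored through a verbalizer with the Hugging Face `transformers` API. It is an illustrative sketch rather than part of BBT or BBT-RGB: the template and label words are arbitrary choices, and averaging the mask-position scores over several label words per class is shown only to foreshadow the M\({}^{2}\) verbalizers of Section 4.2.

```
# Minimal prompt-based MLM classification sketch (illustrative; the template and
# label words are assumptions, not the paper's settings).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()

# Verbalizer: each class maps to one or more label words (several words per class
# mirrors the spirit of the M2 verbalizers described in Section 4.2).
verbalizer = {"negative": [" bad", " terrible"], "positive": [" great", " good"]}

def classify(sentence: str) -> str:
    # Cloze-style template: the model fills the <mask> slot with a label word.
    text = f"{sentence} It was {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    scores = {}
    for label, words in verbalizer.items():
        ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(w)[0]) for w in words]
        scores[label] = logits[0, mask_pos, ids].mean().item()  # average over label words
    return max(scores, key=scores.get)

print(classify("A thoroughly enjoyable film."))
```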
### Derivative-free Optimization Derivative-free optimization (DFO) algorithms are capable of solving complex problems without the back-propagation process. DFO generally employs a sampling-and-updating framework (Rios and Sahinidis, 2013; Wierstra et al., 2014; Qian et al., 2016) to improve the solution iteratively. For instance, Covariance Matrix Adaptation Evolution Strategy (Hansen and Ostermeier, 2001; Hansen et al., 2003), namely CMA-ES, is a widely adopted evolutionary algorithm for non-linear non-convex continuous optimization. At each iteration, the Figure 1: Comparing BBT-RGB with other tuning methods on average performance over seven tasks described in section 5.1. The size of the circle is proportional to the standard deviation. algorithm samples new potential solutions from a parameterized distribution model (e.g., multivariate normal distribution). Besides, we have COBYLA algorithm (Constrained Optimization BY Linear Approximation) (Powell, 1994, 1998) that builds a linear approximation model of the objective function and constraints within a trust region, iteratively updating the model and trust region based on the progress made in minimizing the objective function. ## 3 Derivative-Free Prompt Tuning Given a batch of samples \((X,Y)\) converted with prompt templates and label words, the original derivative-free prompt learning, as introduced by Sun et al. (2022) first use a set of prompt embeddings \(p\) to concatenate the input tokens, creating the prompted input for LLMs with frozen backbones. The prompt \(p=p_{0}+p_{\theta}\) consists of the initial prompt \(p_{0}\in\mathbb{R}^{D}\), which is manually/randomly selected and a tunable prompt \(p_{\theta}\in\mathbb{R}^{D}\) that is progressively optimized through a DFO algorithm like CMA-ES (Hansen et al., 2003). DFOs suffer slow convergence on high-dimensional problems, but fortunately, Aghajanyan et al. (2021) discover that PLMs exhibit low-dimensional reparameterization that is as effective for fine-tuning as the full parameter space. This finding indicates that the search space of \(p_{\theta}\) can be condensed into an intrinsic dimensionality \(z\in\mathbb{R}^{d}\left(d\ll D\right)\) by using a (frozen) random projection matrix \(\Pi\in\mathbb{R}^{D\times d}\), such that \(p_{\theta}=\Pi\cdot z\) will significantly decrease the cost of optimization. Subsequently, the task-specific inference of model \(f\) through API Call is performed to determine the fitness of candidate prompts using an objective function \(\mathcal{L}(f([P;X]),Y)\), where \(\mathcal{L}\) is a loss function such as cross-entropy. Finally, the DFO algorithm iteratively refines the prompt for seeking \(p^{*}=\operatorname*{arg\,min}_{p}\mathcal{L}(f([P;X]),Y)\). In the era of large language models, black-box optimization is a promising research target that can drive models for few-shot learning without access to gradients. Sun et al. (2022) first propose black-box tuning (BBT) that focuses on optimizing continuous prompt by only accessing inference APIs and then present BBTv2 (Sun et al., 2022) as an improved version. While some recent works focus on optimizing discrete prompts concurrent with our work. Diao et al. (2023) present black-box discrete prompt learning with gradient estimation as their key feature. Hou et al. (2022) first use gradient-free methods to sample sub-optimal discrete prompts and then ensemble them by boosting algorithm. And Chai et al. 
(2022) acquire informative feedback to enhance derivative-free optimization through using frozen subnetworks as critics. ## 4 Bbt-Rgb We formally introduce our method: BBT-RGB, which contains three orthogonal optimization per Figure 2: An illustration of BBT-RGB. We use Red, Green and Blue to indicate three distinct aspects of our strategy, which inspired the naming of our method. M\({}^{2}\) Verbalizers (Multi-Mixed Verbalizers) further utilize the information provided by the LLMs. In\({}^{2}\) Initialization (Instruction learning + In-context learning) improves prompt-based tuning by integrating both instruction and demonstration. And Two-Stage DFOs exploit the advantages of different optimization methods. \(\lx@sectionsign\)5 represents the combination of derivative-free optimizers. (Best viewed in color.) spectives, as is shown in Figure 2. ### Two-Stage DFOs Previous works of black-box tuning mainly use CMA-ES to optimize the intrinsic dimensionality Aghajanyan et al. (2021) of LLMs. Nonetheless, in the early training stage, the evolutionary algorithm (EA) exhibits a considerably faster convergence rate compared to the search-based algorithm (SA), which potentially causes fast overfitting. Then, the following steps would be futile. Thus, we design a novel two-stage DFO algorithm2 for black-box tuning, as is shown in algorithm 1. Footnote 2: Due to space limitations, we put the detailed algorithm in Appendix B. Alg 1 stands for a simplified version. We leverage the advantages of two different kinds of DFOs respectively. In stage I, we use EA to perform coarse-grained population-level optimization, which has a specific budget (Number of API Calls) to move toward the target swiftly. And the SA will use the remaining budgets in stage II for approximating the solution by dimension-level fine-grained search. ### M2 Verbalizers Most prior works employ a single verbalizer for gradient-free optimization, which cannot make full use of the information, _i.e._, logits returned by the black box model. To address this problem, we propose **M**ulti-**M**ixed verbalizers, which are constructed through the following methods: 1) manual verbalizer selection3. 2) search-based verbalizer construction based on word importance estimation by TF-IDF. 3) auto verbalizer generation based on neural nets Gao et al. (2021). After verbalizers are selected by the aforementioned approaches, the confidence of each category is represented by the average prediction probability of multiple verbalizers. Compared with the previous approach, M2 verbalizers make one step forward to exploit the information provided by the black-box model. Additionally, this approach can prevent the negative impact on model performance caused by a single unsuitable label word. Footnote 3: Specifically, we use synonyms in practice. ### In2 Initialization An appropriate initialization has proven to play an essential role in effective prompt-based tuning An et al. (2022); Prasad et al. (2022). Inspired by previous efforts, we propose a model-agnostic strategy named as In2 initialization. The first component of our approach is a task-specific manual **I**nstruction. For the second part, we iterate through the training set and take each sample as a demonstration Min et al. (2022), which is assessed on the validation set together with the pre-selected instruction. After that, the sample with the best performance is selected for **I**n-context learning. ## 5 Experiments ### Experimental Settings BackboneWe use RoBERTa-Large4Liu et al. 
(2019) as backbone throughout the experiments. Footnote 4: [https://huggingface.co/roberta-large](https://huggingface.co/roberta-large) DatasetsTo evaluate our proposed methods, we choose a series of tasks from the GLUE benchmark Wang et al. (2018). Specifically, we employ SST-2 Socher et al. (2013) and Yelp Zhang et al. (2015) for sentiment analysis, AGNews and DBPedia Zhang et al. (2015) for topic classification, SNLI Bowman et al. (2015) and RTE Dagan et al. (2005) for natural language inference, and MRPC Dolan and Brockett (2005) for semantic paraphrasing. Methods and HyperparametersFor all the experiments we covered, the methods of BBT-RGB we employed and hyperparameters are showcased in Table 2 and Table 3. ### Main Results As is demonstrated in table 1, we compare5 BBT-RGB with both gradient-based and gradient-free tuning methods. We observed different levels of improvement on various NLP tasks. Sentiment AnalysisOn both the SST-2 and Yelp datasets, our method surpasses all prior white-box methods, consistently demonstrating superior performance compared to the established baselines. Topic ClassificationCompared with the previous gradient-free method, BBT-RGB has a significant advancement in the evaluation based on DBPedia and AGNews but still needs to catch up to full model tuning. We hold the view that this is caused by a relatively large number of classes (categories), and it is difficult for the model to learn enough knowledge under few-shot settings. Entailment and InferenceBBT-RGB benefits entailment and natural language inference tasks significantly; both experiments on SNLI and MRPC indicate surpassing full fine-tuning performance. In addition, we can observe a leap in the accuracy of RTE compared with previous baselines. ### Analysis We select two cases6 to analyze the effectiveness of two-stage DFOs on Yelp dataset. In Figure 3, the training loss (orange curve) converges to zero for both methods. While the oscillation of validation loss observed in pure CMA-ES case is mainly attributed to the nature of the adaptive algorithm. Footnote 6: We choose CMA-ES (8,000 budgets) and COBYLA (12,000 budgets) for the two-stage DFOs in this illustration. In stage II of our proposed two-stage DFOs method, a relatively gentle decrease in validation loss can be observed, demonstrating that dimension-level updates by COBYLA make the overall learning process smoother, which helps us curb the problem of fast overfitting. Another case can be found in section A. ## 6 Conclusion This paper proposes BBT-RGB, a set of simple but effective techniques to enhance black-box tuning. Our method drives more powerful prompt-based learning when the parameters and gradients of the model are invisible. We make improvements from three independent perspectives: (1) Two-stage derivative-free optimization algorithms for attenuating overfitting; (2) Versatile verbalizer construction with a robust selection process; (3) Using Instruction learning and demonstrations to exploit in-context information. All these three modules are "plug-and-play", and empirical studies across a series of NLP tasks verify the effectiveness of our method. vergence, some of the tasks require more API calls, which may lead to extra costs when running on commercial models. However, the essence of our contributions could be extended to broader scenarios under gradient-free settings, and we leave them as future research. 
## Ethical Considerations Our method, BBT-RGB, aims to further exploit the potential of black-box tuning, and the contribution of this paper is fully methodological. Therefore, this contribution has no direct negative social or ethical impacts.
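## Illustrative Sketch: A Two-Stage DFO Loop The sketch below is an illustrative, self-contained rendering of the optimization loop described in Sections 3 and 4.1, not the implementation referred to in Appendix B or the released code: a low-dimensional prompt vector \(z\) is mapped into the prompt-embedding space through a frozen random projection, optimized first at the population level with CMA-ES and then refined dimension-wise with COBYLA. `query_llm_loss` stands in for the black-box inference API, and the dimensions, budgets and hyperparameters are placeholder values rather than the settings used in the experiments.

```
# Two-stage derivative-free optimization sketch using the public `cma` and SciPy APIs.
# `query_llm_loss` is a placeholder for "prepend prompt, call the LLM API, return loss".
import numpy as np
import cma
from scipy.optimize import minimize

D, d = 1024, 500                         # prompt-embedding dim and intrinsic dim (illustrative)
rng = np.random.default_rng(0)
A = rng.uniform(-1.0, 1.0, size=(D, d))  # frozen random projection: p_theta = A @ z

def query_llm_loss(p_theta: np.ndarray) -> float:
    # Placeholder objective; in practice this is the task loss returned by the API.
    return float(np.sum((p_theta - 0.5) ** 2) / D)

def objective(z: np.ndarray) -> float:
    return query_llm_loss(A @ z)

# Stage I: coarse, population-level search with CMA-ES under a fixed API-call budget.
es = cma.CMAEvolutionStrategy(d * [0.0], 1.0,
                              {"popsize": 20, "maxfevals": 8000, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [objective(z) for z in candidates])
z_coarse = es.result.xbest

# Stage II: dimension-level refinement with COBYLA using the remaining budget.
res = minimize(objective, z_coarse, method="COBYLA",
               options={"maxiter": 4000, "rhobeg": 0.5})
print("final loss:", res.fun)
```

Splitting the budget this way mirrors the motivation in Section 4.1: the evolutionary stage moves quickly toward a promising region, while the search-based stage makes the final approach smoother and less prone to the fast overfitting discussed in Section 5.3.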
2306.11089
ALPACA: A New Semi-Analytic Model for Metal Absorption Lines Emerging from Clumpy Galactic Environments
We present a new semi-analytic formalism for modeling metal absorption lines that emerge from a clumpy galactic environment, ALPACA. We predict the ''down-the-barrel'' (DTB) metal absorption line profiles and the EW of absorption at different impact parameters as a function of the properties of the clumps, including the clump kinematics, the clump volume filling factor, the clump number density profile and the clump ion column densities. With ALPACA, we jointly model the stacked DTB CII$\lambda$1334 spectrum of a sample of $z \sim$ 3 Lyman break galaxies and the EW v.s. $b$ profile of a sample of $z \sim$ 2 star-forming galaxy-galaxy pairs. ALPACA successfully reproduced two datasets simultaneously, and the best-fit prefers a low clump volume filling factor ($\sim 3 \times 10^{-3}$). The radial velocities of the clumps are a superposition of a rapidly accelerated outflow with a maximum velocity of $\sim 400\,\rm km\,s^{-1}$ and a velocity dispersion of $\sigma_{\rm cl} \sim\,120 \rm km\,s^{-1}$. The joint modeling reveals a physical scenario where the absorption observed at a particular velocity is contributed by the clumps distributed over a fairly broad range of radii. We also find that the commonly adopted Sobolev approximation is at best only applicable within a narrow range of radii where the clumps are undergoing rapid acceleration in a non-volume-filling clumpy medium. Lastly, we find that the clump radial velocity profile may not be fully constrained by the joint modeling and spatially-resolved Ly$\alpha$ emission modeling may help break the degeneracy.
Zhihui Li, Max Gronke, Charles Steidel
2023-06-19T18:00:00Z
http://arxiv.org/abs/2306.11089v1
ALPACA: A New Semi-Analytic Model for Metal Absorption Lines Emerging from Clumpy Galactic Environments ###### Abstract We present a new semi-analytic formalism for modeling metal absorption lines that emerge from a clumpy galactic environment, ALPACA. We predict the "down-the-barrel" (DTB) metal absorption line profiles and the EW of absorption at different impact parameters as a function of the properties of the clumps, including the clump kinematics, the clump volume filling factor, the clump number density profile and the clump ion column densities. With ALPACA, we jointly model the stacked DTB C i\(\lambda\)1334 spectrum of a sample of \(z\sim 3\) Lyman break galaxies and the EW v.s. \(b\) profile of a sample of \(z\sim 2\) star-forming galaxy-galaxy pairs. ALPACA successfully reproduced two datasets simultaneously, and the best-fit prefers a low clump volume filling factor (\(\sim 3\times 10^{-3}\)). The radial velocities of the clumps are a superposition of a rapidly accelerated outflow with a maximum velocity of \(\sim 400\) km s\({}^{-1}\) and a velocity dispersion of \(\sigma_{\rm d}\sim 120\) km s\({}^{-1}\). The joint modeling reveals a physical scenario where the absorption observed at a particular velocity is contributed by the clumps distributed over a fairly broad range of radii. We also find that the commonly adopted Sobolev approximation is at best only applicable within a narrow range of radii where the clumps are undergoing rapid acceleration in a non-volume-filling clumpy medium. Lastly, we find that the clump radial velocity profile may not be fully constrained by the joint modeling and spatially-resolved Ly\(\alpha\) emission modeling may help break the degeneracy. keywords: galaxies: high-redshift -- galaxies: ISM -- line: formation -- radiative transfer -- scattering ## 1 Introduction Metal absorption lines observed in the rest-frame ultraviolet (UV) encode abundant information about the physical properties of the gaseous matter in a galactic environment - from the interstellar medium (ISM; Tacconi et al., 2020) to the circumgalactic medium (CGM; Tumlinson et al., 2017; Faucher-Giguere and Oh, 2023) to the intergalactic medium (IGM; McQuinn, 2016). Such absorption lines are typically produced via the transition of an atom or ion from the ground state to an excited state by absorbing the energetic UV continuum photons produced in star-forming regions. Depending on whether the ground state is further split into fine-structure levels, such transitions can be either resonant (e.g. Ly\(\alpha\) and Mg i\(\lambda\)\(\lambda\)2796, 2803) or non-resonant (e.g. Si ii\(\lambda\)1260 and C ii\(\lambda\)1334), the latter of which is considered to have "fluorescent" channels through which the photons at the resonant wavelength can be emitted at a slightly lower energy. A typical metal absorption line observed against a galaxy's own starlight (namely "down-the-barrel"; DTB) is "sawtooth" shaped (e.g. Weiner et al., 2009; Rubin et al., 2010; Martin et al., 2012), although in reality it exhibits a wide variety of spectral morphologies. Specifically, the minimum flux density (the "trough") is often located at a few hundred km s\({}^{-1}\) blueward (or even redward in rare cases; see e.g. Rubin et al., 2012; Martin et al., 2012; Bouche et al., 2013; Ford et al., 2014; Ho et al., 2017; Zabl et al., 2019; Afruni et al., 2022; Weldon et al., 2023) of the systemic velocity. 
On both sides of the trough, the flux density gradually rises to meet the continuum, yet in general, it rises significantly more steeply on the red side than the blue side. The spectral features of the metal absorption lines can then be used to infer the physical properties of the absorbing gas. For example, the velocity range of the absorption line profile traces the gas outflow velocities, and the depth of the absorption probes the gas column density or covering fraction. In particular, the absorption lines from low-ionization states (LIS), such as Si ii, C ii and O i, closely trace neutral hydrogen due to their similar ionization potential. The derived gas properties from the LIS lines can therefore be utilized to constrain several important galactic properties, such as the mass outflow rates, the escape fraction of ionizing photons, etc. (e.g., Rupke et al., 2005; Martin, 2005; Weiner et al., 2009; Martin and Bouche, 2009; Rubin et al., 2014; Erb, 2015; Chisholm et al., 2016, 2018; Steidel et al., 2018; Gazagnes et al., 2018, 2020; Mauerhofer et al., 2021; Xu et al., 2022). Thus far, a number of attempts have been made to model the metal absorption lines in DTB galaxy spectra. Most have adopted a "picket-fence" model (e.g. Steidel et al., 2010; Heckman et al., 2011; Zackrisson et al., 2013; Jones et al., 2013; Bothakur et al., 2014; Rivera-Thorsen et al., 2015; Reddy et al., 2016; Rivera-Thorsen et al., 2017; Steidel et al., 2018; Gazagnes et al., 2018, 2020; Xu et al., 2022), which assumes that the stellar continuum is partially covered by optically thick absorbing gaseous material. Some have further accounted for radial variation of the gas outflow velocity to reproduce the line profiles of particular transitions (e.g. Steidel et al., 2010; Chisholm et al., 2016). Other work has explored, using semi-analytic models or Monte Carlo simulations, the absorption line profile resulting from transmission through a homogeneous, expanding wind (e.g. Prochaska et al., 2011; Scarlata and Panagia, 2015; Carr et al., 2021, while others have used cosmological simulations to predict the absorption line profiles emerging from realistic galactic environments (e.g. Kimm et al., 2011; Mauerhofer et al., 2021; Gazagnes et al., 2023). Many of the models have successfully produced absorption line profiles that closely resemble observations. Nevertheless, the majority of the models proposed in previous works rely on simplifying assumptions, e.g. that the gas column density is always high enough to result in saturated absorption so that the depth of absorption relative to the continuum directly traces the gas covering fraction; that continuum photons will be absorbed by the outflowing gas with a large velocity gradient only if they appear resonant in the reference frame of the gas (namely the Sobolev approximation), or that the absorbing gaseous medium is homogeneous without any holes or clumps. These assumptions may be (at least in part) unphysical or in tension with the most recent observations. For example, theoretical models, simulations and observations have revealed that galactic winds may reach a "plateau" phase at large radii where the wind velocity remains approximately constant (e.g. Chevalier and Clegg, 1985; Veilleux et al., 2005; Dorfi and Breitschwerdt, 2012; Zhang, 2018). 
Recent work also highlighted the importance of accounting for the multiphase, turbulent and kinematically complex structure of galactic winds (Schneider et al., 2020; Kim et al., 2020; Fielding & Bryan, 2022; Steinwandel et al., 2022; Rathjen et al., 2023). As these recent findings have posed significant challenges to the aforementioned simplifying assumptions, the models that depend on them should benefit from re-examination. In this work, we build on previous models and present a new semi-analytic model for the UV metal absorption lines. Thus far, the clumpy nature of the "cool" (\(T\sim 10^{4}\,\mathrm{K}\)) gas in the ISM / CGM has been supported by abundant observational evidence (e.g. Rauch et al., 1999, 2001a, 2002; Ellison et al., 2004; Schaye et al., 2007; Rogerson & Hall, 2012; Crighton et al., 2015; Arrigoni Battaia et al., 2015; Rubin et al., 2018; Kulkarni et al., 2019; Zahedy et al., 2019, 2021). More specifically, the cool gas (which is responsible for producing the LIS lines) is likely to exist in the form of a clumpy mist or fog of cloudlets with a large area covering fraction but a small volume filling factor (McCourt et al., 2018; Fielding et al., 2020; Gronke & Oh, 2020; Nelson et al., 2020). In light of this physical picture, we explore the formation of metal absorption lines from a clumpy galactic outflow. Not only do we model the DTB absorption line profiles, but we also predict the strength of absorption as a function of impact parameter (Steidel et al., 2010). The ultimate goal of this work is to develop a simple, usable model for the community to fit and interpret the observed metal absorption lines fast and robustly. The structure of this paper is as follows. In SS2, we describe the general formalism and a practical implementation of the analytic model. In SS3, we validate the analytic model by comparing it to Monte-Carlo numerical simulations. In SS4, we discuss the effect of each individual parameter of the analytic model. In SS5, we show an example of applying the analytic model to the composite C ii \(\lambda 1334\) spectrum of a sample of \(z\sim 3\) Lyman break galaxies (LBG) observed for the Keck Lyman Continuum Spectroscopic Survey (KLCS; Steidel et al., 2018) and the EW v.s. \(b\) profile observed for a sample of \(z\sim 2\) star-forming galaxy-galaxy pairs. In SS6, we discuss the definition and relationship between the gas covering and volume filling parameters. In SS7, we compare the models that use or not use the Sobolev approximation. In SS8, we discuss previous work modeling the UV absorption lines in comparison with our model. In SS9, we discuss the limitations of our model and possible developments in the future. In SS10, we summarize and conclude. ## 2 Alpaca: A non-Sobolev clumpy model for metal absorption lines We introduce the semi-analytic model that we use in this work, ALPACA (Absorption Line Profiles Arising from Clumpy Absorbers)1. Footnote 1: The code for ALPACA is publicly available at: [https://github.com/astro-zhihuili/ALPACA](https://github.com/astro-zhihuili/ALPACA). ### General Formalism #### 2.1.1 Down the Barrel Absorption As illustrated in Figure 1, we consider the escape of photons from an idealized, spherical halo filled by an ensemble of spherical clumps that contain the corresponding metal ions (e.g. Si\({}^{+}\) or C\({}^{+}\)) that produce the absorption. 
For the sake of computational convenience, we assume a spherical halo with inner and outer boundaries defined by the clump launch radius \(r_{\mathrm{min}}\) and the halo extent \(n_{\mathrm{r}}\), respectively; we then divide the halo into a series of concentric shells, equally-spaced in radius. The absorption contributed by all concentric shells constitutes the total absorption of the model. The interval between the midplanes of two adjacent radial shells is \(d=(r_{\mathrm{n}}-r_{\mathrm{min}})/N_{\mathrm{shell}}\), where \(N_{\mathrm{shell}}\) is the total number of shells. The optimal way of choosing \(N_{\mathrm{shell}}\) will be discussed later in this paper. The ALPACA model accounts for the non-zero width of the absorption cross section in the velocity space and does not use the commonly adopted Sobolev approximation, which assumes that absorption occurs only when a photon appears exactly at the line center in the reference frame of the absorbing gas. Instead, in ALPACA, each outgoing photon will suffer from absorption by clumps with a range of velocities, even if it is not at the line center (i.e. out of resonance) in the reference frame of a clump. As we will demonstrate later in Section 7, this is particularly important when the velocity gradient of the clumps is small or the clump random motion is non-negligible. To escape, each photon must pass through every shell consecutively. In each shell, a photon may either pass freely through "holes" where no clump exists (with a probability of \(1-C_{i}\), where \(C_{i}\) is the geometric covering fraction of the clumps), or penetrate though a clump (with a probability of \(C_{i}e^{-r_{\mathrm{min}}}\), where \(r_{\mathrm{min}}\) is the optical depth of one clump of the relevant transition). Therefore, the probability of escape for a photon originating from the ISM of a galaxy can be expressed as: Figure 1: **Schematic for ALPACA, a non-Sobolev clumpy model for metal absorption lines.** For the DTB absorption line profile, it is assumed that a central source emits continuum photons isotropically and that all photons travel radially in a spherical halo that contains a number of absorbing clumps. For computational convenience, the halo is divided into a series of equally-spaced, concentric shells. The probability of escape for a continuum photon observed at a particular velocity is determined by the product of transmission probabilities through all radial shells. In each shell, the transmission probability is the sum of the probabilities of propagating through "holes" that are not occupied by any clumps (given by \(1-C(r_{i})\)) and penetrating through clumps (given by \(C_{i}(r_{i})e^{-r_{\mathrm{min}}(r-r_{i})}\)). The EW vs. \(b\) profile can be similarly calculated at different impact parameters. We refer the readers to Section 2.1 for a detailed derivation. \[P_{\rm esc}(-v)=\prod_{i=1}^{N_{\rm tot,d}}(1-C_{i}(r_{i})+C_{i}(r_{i})e^{-r_{\rm in }(v-v_{i})}) \tag{1}\] Here \(-v\) represents the location in the rest-frame velocity space. This implies, e.g., if the clumps are outflowing with \(v>0\), absorption on the blue side at \(-v<0\) will be observed. \(v_{i}\) is the (average) clump velocity in the \(i\)-th shell, determined by the clump radial velocity profile: \[v_{i}=v_{\rm d}(r)|_{r=v_{i}} \tag{2}\] where \(r_{i}(i=1,2,...,N_{\rm total})\) are the radial locations of the midplanes of all the shells where absorption will be calculated. 
\(C_{i}(r_{i})\) is the clump geometric covering fraction at \(r_{i}\), and \(r_{\rm in}(v-v_{i})\) is the clump optical depth of the relevant transition evaluated at \(v-v_{i}\), which is the photon's apparent frequency in velocity space in the reference frame of the clumps outflowing at \(v_{i}\). The geometric gas covering fraction \(C_{i}\), which is the fraction of the halo area covered by clumps at radius \(r\), is given by (see also Dijkstra & Kramer, 2012): \[C_{i}(r)\approx\pi\int_{r-\frac{r}{4}}^{r+\frac{r}{4}}dr^{\prime}n_{\rm d}(r^ {\prime})[R_{\rm G}^{2}(r^{\prime})-(r-r^{\prime})^{2}]\approx\pi n_{\rm d}(r )[R_{\rm G}^{2}(r)d-\frac{d^{3}}{12}] \tag{3}\] where \(n_{\rm d}(r)\) and \(R_{\rm G}(r)\) are the number density2 and radius of the clumps at \(r\), respectively. Eq. (3) comes from the operation that each clump is assigned to a shell (with a thickness of \(d\)) within which its center is located and will only contribute to the absorption of this shell. Therefore, the clumps that contribute to the absorption of a particular shell must have their centers located within \(r\pm d/2\). The way of calculating the contribution to \(C_{i}(r)\) for each clump is illustrated in Figure 2. Footnote 2: Here the “clump number density” refers to the volumetric number density of the clumps, defined as the number of clumps per physical volume. Not to be confused with the number density of ions _within_ the clumps, which we do not use in the model (we consider the ion column density within the clumps instead). Recall that the clump optical depth can be written as: \[\tau_{\rm ion}(v-v_{i})=N_{\rm ion,d}(r_{i})\sigma_{\rm ion}(v-v_{i}) \tag{4}\] where \(N_{\rm ion,d}(r)\) is the clump's ion column density3 at \(r_{i}\), and \(\sigma_{\rm ion}(v)\) is the cross section of the ion (both as a function of velocity), given by: Footnote 3: For a spherical clump, \(N_{\rm ion,d}=\frac{r}{4}n_{\rm ion,d}R_{\rm G}\), where \(n_{\rm ion,d}\) is the ion number density within the clump (Gronke et al., 2017). \[\sigma_{\rm ion}(v)=\frac{\sqrt{\pi}e^{2}f_{\rm line}}{m_{e}c\Delta\nu_{\rm D }}H(a,x) \tag{5}\] where \(e\) is the electron charge, \(m_{e}\) is the electron mass, \(f_{\rm line}\) is the oscillator strength of the line transition, \(\Delta\nu_{\rm D}=b_{\rm D}\nu_{\rm D}/c\) is the Doppler width, \(b_{\rm D}\) is the Doppler parameter within a single clump, \(a=A_{\rm line}/(4\pi\Delta\nu_{\rm D})\) is the normalized natural line width (where \(A_{\rm line}\) is the Einstein coefficient of the transition), \(x=(\nu-\nu_{\rm D})/\Delta\nu_{\rm D}=-v/b_{\rm D}\) is the unitless photon frequency, and \(H(a,x)\) is the Voigt function: \[H(a,x)=\frac{a}{\pi}\int_{-\infty}^{+\infty}\frac{e^{-v^{2}}}{(y-x)^{2}+a^{2}} \,{\rm d}y \tag{6}\] Next, specific radial profiles can be assumed for clump outflow velocity, clump number density, clump ion column density and clump radius as a function of the clumps' galactocentric radius \(r\), namely \(v_{\rm d}(r)\), \(n_{\rm d}(r)\), \(N_{\rm ion,d}(r)\) and \(R_{\rm G}(r)\), respectively. The values of \(n_{\rm d}(r)\) and \(R_{\rm G}(r)\) can be used to calculate \(C_{i}(r)\) using Eq. (3), whereas \(v_{\rm d}(r)\) and \(N_{\rm ion,d}(r)\) can be used to calculate \(\tau_{\rm ion}(v)\) using Eq. (4) - (6). Finally, with Eq. 
(1) to (6), one can derive a (normalized) model absorption line profile, whose intensity is proportional to the photons' escape probability: \[\frac{I(v)}{I_{\rm cont}}=P_{\rm esc}(v) \tag{7}\] where \(I_{\rm cont}\) is the intensity level of the continuum. #### 2.1.2 Absorption at Different Impact Parameters ALPACA can also model the absorption of photons that are emitted from a background source at a particular impact parameter (i.e. along transverse sightlines), which is suitable for studying quasar-quasar-galaxy/galaxy-galaxy pairs (see e.g. Hennawi et al., 2006; Hennawi & Prochaska, 2007; Prochaska & Hennawi, 2009; Steidel et al., 2010; Hennawi & Prochaska, 2013; Prochaska et al., 2013). For example, the equivalent width (EW) of the absorption as a function of impact parameter can be predicted in a manner similar to the derivation above (see also Dijkstra & Kramer, 2012, which focused on Ly\(\alpha\) absorption). The EW at a particular impact parameter \(b\) is given by: \[{\rm EW}(b)=\int_{b}{\rm d}\lambda(1-e^{-r(\lambda)})=\frac{1}{\nu_{0}}\int_{b} {\rm d}v(1-e^{-r(v)}) \tag{8}\] where \(\nu_{0}\) is the line center frequency of the transition, and the integral is performed over the observed velocities \(v\) where absorption is seen at impact parameter \(b\). The transmission at a particular velocity, \(e^{-r(v)}\), comes from the contribution of individual clumps along the transverse sightline at \(b\). It can be calculated by separating the sightline into a number of line segments (which is analogous to separating the spherical halo into different shells): \[e^{-r(v)}=\prod_{i=1}^{N_{\rm seg}}(1-C_{\rm\rm\frac{1}{i}}(r_{i})+C_{\rm\frac{ 1}{i}}(r_{i})e^{-r_{\rm in}(v-v_{\rm\frac{1}{i}})}) \tag{9}\] where the product is evaluated over all \(N_{\rm seg}\) line segments. The Figure 2: **Schematic for calculating the geometric covering fraction of clumps at radius \(r\).** The contribution of one clump (as shown in blue) to the geometric covering fraction at a shell midplane at \(r\) (as shown by the dotted black arc) is given by \(\sim\pi[R_{\rm G}^{2}(r^{\prime})-(r-r^{\prime})^{2}]\), which, after being integrated over \(r\pm d/2\), gives the total geometric covering fraction at \(r\), \(C_{\rm f}(r)\). galactocentric radii of the centers of the line segments constitute an array of \(r_{i}\) where the clump quantities are evaluated. Assuming that the angle between the vector \(-\vec{r_{i}}\) and the line of sight towards the observer is \(\theta\) (\(\in[\arcsin(b/r_{\rm n}),\pi/2]\)), the clump covering fraction and the clump radial velocity projected along the line of sight at \(r_{i}\), \(C_{\rm\xi,\parallel}(r_{i})\) and \(v_{\rm\xi,\parallel}\), are given by: \[C_{\rm\xi,\parallel}(r_{i})=f_{\rm c}(r_{\rm t})\Delta I \tag{10}\] \[v_{\rm\xi,\parallel}=v_{\rm d}(r_{i}){\rm cos}\theta \tag{11}\] where \(f_{\rm c}(r_{\rm t})=\pi n_{\rm ta}(r_{\rm t})R_{\rm d}^{2}(r_{\rm t})\) is the clump covering _factor_ at \(r_{i}\) (see Eq. 27 below in Section 6), \(\Delta I=2\sqrt{r_{\rm t}^{2}-b^{2}}/N_{\rm eg}\) is the length of each line segment, and \(v_{\rm d}(r_{\rm t})\) is the clump radial velocity at \(r_{i}\). Combined with Eq. (4) - (6), the equations presented above can be used to derive the EW of a particular transition as a function of the impact parameter \(b\). ### A Practical Implementation In practice, the solution to the ALPACA model can be further simplified by assuming specific functional forms (e.g. power-laws) for the radial profiles of clump parameters. 
In this section, we explore a practical implementation of the model, so that it can be conveniently applied to model observational data. #### 2.2.1 Clump Outflow Kinematics The cool clumps in a galactic outflow can be accelerated via a number of different mechanisms, including radiation pressure (e.g. Murray et al., 2005; Thompson et al., 2005; Martin, 2005), ram pressure (e.g. Murray et al., 2005; Fujita et al., 2009; Martin & Bouche, 2009; Sharma & Nath, 2012; Thompson et al., 2016) from the hot wind, and cosmic rays (e.g. Socrates et al., 2008; Everett et al., 2008; Dorfi & Breitschwert, 2012; Recchia et al., 2016; Zweibel, 2017; Mao & Ostriker, 2018; Jacob et al., 2018; Chan et al., 2019; Quataert et al., 2022a, b). Since these mechanisms are often dependent on multiple physical parameters and the clumps are likely to be accelerated by several mechanisms at the same time, the actual scaling of the acceleration force with radius is uncertain and difficult to determine observationally. For simplicity, we explore an \(r^{-\alpha}\) acceleration force, where the power-law index \(\alpha\) describes how fast the acceleration force drops with the galactocentric radius. For example, \(\alpha=2\) is an approximate scaling expected for acceleration due to optically thin radiation pressure, ram pressure, or cosmic rays4(Murray et al., 2005; Socrates et al., 2008; Martin & Bouche, 2009; Chisholm et al., 2016). We stress that the formalism of the ALPACA model is general and applicable to other radial scalings of the acceleration force. Footnote 4: Such a scaling is derived for the acceleration force per unit area; for cool clumps that are in pressure equilibrium with a hot wind, one might expect \(\alpha=4/3\)(Steidel et al., 2010). In addition to acceleration, the outflowing clumps will inevitably suffer from gravitational deceleration from the mass of the dark matter halo. Therefore, the kinematic equation of an outflowing clump, is given by: \[\frac{{\rm d}v_{\rm d,out}(r)}{{\rm d}t}=-\frac{GM(r)}{r^{2}}+Ar^{-\alpha} \tag{12}\] Assuming a Navarro-Frenk-White (NFW, Navarro et al., 1995) profile for the dark matter halo, the mass within radius \(r\), is given by: \[M(r)=4\pi\rho v_{\rm d}^{3}\left[{\rm ln}(1+r/r_{\rm s})-\frac{r/r_{\rm s}}{ 1+r/r_{\rm s}}\right] \tag{13}\] where \(\rho_{0}\) is the central density, given by: \[\rho_{0}=\frac{M_{\rm vir}}{4\pi r_{\rm s}^{3}\left[{\rm ln}(1+r_{\rm vir}/r_{ \rm s})-\frac{r_{\rm vir}/r_{\rm s}}{1+r_{\rm vir}/r_{\rm s}}\right]} \tag{14}\] where \(M_{\rm vir}\) and \(r_{\rm vir}\) are the halo virial mass and virial radius, respectively. \(r_{\rm s}=r_{\rm vir}/c\) is the scale radius, where \(c\) is the concentration parameter of the halo. In this context, Eq. 
(12) can be further simplified as: \[\frac{{\rm d}\big{(}\frac{1}{2}v_{\rm d,out}^{2}(r)\big{)}}{{\rm d}r}=-\frac{ 4\pi G\rho v_{\rm s}^{3}}{r^{2}}\left[{\rm ln}(1+r/r_{\rm s})-\frac{r/r_{\rm s }}{1+r/r_{\rm s}}\right]+\frac{A}{r_{\rm s}^{3}}\left(\frac{r_{\rm s}}{r} \right)^{\alpha} \tag{15}\] Integrating the equation above from the clump launch radius \(r_{\rm min}\) to \(r\) yields the following solution: \[v_{\rm d,out}(r)=\Big{\{}\frac{2GM_{\rm vir}}{{\rm ln}(1+c)-c/(1+c)} \left[\frac{{\rm ln}(1+r/r_{\rm s})}{r}-\frac{{\rm ln}(1+r_{\rm min}/r_{\rm s })}{r_{\rm min}}\right] \tag{16}\] \[+{\cal V}^{2}\left[1-\left(\frac{r}{r_{\rm min}}\right)^{1-\alpha }\right]\Big{\}}^{1/2}\] where we have replaced \(A\) with \({\cal V}(\equiv\sqrt{2Ar_{\rm min}^{1-\alpha}/(\alpha-1)})\), which is the asymptotic outflow velocity if there were no gravitational deceleration. Eq. (16) shows that \(v_{\rm d,out}(r)\) can be fully determined by six parameters in total: the virial mass \(M_{\rm vir}\), the virial radius \(r_{\rm vir}\), the concentration parameter \(c\), the clump launch radius \(r_{\rm min}\), the asymptotic velocity \({\cal V}\), and the power-law index \(\alpha\). Among these parameters, \(M_{\rm vir}\) and \(r_{\rm vir}\) can be inferred via the stellar mass-halo mass relation (e.g. Tasitsiomi et al., 2004; Moster et al., 2010, 2013; Behroozi et al., 2010, 2013; Rodriguez-Puebla et al., 2015, 2017; Kravtsov et al., 2018; Girelli et al., 2020) if the galaxy's stellar mass is known (e.g. from SED fitting), and can be inferred via the concentration-halo mass relation (e.g. Wechsler et al., 2002; Prada et al., 2012; Dutton & Maccio, 2014; Ludlow et al., 2014; Diemer & Kravtsov, 2015; Child et al., 2018; Diemer & Joyce, 2019). Therefore, for a given galaxy, this kinematic model has only three free parameters: \({\cal V}\), \(r_{\rm min}\) and \(\alpha\). In our following modeling, we simply fix \(r_{\rm min}\) to 1 kpc as its effect on \(v_{\rm d,out}(r)\) is relatively minor. In Figure 3, we show several example \(v_{\rm d,out}(r)\) profiles by varying \(M_{\rm vir}\), \({\cal V}\) and \(\alpha\) individually (assuming \(r_{\rm min}=1\) kpc). #### 2.2.2 Clump Number Density and Radius Heuristically, the clump number density \(n_{\rm d}(r)\) and the clump radius \(R_{\rm d}(r)\), can be assumed to vary radially in the form of a power-law: \[n_{\rm d}(r)=n_{\rm d,0}\Big{(}\frac{r}{r_{\rm min}}\Big{)}^{-\gamma} \tag{17}\] \[R_{\rm d}(r)=R_{\rm d,0}\Big{(}\frac{r}{r_{\rm min}}\Big{)}^{\delta} \tag{18}\] where \(n_{\rm d,0}=n_{\rm d}(r=r_{\rm min})\) and \(R_{\rm d,0}=R_{\rm d}(r=r_{\rm min})\). Although it is reasonable to assume that \(\gamma\geq 0\) and \(\delta\geq 0\) due to the increase in the volume of the halo and the decrease in ambient pressure at large \(r\), we allow \(\gamma\) and \(\delta\) to be negative as other physical mechanisms may be at play in clump destruction, fragmentation and (re)formation. Next, we normalize \(n_{\rm d}\) properly by introducing the total volume factor of the halo, \(F_{\rm V}\), which is the fraction of the halo volume occupied by the clumps. 
The total volume of the clumps in the halo, is given by: \[\begin{split} V_{\rm cl,total}&=\int_{r_{\rm min}}^{n}4 \pi r^{2}n_{\rm cl}(r)V_{\rm cl}(r){\rm d}r\\ &=\frac{16\pi^{2}\pi n_{\rm cl}\,n_{\rm cl}^{3}\,n_{\rm cl}^{3}n_{ \rm min}^{3}}{3(3\delta+3-\gamma)}\Big{[}\Big{(}\frac{n_{\rm h}}{r_{\rm min}} \Big{)}^{3\delta+3-\gamma}-1\Big{]}\end{split} \tag{19}\] On the other hand, \[V_{\rm cl,total}=F_{\rm V}V_{\rm h}=F_{\rm V}\frac{4}{3}\pi(r_{\rm h}^{3}-r_{ \rm min}^{3}) \tag{20}\] Eq. (19) and (20) can be used to further simplify the expression for \(C_{\rm f}(r)\) (Eq. 3) to: \[\begin{split} C_{\rm f}(r)&\approx\pi n_{\rm cl}(r) R_{\rm cl}^{2}(r)d\\ &=\frac{(3\delta+3-\gamma)F_{\rm f}\left[\big{(}\frac{n_{\rm h}} {r_{\rm min}}\big{)}^{3}-1\right]}{4R_{\rm cl,0}\left[\big{(}\frac{n_{\rm h}} {r_{\rm min}}\big{)}^{3\delta+3-\gamma}-1\right]}\Big{(}\frac{r}{r_{\rm min}} \Big{)}^{2\delta-\gamma}d\end{split} \tag{21}\] Eq. (21) implies that there is a triple degeneracy among \(F_{\rm V}\), \(\delta\) and \(\gamma\) - specifically, a parameter set {\(F_{\rm V}\), \(\gamma\), \(\delta\)} gives an identical \(C_{\rm f}(r)\) profile to the following parameter set (where \(\Delta\) represents a particular variation in \(\gamma\)): \[\{\frac{\big{[}\big{(}\frac{n_{\rm h}}{r_{\rm min}}\big{)}^{3\delta+3-\gamma+ \Delta/2}-1\big{]}\big{(}3\delta+3-\gamma\big{)}}{[\big{(}\frac{n_{\rm h}}{r_{ \rm min}}\big{)}^{3\delta+3-\gamma}-1\big{]}\big{(}3\delta+3-\gamma+\Delta/2 \big{)}}F_{\rm V},\gamma+\Delta,\delta+\frac{1}{2}\Delta\} \tag{22}\] As the model absorption lines are only sensitive to the \(C_{\rm f}(r)\) profiles rather than the individual values of \(F_{\rm V}\), \(\gamma\) or \(\delta\), in the following modeling, we simply fix \(\delta=0\) while keeping \(F_{\rm V}\) and \(\gamma\) as free parameters in order to reduce the parameter degeneracies and computational cost. The readers should keep in mind that in reality, it is likely that the clump radius varies with the galactocentric radius; nevertheless, such an effect is indistinguishable from a change in the radial distribution of the clumps under the current formalism of ALPaca. #### 2.2.3 Number of Shells Although in principle, the choice of the spacing between two adjacent shells \(d\) is arbitrary, it is advantageous to choose a relatively small \(d\) to better sample the radial velocity profile and improve the accuracy of the model. In Appendix A, we show that for \(\sigma_{\rm cl}\lesssim 100\,{\rm km\,s^{-1}}\), the model converges at \(d/R_{\rm cl}\sim 0.1\). Therefore, in the next section, we adopt \(d/R_{\rm cl}\sim 0.1\) as it achieves sufficient accuracy with reasonable computational cost. ## 3 Model Validation In this section, we test the validity of the ALPACA model with a Monte-Carlo radiative transfer (RT) code, tlac (Gronke & Dijkstra, 2014; Gronke et al., 2015). tlac is specifically designed for simulating the RT process of Ly\(\alpha\) photons with idealized configurations. Nevertheless, the RT processes of Ly\(\alpha\) and metal lines (e.g. Si ii\(\lambda 1260\) and C ii\(\lambda 1334\)) are very similar in nature, despite the following two subtle differences: (1) The line cross section is different for different transitions. 
Such a difference can be easily accounted for by replacing the relevant coefficients and physical constants used in calculating the line cross section, namely the oscillator strength of the line transition \(f_{\rm line}\), the Einstein coefficient of the transition \(A_{\rm line}\), the wavelength of the transition \(\lambda_{\rm line}\), and the ion mass \(m_{\rm ion}\); (2) Unlike Ly\(\alpha\), the metal lines often have nearby non-resonant transitions (as the ground state is split into \({}^{2}P_{1/2}\) and \({}^{2}P_{3/2}\)), so that a significant portion of the absorbed resonant photons can be re-emitted as non-resonant emission (e.g. Si ii\({}^{*}\lambda 1265\)). We neglect such fluorescent emission for the moment as we are mostly focused on the absorption line profile in this work. With those in mind, we can test whether the ALPACA model gives the correct absorption line profile with different clump radial velocity profiles, clump number density profiles, clump radii and clump ion column densities as inputs, by comparing with the Monte-Carlo RT simulations performed by tlac using the Ly\(\alpha\)\(\lambda 1216\) line. Once the model is validated, we can use it to predict other metal (e.g. Si and C) absorption line profiles by simply switching to a different transition. In order to perform Monte-Carlo RT simulations, tlac requires several input parameters / radial profiles, namely the total clump volume filling factor \(F_{\rm V}\), the radial distribution of the clumps, the clump radial velocity (including outflow and random motion), the clump column densities, and the clump radii. After the parameters of the clumps are fully specified, in each model, a UV continuum source is placed at the center of a spherically symmetric halo that emits \(10^{5}\) photons in the form of a flat continuum within \(\pm 1500\,{\rm km\,s^{-1}}\) of the rest-frame wavelength of the Ly\(\alpha\) transition (\(1215.67\,{\rm\AA}\)). All the photons will eventually escape from the halo (as we only consider a dust-free medium in this work), whereas a fraction of the emitted photons will be resonantly scattered by the clumps by one or more times before they escape. Figure 3: **Clump radial outflow velocity profiles \(v_{\rm cl,out}(r)\) with different {\(M_{\rm vir}\), \(\mathcal{V}\), \(\alpha\)} values as given by Eq. (16). In each panel, only one parameter is varied while the other two are fixed. The \(v_{\rm cl,out}(r)\) profiles derived from varying one of the parameters are shown in different colors.** For each Monte-Carlo RT simulation, all the photons that are scattered by the clumps at least once are filtered out and only the photons that have zero scatterings are used to construct the model absorption line profile5. In this way, the output absorption line profile from \(\tt{tlac}\), which does not account for the contribution of re-emission from scattered photons (see Section 9.1 for a discussion of such re-emission and the associated "infilling" effect), can be directly compared to that of ALPACA. Footnote 5: This is essentially assuming all the scattered photons will not re-enter the line of sight of the observer in a real observation. 
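As a concrete illustration of swapping these constants, the short sketch below evaluates the single-clump cross section of Eqs. (5)-(6), using the identity \(H(a,x)=\mathrm{Re}[w(x+ia)]\) with the Faddeeva function \(w\). This is an illustrative sketch rather than the released ALPACA code, and the oscillator strength and Einstein coefficient quoted for C ii \(\lambda\)1334 are indicative values that should be checked against a line list before quantitative use.

```
# Single-clump absorption cross section, Eqs. (5)-(6), in CGS units (sketch).
import numpy as np
from scipy.special import wofz

E_CHARGE, M_E, C_LIGHT = 4.8032e-10, 9.1094e-28, 2.9979e10   # esu, g, cm/s

def sigma_ion(v_kms, lam0_A, f_line, A_line, b_D_kms=12.85):
    """Cross section [cm^2] at velocity offset v (km/s) from the line centre."""
    nu0 = C_LIGHT / (lam0_A * 1e-8)             # line-centre frequency [Hz]
    dnu_D = (b_D_kms * 1e5 / C_LIGHT) * nu0     # Doppler width [Hz]
    a = A_line / (4.0 * np.pi * dnu_D)          # damping parameter
    x = -np.asarray(v_kms) / b_D_kms            # dimensionless frequency
    H = wofz(x + 1j * a).real                   # Voigt function H(a, x), Eq. (6)
    return np.sqrt(np.pi) * E_CHARGE**2 * f_line / (M_E * C_LIGHT * dnu_D) * H

v = np.linspace(-300.0, 300.0, 601)                       # km/s
sigma_lya = sigma_ion(v, 1215.67, 0.4164, 6.265e8)        # H I Lya
sigma_cii = sigma_ion(v, 1334.53, 0.128, 2.4e8)           # C II 1334 (indicative constants)
print(sigma_lya.max(), sigma_cii.max())                   # line-centre cross sections
```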
Our tests are based on the practical implementation described in Section 2.2, and are performed by varying the following key parameters or radial profiles, on at a time, with respect to the fiducial set of parameters (\(F_{\rm v}=0.005,\mathcal{V}=700\,{\rm km\,s}^{-1},\alpha=2.0,\sigma_{\rm cl}= 0\,{\rm km\,s}^{-1},\gamma=2.0,\log N_{\rm HI,cl}=15,R_{\rm cl}=500\,pc,b_{\rm D }=12.85\,{\rm km\,s}^{-1}\)): 1. the total clump volume filling factor \(F_{\rm v}\); 2. the clump radial velocity profile, including the clump outflow velocity \(v_{\rm cl,out}(r)\) (which is a function of \(\mathcal{V}\) and \(\alpha\)) and random velocity \(v_{\rm cl,mat}(r)\). The total clump radial velocity \(v_{\rm cl}(r)\), is given by: \[v_{\rm cl}(r)=v_{\rm cl,out}(r)+v_{\rm cl,mat}(r)\] (23) where \[v_{\rm cl,mat}(r)\sim\mathcal{N}(v,\mu=0,\sigma=\sigma_{\rm d})\] (24) is a random velocity field in the form of a normalized Gaussian distribution that is characterized by \(\sigma_{\rm cl}\), the 1D macroscopic velocity dispersion among the clumps; 3. the shape of the clump number density profile, namely the power-law index \(\gamma\) in Eq. (17); 4. the clump H i column density \(N_{\rm HI,cl}\); 5. the clump radius \(R_{\rm cl}\); 6. the Doppler parameter within a single clump \(b_{\rm D}\). These tests are designed to verify the consistency of the absorption line profiles predicted by \(\tt{tlac}\) and ALPACA over a wide range of physical parameters. We note that at present, \(\tt{tlac}\) only supports radially-varying \(v_{\rm d}(r)\) and \(n_{\rm cl}(r)\), but not \(N_{\rm HI,cl}(r)\) or \(R_{\rm cl}(r)\), i.e. the clump column density and radius cannot yet be varied continuously as a function of radius. These tests are therefore our first attempt to validate the ALPACA model with the currently available capabilities of \(\tt{tlac}\). In Section 5 where we apply ALPACA to observational data, we fix the clump radius to be constant (see the justification for such a choice in Section 2.2 above) and restrict ourselves to using constant clump ion column densities. We therefore consider the tests described above sufficient for the validation and application of ALPACA in this work. The results of these validation tests are presented in Figure 4. In each test, we consider a \(z\sim 3\) galaxy with a halo mass of \(M_{\rm vir}\sim 10^{11.8}M_{\odot}\) and assume that the clump launch radius \(r_{\rm min}=1\) kpc, the halo radius \(n_{\rm 100}\) kpc. It can be seen that the ALPACA model is highly consistent with the \(\tt{tlac}\) model over a wide range of physical parameters, suggesting that the simple formalism that we introduced in Section 2.1 is remarkably successful at describing the absorption of photons. ## 4 Effect of Individual Parameters Figure 4 also illustrates the effects of different physical parameters in the ALPACA model, and we summarize them as follows: * Clump volume filling factor \(F_{\rm V}\): Increasing \(F_{\rm V}\) will increase the depth of and broaden the width of the flux minimum ("trough") while keeping the location of the trough and the velocity range of Figure 4: **Comparison of the absorption line profiles predicted by ALPACA and \(\tt{tlac}\).** The fiducial set of parameters are: \(F_{\rm v}=0.005,\mathcal{V}=700\,{\rm km\,s}^{-1},\alpha=2.0,\sigma_{\rm cl}= 0\,{\rm km\,s}^{-1},\gamma=2.0,\log N_{\rm HI,cl}=15,R_{\rm cl}=500\,{\rm pc,and}\,b_{\rm D}=12.85\,{\rm km\,s}^{-1}\). 
In each panel, only one parameter is varied (as indicated by three different colors) while the other parameters are fixed at the fiducial values. The model spectra predicted by ALPACA and \(\tt{tlac}\) are shown in thick and thin curves, respectively. The absorption line profiles predicted by the two models are highly consistent over a wide range of physical parameters, suggesting that the formalism that we introduced in Section 2.1 is remarkably successful. the absorption profile roughly constant. This is because the clump covering fraction \(C_{\rm f}(r)\) has increased proportionally at each radius (cf. Eq. 21). * Clump asymptotic outflow velocity \(\mathcal{V}\): Modifying \(\mathcal{V}\) simply shifts the overall spectrum horizontally without changing the shape of the profile. Note that the location of the trough corresponds to the maximum of \(|v_{\rm cl}(r)|\), which is always smaller than \(\mathcal{V}\) due to gravitational deceleration (cf. Eq. 16). * Power-law index in the clump acceleration force profile \(\alpha\): Similar to \(\mathcal{V}\), changing \(\alpha\) also shifts the spectrum horizontally, although in a rather non-linear way. As \(\alpha\) increases, the maximum clump outflow velocity increases (see Figure 3), and the location of the trough shifts bluewards correspondingly. * Clump radial velocity dispersion \(\sigma_{\rm cl}\): Increasing \(\sigma_{\rm cl}\) tends to reduce the depth of the trough and broaden the "wings" of the absorption line profile. This can be understood as an effective broadening in the range of the clump velocities that produces the absorption (cf. Eq. 23). * Power-law index in the clump number density profile \(\gamma\): Decreasing \(\gamma\), which yields a flatter radial declining profile for the number density of the clumps, tends to decrease the depth of the trough and shift the location of the trough nearer to the line center. This is because decreasing \(\gamma\) reduces \(C_{\rm f}(r)\) (cf. Eq. 21) and effectively places more clumps in the outer region of the halo where the clumps have decelerated to lower velocities. * Clump H i column density \(N_{\rm HI,cl}\): Increasing \(N_{\rm HI,cl}\) deepens the trough and broadens the wings of the absorption by increasing the clump optical depth at all velocities (cf. Eq. 4). * Clump radius \(R_{\rm cl}\): Increasing \(R_{\rm cl}\) tends to decrease the depth of the trough, as it also changes \(n_{\rm cl}(r)\) at a fixed \(F_{\rm V}\) and the net effect is to decrease \(C_{\rm f}(r)\) and produce less absorption (cf. Eq. 21). * Clump Doppler parameter \(p_{\rm D}\): Increasing \(p_{\rm D}\) yields more absorption at different velocities without increasing the observed velocity range of absorption, because the clump velocity distribution remains unchanged, yet there is more non-resonant absorption at each observed velocity. ## 5 Application example: modeling the ISM & CGM of a sample of Lyman break galaxies Now that we have verified that the absorption line profiles predicted by the ALPACA model are reasonable by comparing them to the Monte-Carlo simulations carried out by tlac, next we apply ALPACA to the low-ionization, metal absorption lines observed in the rest-frame UV wavelengths. ### Fitting the Composite DTB Spectrum and the EW v.s. \(b\) Relation Simultaneously To tightly constrain the properties of the ISM and CGM of high-\(z\) galaxies, we utilize both the DTB absorption line spectrum and the observed EW v.s. \(b\) relation and model them simultaneously with ALPACA. 
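Conceptually, the joint fit amounts to maximizing a single likelihood that combines the two datasets. The snippet below is a minimal sketch assuming independent Gaussian uncertainties on the spectral pixels and on the EW points; the actual error model and weighting used in our fitting pipeline are described in Section 5.1.3 and may differ in detail.

```python
import numpy as np

def joint_log_likelihood(model_spec, obs_spec, spec_err,
                         model_ews, obs_ews, ew_err):
    """Combined Gaussian log-likelihood of the DTB spectrum and the EW vs. b points.
    A sketch only: it assumes independent Gaussian errors on both datasets."""
    chi2_spec = np.sum(((model_spec - obs_spec) / spec_err) ** 2)
    chi2_ew = np.sum(((model_ews - obs_ews) / ew_err) ** 2)
    return -0.5 * (chi2_spec + chi2_ew)
```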
#### 5.1.1 The Composite DTB Absorption Line Profile We use the stacked DTB C ii \(\lambda\)1334 spectrum of a sample of 55 (out of 124 in total) \(z\sim 3\) LBGs that are observed as part of the Keck Lyman Continuum Spectroscopic Survey (KLCS; Steidel et al., 2018). The rest-UV spectra are obtained by the Low Resolution Imaging Spectrometer (LRIS) spectrograph on the Keck I telescope. This composite UV spectrum is constructed by stacking 55 individual spectra with accurate systemic redshift measurements (with uncertainties \(<20\) km s\({}^{-1}\)) determined from nebular emission lines observed by MOSFIRE, which minimizes the effect of stochastic line-of-sight variation in the CGM and IGM attenuation compared to the spectrum of any single galaxy. The spectral resolution achieved at the wavelength of C ii \(\lambda\)1334 is \(R\sim 1300\), or equivalently, FWHM \(\sim 230\) km s\({}^{-1}\) or \(\sigma\sim 98\) km s\({}^{-1}\). Before performing spectrum modeling, we have corrected the observed composite spectrum for CGM and IGM attenuation using the average transmission curve at \(z\sim 3.05\). #### 5.1.2 The EW v.s. \(b\) Profile We use the observed EW v.s. \(b\) relation of a sample of \(z\sim 2\) star-forming galaxy-galaxy pairs obtained by LRIS (Steidel et al., 2010). The rest-frame EWs at three different impact parameters \(\langle b\rangle=31\), 63, and 103 kpc are obtained by integrating over the corresponding stacked spectra of 42, 164, and 306 background galaxies, respectively. In addition, the EW at \(b\sim 0\) is also estimated from the DTB spectra of all the foreground galaxies. We adopt the values given in Table 4 of Steidel et al. (2010) for C ii \(\lambda\)1334 for our modeling. #### 5.1.3 Joint Modeling of Two Datasets To self-consistently model the DTB spectrum and the EW v.s. \(b\) profile from two samples at different redshifts, we first check whether any correction needs to be applied to the datasets. We integrate the composite DTB C ii \(\lambda\)1334 absorption line profile of the \(z\sim 3\) LBG sample to derive a rest-frame EW (1.57 \(\pm\) 0.03 A) and compare with the average rest-frame EW at \(b\sim 0\) measured from the foreground galaxies of the \(z\sim 2\) star-forming galaxy sample (1.72 \(\pm\) 0.02 A). We then apply a correction factor \(f_{\rm corr}=1.57/1.72=0.91\) to the three EWs measured at \(b>0\). In this way, we can model the two different datasets jointly as if they were both obtained at \(z\sim 3\). We note that the joint modeling we perform here is somewhat expedient; ideally one should do joint modeling on a sample with both \(b=0\) and \(b>0\) observations self-consistently. In addition, we assume that there is a non-outflowing ISM component that also contributes to absorption on top of the clumpy, outflowing CGM component described above (Steidel et al., 2010). The ISM absorption component is assumed to be a Gaussian centered at \(v=0\): \(f_{\rm abs,ISM}=A_{\rm ISM}e^{-v^{2}/2\sigma_{\rm ISM}^{2}}\), where \(A_{\rm ISM}\) and \(\sigma_{\rm ISM}\) are the amplitude and standard deviation of the absorption, respectively. 
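To make the two ingredients above concrete, the sketch below evaluates the Gaussian ISM absorption component and applies the correction factor \(f_{\rm corr}\) to the \(b>0\) equivalent widths. The EW values in the array are placeholders for illustration only, not the measurements of Steidel et al. (2010).

```python
import numpy as np

def ism_absorption(v, A_ISM, sigma_ISM):
    """Non-outflowing ISM component: Gaussian absorption centered at v = 0 [km/s]."""
    return A_ISM * np.exp(-v**2 / (2.0 * sigma_ISM**2))

# Rescale the b > 0 EWs of the z ~ 2 sample onto the z ~ 3 LBG sample using
# the ratio of the two b ~ 0 rest-frame EWs quoted in the text.
f_corr = 1.57 / 1.72                        # ~ 0.91
ew_obs_b_gt_0 = np.array([2.0, 1.5, 1.0])   # placeholder EWs [Angstrom], for illustration only
ew_corrected = f_corr * ew_obs_b_gt_0
```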
\begin{table} \begin{tabular}{c c c} \hline \hline Parameter & Definition & Prior Range \\ (1) & (2) & (3) \\ \hline \(A_{\rm ISM}\) & Amplitude of the ISM absorption component & [0, 1] \\ \(\sigma_{\rm ISM}\) (km s\({}^{-1}\)) & Standard deviation of the ISM absorption component & [50, 200] \\ log \(F_{\rm V}\) & Clump total volume filling factor & [-4.0, -0.5] \\ \(\mathcal{V}\) (km s\({}^{-1}\)) & Clump asymptotic outflow velocity & [300, 2000] \\ \(\alpha\) & Power-law index in the clump acceleration force profile & [1.05, 2] \\ \(\gamma\) & Power-law index in the clump number density / covering fraction profile & [-5, 5] \\ \hline \hline \end{tabular} * **Notes.** The definitions and prior ranges of the free parameters of ALPACA used to fit the composite DTB C ii \(\lambda\)1334 spectrum and the EW v.s. \(b\) profile jointly. The columns are: (1) parameter name; (2) parameter definition; (3) prior range of the parameter. \end{table} Table 1: Definitions and priors of the free parameters of ALPACA used to perform joint fitting. Note that \(\sigma_{\rm ISM}\) and \(\sigma_{\rm cl}\) are two independent parameters that characterize the gas velocity dispersion in the ISM and CGM, respectively. To reduce the dimensionality of the parameter space, we take into account that the typical stellar mass of the \(z\sim 3\) LBG sample is \(M_{\star}\sim 10^{9.7}\,M_{\odot}\) (Pahl et al., 2022). Using the stellar mass-halo mass relation from Moster et al. (2010) and the concentration-halo mass relation from Dutton & Maccio (2014), such a stellar mass corresponds to\({}^{6}\) a virial mass of the halo \(M_{\rm vir}\sim 10^{12}\,M_{\odot}\), a virial radius \(r_{\rm vir}\sim 76\) kpc, and a concentration parameter \(c\sim 8.3\). For simplicity, in the model, we assume the halo radius \(r_{\rm h}=100\) kpc and the clump launch radius \(r_{\rm min}=1\) kpc. We remind the readers that the results are not sensitive to these choices. Footnote 6: We have adopted \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m,0}=0.3\) and \(\Omega_{\Lambda,0}=0.7\). We further assume that the clump radius \(R_{\rm cl}=100\) pc (Zahedy Figure 5: **Results of joint modeling the composite DTB absorption line profile and the EW vs. \(b\) profile of C ii \(\lambda\)1334.** The posterior PDF is shown, along with the 1-\(\sigma\) confidence intervals of the fitted parameters. The location of the maximum likelihood point is indicated by red dashed lines. On the upper right, panel (a) shows the best-fit model to the DTB absorption line profile. The non-outflowing ISM component and the outflowing CGM component are shown in green and red colors, respectively. Panel (b) shows the best-fit model (red) to the observed EW vs. \(b\) profile (black) at three different impact parameters: \(b/r_{\rm h}\simeq 1/3,\,2/3,\,1\). Also shown are twenty models with the highest likelihoods (blue). Panel (c) shows the clump outflow velocity profiles of twenty models (blue) with the highest likelihoods in the parameter space, as well as the best-fit outflow velocity profile (red). The level of the clump radial velocity dispersion (\(\sigma_{\rm cl}=120\) km s\({}^{-1}\)) is shown by a horizontal black dashed line. et al., 2019), clump C+ column density \(N_{\rm C+,cl}=10^{15}\,{\rm cm}^{-2}\), clump Doppler parameter \(b_{\rm D}=15\,{\rm km}\,{\rm s}^{-1}\) (i.e.
moderate internal turbulence), clump radial velocity dispersion \(\sigma_{\rm cl}=120\,{\rm km}\,{\rm s}^{-1}\), which is close to the largest observed nebular emission line widths but slightly smaller than \(1/\sqrt{3}\) of the circular velocity of the halo that we consider. As a result, the ALPACA model used to jointly fit the composite DTB C ii \(\lambda 1334\) spectrum and the EW v.s. \(b\) profile contains six parameters in total: the amplitude of the ISM absorption component \(A_{\rm ISM}\), the standard deviation of the ISM absorption component \(\sigma_{\rm ISM}\), the total clump volume filling factor \(F_{\rm V}\) (Eq. 21), the clump asymptotic outflow velocity \(\mathcal{V}\) (Eq. 16), the power-law index in the clump acceleration force profile \(\alpha\) (Eq. 16), and the power-law index in the clump number density (or covering fraction) \(\gamma\) (Eq. 21). We use the nested sampling package dynesty (Skilling, 2004, 2006; Speagle, 2020) in our fitting pipeline to map the posterior in such a multi-dimensional parameter space and find the best-fit parameters. At each sampled point in the parameter space, a model spectrum is calculated semi-analytically on-the-fly and convolved with the LRIS line spread function (LSF) with \(\sigma\simeq 100\,{\rm km}\,{\rm s}^{-1}\) before being compared to the input observed spectrum, and three EWs at \(b=33\), 66, and 99 kpc are also calculated to be compared with the three observed EWs at \(b>0\) correspondingly. The likelihood of each sampled point is the sum of the likelihoods of the model for the DTB spectrum and the EW v.s. \(b\) profile. Each fitting run yields a posterior probability distribution function (PDF) of the six free model parameters. The uncertainties in the fitted parameters are determined as certain quantiles (e.g. 16% - 84%, or 1-\(\sigma\) confidence intervals) of the samples in the marginalized PDF. The priors of the parameters used for fitting are listed in Table 1. In Figure 5, we present the best-fit parameters of the fitting run and the posterior PDF. We also present the best-fit DTB model absorption line profile, EW v.s. \(b\) profile, and the clump radial outflow velocity profiles in three subpanels. ### Interpreting the Modeling Results We hereby examine the best-fit parameters of the model to understand the corresponding physical scenario. In the best-fit model, the ISM component is preferred to contribute significantly to the absorption near the line center, with a standard deviation of \(\sigma_{\rm ISM}\sim 100\,{\rm km}\,{\rm s}^{-1}\). Such a value is consistent with the nebular emission line widths convolved with the instrumental LSF. As for the CGM component, the clumps are preferred to be highly non-volume-filling (\(F_{\rm V}\simeq 3\times 10^{-3}\ll 1\)), which corresponds to \(f_{\rm c}=\frac{3}{2}F_{\rm V}\,\frac{r_{\rm h}-r_{\rm min}}{R_{\rm cl}}\simeq 4\) clumps with \(R_{\rm cl}\sim 100\,{\rm pc}\) along each radial sightline. Such an \(f_{\rm c}\) value implies that the halo is essentially fully covered by clumps to an external observer, as the probability for a radial sightline to contain zero clumps is \(\sim e^{-4}<2\%\) (see Eqs. 33 and 34 in Section 6). As shown in Figure 5, the clump radial velocities are preferred to be a superposition of outflow and velocity dispersion.
The outflowing component has a rapid acceleration phase (\(r/r_{\rm min}\lesssim 5\)) towards a maximum outflow velocity of \(v_{\rm rot,max}\sim 400\,{\rm km}\,{\rm s}^{-1}\) and then gradually decelerates until \(r/r_{\rm min}\sim 100\). The location of the absorption trough basically corresponds to \(-v_{\rm rot,max}\), because the velocity gradient near \(v=v_{\rm rot,max}\) is close to zero and the number of clumps that provide resonant or nearly-resonant absorption at this velocity is the largest. The broad wings of the CGM absorption profile (especially on the blue side of the trough), however, are due to the perturbation on the clump outflow by a velocity dispersion of \(\sigma_{\rm cl}=120\,{\rm km}\,{\rm s}^{-1}\). The total clump radial velocities range from \(\sim-250\,{\rm km}\,{\rm s}^{-1}\) to \(\sim+700\,{\rm km}\,{\rm s}^{-1}\), which is slightly narrower than the velocity range where significant absorption is seen (\(v_{\rm obs}\sim-800-300\,{\rm km}\,{\rm s}^{-1}\)), because (1) the non-resonant absorption of clumps with \(b_{\rm D}=15\,{\rm km}\,{\rm s}^{-1}\) is accounted for; (2) the model spectrum is smoothed with \(\sigma\simeq 100\,{\rm km}\,{\rm s}^{-1}\). The best-fit power-law index in the clump acceleration force profile, \(\alpha\simeq 1.3\), is consistent with the expected scaling (\(\alpha=4/3\)) for cool clumps of constant mass that are in pressure equilibrium with a hot wind (Steidel et al., 2010). The power-law index in the clump number density or covering fraction, \(\gamma\), is preferred to be \(\simeq 1\), which corresponds to a relatively steep decrease with radius. In general, at large \(r\), the clump number density is expected to decrease due to the increase of the halo volume and the destruction of cold gas. On the other hand, the clumps are expected to expand in size due to the decrease in the pressure of the confining hot medium in the outer halo or grow due to various mixing and cooling processes. Our modeling suggests that the effect of the former physical process is more dominant over the latter. Finally, we performed a fitting run by only fitting the DTB spectrum without using the EW measurements at \(b>0\). We find that in this case, \(\gamma\) becomes poorly constrained, yet the values of the other five free parameters remain basically the same. Such an experiment emphasizes the importance of incorporating the information about the absorption at \(b>0\), which is to help constrain the radial profile of the clump number density and covering fraction. ### Parameter Degeneracies As is shown in the posterior distribution in Figure 5, there are a number of significant degeneracies between the parameters of the ALPACA model. Here we discuss them as follows: * \(A_{\rm ISM}\) and \(F_{\rm V}\): These two parameters are anti-correlated, as they contribute to the total absorption by modulating the amplitude of the ISM component and the clump covering fraction of the CGM component, respectively. * \(A_{\rm ISM}\) and \(\mathcal{V}\): These two parameters are positively correlated, as increasing \(A_{\rm ISM}\) effectively adds more absorption around the line center and shifts the trough towards less negative velocities, whereas increasing \(\mathcal{V}\) shifts the trough towards more negative velocities (see Figure 4 in Section 4). 
* \(\mathcal{V}\) and \(\alpha\): These two parameters are anti-correlated, as a larger \(\mathcal{V}\) increases the maximum clump outflow velocity, whereas a smaller \(\alpha\) decreases the maximum clump outflow velocity (assuming \(\mathcal{V}\) is fixed; see Figure 3 in Section 2.2.1). Such a degeneracy also translates to an anti-correlation between \(A_{\rm ISM}\) and \(\alpha\). * \(F_{\rm V}\) and \(\gamma\): These two parameters are anti-correlated, as increasing \(F_{\rm V}\) or decreasing \(\gamma\) while keeping other parameters fixed results in an increase in the clump covering fraction and hence the total amount of absorption (see Eq. 21 and Figure 4). The parameter degeneracies of the ALPACA model may be broken with additional modeling or observations. For example, \({\rm Ly}\alpha\) emission modeling can be used to further constrain \(\mathcal{V}\) and \(\alpha\) and help break corresponding degeneracies, as the clump kinematic parameters are strongly correlated with particular \({\rm Ly}\alpha\) spectral features, e.g. the location of the double peaks and the blue-to-red peak ratio (Li et al., 2021, 2021; Li & Gronke, 2021; Erb et al., 2022). ### Model Anatomy: Where Does the Absorption Come from in the CGM? In ALPACA, since significant clump velocity dispersion is accounted for and there is no simple one-to-one mapping between the velocity space and the real space, it is not straightforward to describe where the majority of the absorption originates from. Therefore, here we zoom in on the internal structure of the model and reveal the relative contributions to the total absorption from the clumps located at different radii in the CGM. We examine three observed velocities in the DTB absorption line profile: \(v_{\rm obs}=+100,-200,{\rm and}\,-500\,{\rm km\,s}^{-1}\). As shown by Eq. (1), the "attenuation" factor of each shell, namely the fraction of flux density absorbed by the clumps, is given by \(C_{\rm f}(r)(1-e^{-\tau(r)})\), where \(C_{\rm f}(r)\) and \(\tau(r)\) are the clump covering fraction and optical depth at \(r\), respectively. In Figure 6, we plot the probability density distributions of the normalized galactocentric radii of the clumps, \(r/r_{\rm min}\), weighted by the attenuation factor \(C_{\rm f}(r)(1-e^{-\tau(r)})\) at three observed velocities. In this way, we can clearly see where the clumps contribute most to the total absorption in the CGM. In Figure 6, we see that at all three different velocities, the largest contribution to the total absorption comes from \(r/r_{\rm min}\sim 1\). This is because in the best-fit model, \(C_{\rm f}(r)\propto n_{\rm d}(r)\propto r^{-1}\), i.e. the clump number density or covering fraction peaks at \(r/r_{\rm min}\sim 1\) and decreases fairly significantly with radius. For \(v_{\rm obs}=-500\,{\rm km\,s}^{-1}\), the probability density distribution decreases monotonically with radius, and the majority of absorption comes from \(r/r_{\rm min}\la 40\), within which the total velocity of the clumps is able to reach the corresponding resonant velocity \(v_{\rm d}=500\,{\rm km\,s}^{-1}\). For \(v_{\rm obs}=-200\,{\rm km\,s}^{-1}\), the contribution to the total absorption comes from all over the halo. For \(v_{\rm obs}=+100\,{\rm km\,s}^{-1}\), the majority of absorption comes from \(r/r_{\rm min}\sim 1\) and \(r/r_{\rm min}\ga 30\), whereas the contribution within \(1<r/r_{\rm min}<30\) is negligible, although the clump number density is high. 
This is because the only location for the clumps to have a net negative total velocity \(v_{\rm cl}=-100\,{\rm km\,s}^{-1}\) is where the clump kinematics are random velocity-dominated, i.e. \(v_{\rm cl,out}\simeq 0\) and \(\sigma_{\rm cl}\simeq 120\,{\rm km\,s}^{-1}\), which is best satisfied at \(r/r_{\rm min}\sim 1\) and \(\sim 100\). The fact that the attenuation factor is highly velocity-dependent suggests that the clump optical depth \(\tau(r)\) is still the dominant contributor to the total absorption, rather than the clump covering fraction \(C_{\rm f}(r)\). Overall, compared to the models that do not account for significant clump random motion, ALPACA reveals a physical scenario where the absorption observed at a particular velocity is contributed by the clumps from a fairly broad range of radii, rather than from a single point of resonance. We will investigate these differences further in Section 7. ### Alternative Clump Radial Outflow Velocity Profiles Although the formalism of the ALPACA model is general, any practical implementation for the purpose of application of the model will inevitably restrict the model to particular physical regimes. For example, the kinematic model of the clump outflow that we explored in Section 2.2.1 (Eq. 12) is highly simplistic and model-dependent, and will not capture all possible radial profiles of clump outflows. Therefore, here we explore a different type of radial profile for clump outflow velocities and see whether it can also provide a reasonable fit to the observational data. We consider a scenario where the gravitational deceleration force is weak and negligible compared to the power-law acceleration force. In this case, the kinematic equation of an outflowing clump is simply given by: \[\frac{dv_{\rm cl,tot}(r)}{dt}=Ar^{-\alpha} \tag{25}\] which can be solved as: \[v_{\rm cl,tot}(r)=\mathcal{V}\Big{(}1-\Big{(}\frac{r}{r_{\rm min}}\Big{)}^{1-\alpha}\Big{)}^{1/2} \tag{26}\] where we have replaced \(A\) with \(\mathcal{V}(\equiv\sqrt{2Ar_{\rm min}^{1-\alpha}/(\alpha-1)})\), the asymptotic clump outflow velocity at \(r\to+\infty\). Eq. (26) is exactly the radial velocity profile used by Steidel et al. (2010). We find that such a monotonically increasing radial outflow velocity profile is also able to yield a reasonable fit to the composite C ii \(\lambda 1334\) DTB spectrum and the EW v.s. \(b\) profile modeled in Section 5.1, with the following best-fit parameters: \(A_{\rm ISM}=0.46^{+0.08}_{-0.06}\), \(\sigma_{\rm ISM}=103^{+14}_{-12}\,{\rm km\,s}^{-1}\), \(\log F_{\rm V}=-2.66^{+0.04}_{-0.04}\), \(\mathcal{V}=452^{+299}_{-29}\,{\rm km\,s}^{-1}\), \(\alpha=1.30^{+0.40}_{-0.20}\), \(\gamma=1.05^{+0.08}_{-0.09}\), \(\log N_{\rm C+,cl}=15\), \(\sigma_{\rm cl}=120\,{\rm km\,s}^{-1}\). In Figure 7, we show the best-fit models and the \(v_{\rm cl,tot}(r)\) profiles of this joint fitting run. Such an experiment reminds us that there is still some freedom in the clump radial velocity distribution that may not be fully constrained by the joint fitting of the DTB spectrum and the EW v.s. \(b\) profile. Nonetheless, these different radial velocity distributions do share one thing in common: they can all be decomposed into two velocity components - a velocity dispersion and a radially-varying outflow, the latter of which is smaller by several hundred \(\,{\rm km\,s^{-1}}\) than the maximum velocity of absorption.
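For reference, below is a minimal sketch of the alternative outflow law of Eq. (26), with the Gaussian clump velocity dispersion of Eqs. (23)-(24) added on top; the parameter values are the best-fit numbers quoted above.

```python
import numpy as np

def v_cl_out(r, r_min, V, alpha):
    """Clump outflow velocity for a pure power-law acceleration force (Eq. 26),
    v(r) = V * (1 - (r/r_min)^(1-alpha))^(1/2), valid for alpha > 1."""
    return V * np.sqrt(1.0 - (r / r_min) ** (1.0 - alpha))

def v_cl_total(r, r_min, V, alpha, sigma_cl, rng=np.random.default_rng(0)):
    """Total clump radial velocity: outflow plus a Gaussian random component (Eqs. 23-24)."""
    return v_cl_out(r, r_min, V, alpha) + rng.normal(0.0, sigma_cl, size=np.shape(r))

r = np.logspace(0.0, 2.0, 50)                  # r / r_min from 1 to 100
v = v_cl_total(r, 1.0, 452.0, 1.30, 120.0)     # best-fit values quoted above [km/s]
```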
One promising way to further constrain the clump radial velocity profile is to incorporate spatially-resolved Ly\(\alpha\) emission modeling, assuming that the gas that produces LIS absorption lines is also responsible for producing extended Ly\(\alpha\) emission via resonant scattering. As the Ly\(\alpha\) blue-to-red peak flux ratio is sensitive to the local clump outflow velocity, one can distinguish whether the clump outflow has decelerated significantly or remains at a high speed at large radii by modeling the Ly\(\alpha\) profiles observed at the halo outskirts (Erb et al., 2022). Recently work on mapping the 2D line-of-sight kinematics via Ly\(\alpha\) absorption may also help break the degeneracy (Chen et al., 2020). ## 6 Covering and volume filling parameters of the cool gas The physical properties of the "cool" (\(T\sim 10^{4}\)K) gas in a galactic environment, which is responsible for producing the UV absorption lines of the low ions, have been studied extensively in recent years, both theoretically and observationally. Notably, McCourt et al. (2018) first carried out a comprehensive analysis by Figure 6: **Probability density distribution of the normalized galactocentric radii of the clumps, \(r/r_{\rm min}\), weighted by the attenuation factor \(C_{\rm f}(r)(1-e^{-\tau(r)})\) at three different observed velocities.** For \(v_{\rm obs}=-500\,{\rm km\,s}^{-1}\), the probability density distribution decreases monotonically with radius, and the majority of absorption comes from \(r/r_{\rm min}\la 40\). For \(v_{\rm obs}=-200\,{\rm km\,s}^{-1}\), the contribution to the total absorption comes from all over the halo. For \(v_{\rm obs}=+100{\rm km\,s}^{-1}\), the majority of absorption comes from \(r/r_{\rm min}\sim 1\) and \(r/r_{\rm min}\ga 30\), whereas the contribution within \(1<r/r_{\rm min}<30\) is negligible. combining hydrodynamic simulations with observations and summarized with the following physical picture for the cool gas: a mist or fog of cloudlets with a large area covering \(factor^{7}\)\(f_{\rm c}\gg 1\) but a small total volume filling factor \(F_{\rm V}\ll 1\)(see also Liang & Remming, 2020). This physical picture is supported by a number of observational studies, e.g. Stocke et al. (2013) report the volume filling factor of the cool clouds in the CGM of a sample of low-\(z\) galaxies is on average a few percent (see also Keeney et al., 2017), and Zahedy et al. (2019) find that the mean volume filling factor of the cool gas is about \(10^{-3}\) for massive ellipticals at \(z\sim 0.4\).6. On the other hand, a close-to-unity coverage by the cool gas has been observed for the CGM halos of both galaxies and luminous quasars (e.g. Prochaska et al., 2013; Cantalupo et al., 2014; Hennawi et al., 2015; Borisova et al., 2016; Cai et al., 2017; Zahedy et al., 2019; Rudie et al., 2019). Footnote 6: Note that we have used a different terminology from McCourt et al. (2018). In this paper, we use the area covering \(factor\) to refer to the average number of cloudlets intercepted per line of sight, and the covering \(function\) to refer to the fraction of area covered by the clumpy gas. The volume filling fraction defined in McCourt et al. (2018) has the same meaning as the volume filling factor defined in this work. Footnote 7: See also Prochaska et al. (2019), who use FRB constraints and derive that the volume filling factor of the clumpy cool gas is \(<10^{-4}\) for a massive galaxy at \(z\sim 0.4\). 
Before we move on, it is instructive to clarify the definition of the covering fraction \(C_{\rm f}\), the covering factor \(f_{\rm c}\), and the volume filling factor \(F_{\rm V}\) of the cool gas. All three of these parameters can be evaluated either as a global quantity of a halo or as a radially-varying profile as a function of radius or velocity. The expression for the covering fraction \(C_{\rm f}\) as a function of radius has been derived in Eq. (3). As for \(f_{\rm c}\) and \(F_{\rm V}\), one can consider the case of an idealized clumpy medium that only consists of spherical clumps and derive a relation between the covering factor and the volume filling factor. Specifically, the covering factor _at a particular radius_, \(f_{\rm c}(r)\), is given by (Dijkstra & Kramer, 2012): \[f_{\rm c}(r)=n_{\rm cl}(r)\sigma_{\rm cl}(r)=n_{\rm cl}(r)\pi R_{\rm cl}^{2}(r) \tag{27}\] where \(r\) is the radial location of the clumps, \(n_{\rm cl}(r)\) is the number density of the clumps and \(\sigma_{\rm cl}(r)=\pi R_{\rm cl}^{2}(r)\) is the geometric cross-section of a clump of radius \(R_{\rm cl}(r)\). \(f_{\rm c}(r)\) has the units of length\({}^{-1}\) and is analogous to the opacity \(\kappa(r)\) in a homogeneous medium. The volume filling factor _at a particular radius_, \(F_{\rm V}(r)\), is given by: \[F_{\rm V}(r)=n_{\rm cl}(r)V_{\rm cl}(r)=n_{\rm cl}(r)\frac{4}{3}\pi R_{\rm cl}^{3}(r) \tag{28}\] where \(V_{\rm cl}(r)=\frac{4}{3}\pi R_{\rm cl}^{3}(r)\) is the geometric volume of a clump with radius \(R_{\rm cl}(r)\). Comparing Eqs. (27) and (28), we have: \[F_{\rm V}(r)=\frac{4}{3}R_{\rm cl}(r)f_{\rm c}(r) \tag{29}\] Note that although the above relation is derived under the assumption of spherical clumps, it also holds (modulo a geometric correction factor) for a more general geometric configuration of the clumpy gas. This is because \(F_{\rm V}(r)\) and \(f_{\rm c}(r)\) will always be proportional and differ by a factor of \(V_{\rm cl}(r)/\sigma_{\rm cl}(r)\), the clump volume-to-cross-section ratio. One can further consider the following corresponding spatially-integrated quantities: * The _total_ volume filling factor of the halo, \(F_{\rm V}\), is given by: \[F_{\rm V}=\frac{1}{V_{\rm h}}\int_{r_{\rm min}}^{r_{\rm h}}F_{\rm V}(r){\rm d}V(r)\] (30) * The _integrated_ gas covering factor, \(f_{\rm c}\), i.e. the mean number of clumps along a line of sight at impact parameter \(b\), is given by: \[f_{\rm c}(b)=2\int_{b}^{r_{\rm h}}\frac{r{\rm d}r}{\sqrt{r^{2}-b^{2}}}f_{\rm c}(r)\] (31) In particular, at \(b=0\) ("down the barrel"), \(f_{\rm c}\) is given by: \[f_{\rm c}(0)=2\int_{r_{\rm min}}^{r_{\rm h}}f_{\rm c}(r){\rm d}r=\frac{3}{2}\int_{r_{\rm min}}^{r_{\rm h}}\frac{F_{\rm V}(r)}{R_{\rm cl}(r)}dr \tag{32}\] which, in the special case where both \(F_{\rm V}(r)\) and \(R_{\rm cl}(r)\) are constant, can be simplified to: \[f_{\rm c}(0)=\frac{3}{2}F_{\rm V}\frac{r_{\rm h}-r_{\rm min}}{R_{\rm cl}} \tag{33}\] It can be seen that in the limit of \(r_{\rm h}/R_{\rm cl}\gg 1\), even a small \(F_{\rm V}\ll 1\) can yield a large \(f_{\rm c}(0)\gg 1\) (McCourt et al., 2018). Figure 7: **Results of joint modeling using an alternative clump outflow velocity profile assuming gravitational deceleration is negligible (see Eq. 25).** Panel (a) shows the best-fit model to the DTB absorption line profile. The non-outflowing ISM component and the outflowing CGM component are shown in green and red colors, respectively. Panel (b) shows the best-fit model (red) to the observed EW v.s.
\(b\) profile (black) at three different impact parameters: \(b/r_{\rm h}\simeq 1/3,\,2/3,\,1\). Also shown are twenty models with the highest likelihoods (blue). Panel (c) shows the clump outflow velocity profiles of twenty models (blue) with the highest likelihoods in the parameter space, as well as the best-fit outflow velocity profile (red). The level of the clump radial velocity dispersion (\(\sigma_{\rm cl}=120\,{\rm km\,s^{-1}}\)) is shown by a horizontal black dashed line. \(\bullet\) The _integrated_ gas covering fraction of the halo, \(C_{\rm f}\), has the physical meaning of the fraction of sightlines at a particular impact parameter intercepted by at least one clump to an external observer. The Poisson probability of having sightlines at impact parameter \(b\) that contain zero clumps is: \[P(N_{\rm clump}=0|f_{\rm c}(b))=e^{-f_{\rm c}(b)} \tag{34}\] hence \(C_{\rm f}(b)=1-P(N_{\rm clump}=0|f_{\rm c}(b))=1-e^{-f_{\rm c}(b)}\). ## 7 Sobolev vs. non-Sobolev In this section, we explore the effect of the Sobolev approximation in the context of a clumpy galactic environment. The idea of the Sobolev approximation is that if the width of the absorption cross section in velocity space is much smaller than the change in the velocity of the absorbing gas within a short distance, the absorption at each velocity can be approximated as absorption that only happens at the resonance point. Traditionally, the Sobolev approximation is usually applied to a continuous medium, such as a homogeneous wind (e.g. Prochaska et al., 2011; Scarlata and Panagia, 2015; Carr et al., 2022). We hereby examine the use of the Sobolev approximation in a clumpy medium, and present a quantitative comparison between Sobolev and non-Sobolev modeling. ### A Homogeneous Medium v.s. An Extremely Clumpy Medium In a homogeneous expanding wind, the Sobolev approximation gives the line optical depth at a given radius \(\tau_{\rm S}(r)\), which is solely determined by the gas number density and velocity gradient at that radius (Sobolev, 1960; Lamers and Cassinelli, 1999): \[\tau_{\rm S}(r)=\frac{\pi e^{2}}{m_{\rm e}c}f_{\rm line}\lambda_{\rm line}n_{\rm I}(r)\Big{|}\frac{{\rm d}v}{{\rm d}r}\Big{|}_{r}^{-1} \tag{35}\] where \(n_{\rm I}\) is the number density of the relevant ion at the lower level of the transition. The essence of Eq. (35) is that it reduces the interaction between the photons and the ions to a _local_ process, which simplifies the calculation of optical depth (which is generally an integral over distance) to an evaluation of the properties of the absorbing gas at a single point. The DTB absorption line profile can be calculated as: \[I(v)=e^{-\tau_{\rm S}(r(v))} \tag{36}\] where \(r(v)\) is the relation between velocity and radius that expresses the optical depth (and hence the line intensity) as a function of velocity. We compare this solution obtained for a homogeneous medium with a hypothetical extremely clumpy model, where the halo is fully filled with clumps. We calculate the absorption of this model in a non-Sobolev way, meaning that the clumps are separated into a series of concentric shells (as we did in Section 2.1) that all contribute to the absorption at a particular observed velocity, regardless of whether the absorption is resonant.
To ensure a direct comparison with the corresponding homogeneous model, we assign the following column densities to a particular shell whose midplane is located at \(r\): \[N_{\rm ion,cl}=n_{\rm I}(r)d_{\rm shell} \tag{37}\] where \(n_{\rm I}(r)\) is the corresponding ion number density in the homogeneous medium in Eq. (35). Since in an extremely clumpy medium \(C_{\rm f}(r)\simeq 1\) everywhere, the escape probability of a photon observed at velocity \(-v\) is simply given by: \[P_{\rm esc}(-v)=\prod_{i=1}^{N_{\rm shell}}e^{-\tau(r_{i},\,-v)} \tag{38}\] where \(\tau(r_{i},-v)\) is the clump optical depth of the \(i\)-th shell evaluated at the observed velocity \(-v\). Eqs. (37) and (38) can be combined with Eqs. (4) - (7) to derive the normalized absorption line intensity as a function of velocity, \(I(v)\), for an extremely clumpy medium. To compare Sobolev modeling in a homogeneous medium with non-Sobolev modeling in an extremely clumpy medium more quantitatively, we have designed several numerical experiments. For the sake of simplicity, we assign both the ions in the homogeneous medium and the clumps in the clumpy medium a radial outflow velocity profile that increases linearly with \(r\): \[v_{\rm out}(r)=\frac{r-r_{\rm min}}{r_{\rm h}-r_{\rm min}}v_{\rm max} \tag{39}\] where \(v_{\rm max}\) is the maximum outflow velocity achieved at \(r_{\rm h}\). In this case, the radial velocity gradient, \({\rm d}v_{\rm out}/{\rm d}r=v_{\rm max}/(r_{\rm h}-r_{\rm min})\), is constant. For the fiducial model, we assume \(n_{\rm I}(r)=n_{\rm I,0}(r/r_{\rm min})^{-\gamma}\) and \(n_{\rm I,0}=10^{-7}\,{\rm cm}^{-3}\). We present the results in Figure 8. In each panel, two sets of models with \(\gamma=1.0\) and \(2.0\) are shown. In the left panel, we set \(v_{\rm max}=1000\,{\rm km\,s}^{-1}\) and \(b_{\rm D}=1.3\,{\rm km\,s}^{-1}\). Such a choice satisfies the "large velocity gradient" criterion derived by Carr et al. (2022): \[\frac{\eta}{\gamma}\gg\frac{b_{\rm D}}{v_{\rm out}(r)} \tag{40}\] where \(\eta\) is the power-law index in the velocity scaling with \(r\): \(v_{\rm out}\propto r^{\eta}\). For an outflow velocity profile that increases linearly with \(r\), we have \(\eta=1\). In this case, the Sobolev and non-Sobolev models are fully consistent with each other, suggesting that the Sobolev approximation is working well. However, in the middle panel, we show the models with a decreased \(v_{\rm max}\) of \(100\,{\rm km\,s}^{-1}\) and an increased \(b_{\rm D}\) of \(13\,{\rm km\,s}^{-1}\), which corresponds to a low velocity gradient scenario that does not satisfy Eq. (40) anymore. In this case, the Sobolev models underpredict the amount of absorption on both the red and blue sides of the line center, suggesting that the Sobolev approximation is starting to break down. In the right panel, we keep the large velocity gradient by setting \(v_{\rm max}=1000\,{\rm km\,s}^{-1}\) and \(b_{\rm D}=1.3\,{\rm km\,s}^{-1}\) but add a small random velocity (\(\sigma=20\,{\rm km\,s}^{-1}\)) to the ions and clumps. In this case, the absorption line profiles predicted by the Sobolev approximation quickly become chaotic and noisy due to the stochasticity added to the velocity gradient \({\rm d}v/{\rm d}r\). In contrast, the profiles predicted by non-Sobolev modeling remain basically unperturbed and stable. In short, these experiments have demonstrated that in a homogeneous medium or an extremely clumpy medium, the Sobolev approximation only works where the velocity gradient is sufficiently large and the random motion of the gas is negligible.
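To make the homogeneous-medium side of this comparison concrete, the sketch below evaluates the Sobolev prediction of Eqs. (35), (36) and (39) for the fiducial \(n_{\rm I}(r)\) profile; the C ii \(\lambda\)1334 atomic data and physical constants are standard literature values rather than numbers taken from our pipeline.

```python
import numpy as np

# Constants (CGS) and C II 1334 atomic data (standard literature values)
e_esu, m_e, c_cgs = 4.803e-10, 9.109e-28, 2.998e10
f_line, lambda_cm = 0.1278, 1334.53e-8

kpc = 3.086e21
r_min, r_h = 1.0 * kpc, 100.0 * kpc
v_max = 1000.0e5                 # maximum outflow velocity [cm/s]
n_I0, gamma = 1e-7, 2.0          # fiducial ion density normalization [cm^-3] and slope

r = np.linspace(r_min, r_h, 2000)
n_I = n_I0 * (r / r_min) ** (-gamma)
dvdr = v_max / (r_h - r_min)     # constant gradient for the linear law (Eq. 39)

# Sobolev optical depth (Eq. 35) and the resulting line profile (Eq. 36);
# the pair (v_out, I_of_v) traces the blueshifted absorption profile.
tau_S = np.pi * e_esu**2 / (m_e * c_cgs) * f_line * lambda_cm * n_I / dvdr
v_out = (r - r_min) / (r_h - r_min) * v_max
I_of_v = np.exp(-tau_S)
```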
### A Not-So-Clumpy (\(F_{\rm{V}}\ll 1\)) Medium In a realistic CGM, the clumps are likely to be non-volume-filling and hence there will be many holes in the medium that the photons can pass through freely. In this case, the Sobolev optical depth given by Eq. (35) can no longer be used directly, as the absorption now depends on both the clump optical depth and the clump covering fraction. In fact, unlike the ion number density \(n_{\rm{I}}(r)\) or the velocity gradient \({\rm d}v\,/\,{\rm d}r\), the clump covering fraction \(C_{\rm{I}}(r)\) is not simply a function of the local properties of the clumps at \(r\). In order to calculate \(C_{\rm{I}}(r)\), one need to know the number of clumps that contribute to the geometric coverage of a sphere with radius \(r\), which requires specifying and integrating over a finite width for such a sphere. This is why it is necessary to use a series of shells to properly calculate the absorption in ALPACA; one can also see from Eq. (3) that \(C_{\rm f}(r)\) not only depends on \(n_{\rm d}(r)\) and \(R_{\rm d}(r)\), but also the shell width \(d\)9. Footnote 9: Strictly speaking, one can derive a \(C(r)\) profile that is merely a function of \(r\) and independent of the shell width \(d\). For example, Dijkstra & Kramer (2012) set \(d=2R_{\rm d}(r)\) and derived \(C_{\rm f}(r)\simeq\frac{4}{3}\pi n_{\rm d}(r)R_{\rm d}^{3}(r)\), which is essentially accounting for all the clumps that can possibly intersect the sphere at \(r\). In ALPACA, however, the shell width is always chosen to be smaller than \(R_{\rm d}(r)\), so that each clump may intersect multiple shells. In this case, we need to use the shell-width-dependent version of \(C_{\rm f}(r)\) (given by Eq. 3) to avoid multiple-counting the contribution of the clumps. However, it is still possible to utilize the idea of the Sobolev approximation in a non-volume-filling clumpy medium. One can imagine that if the radial velocity gradient of the clumps is sufficiently large, the clumps that are moving at non-resonant velocities are shifted far away from resonance and will contribute negligibly to the absorption. In this case, we can possibly only account for resonant absorption and neglect all the non-resonant absorption, similar to how we applied the Sobolev approximation in a homogeneous medium. With regard to this, we similarly design several experiments to compare Sobolev and non-Sobolev modeling in such a non-volume-filling clumpy medium. In this regime, Monte-Carlo simulations from tlac can be conveniently performed at low computational costs to compare with the analytic models as an independent check. We use the following set of parameters for the fiducial model: \(F_{\rm V}=0.005,\log N_{\rm HI,d}=14,\gamma=2.0,{\rm and}\,R_{\rm d}=500\,{ \rm pc}\). We then assume the linearly-increasing radial velocity profile as we do in Section 7.1. The non-Sobolev models are generated under the standard formalism of ALPACA, whereas the Sobolev models are generated with the same clump parameters, but we assume that the clumps only produce resonant absorption at one particular velocity. We present the results in Figure 9. In the left panel, we set \(v_{\rm max}=1000\,{\rm km\,s^{-1}}\) and \(b_{\rm D}=1.3\,{\rm km\,s^{-1}}\), i.e. a large velocity gradient for the clumps. We show that the model absorption line profiles predicted by ALPACA with an increasing number of shells (hence decreasing \(d/R_{\rm d}\) values) converge to the Monte-Carlo simulation result from tlac. 
This is because as the number of shells used in the model increases, the decrease of \(C_{\rm f}(r)\) in each shell is compensated for by the increase of the total number of shells. However, the amount of absorption predicted by the Sobolev approximation (shown by thick curves) decreases as the number of shells increases, which is simply because the number of clumps that provide resonant absorption at each velocity has decreased. In other words, the model will not converge under the Sobolev approximation. Interestingly, the Sobolev model with \(d/R_{\rm d}=1\) is actually consistent with the non-Sobolev models. We find that in this case, the velocity difference between two adjacent shells \(\Delta v_{\rm d}=v_{\rm max}/N_{\rm shell}\) is about four times as large as the clump Doppler parameter \(b_{\rm D}\), which guarantees that at each observed velocity, only one shell can possibly contribute to the absorption and the other shells are all shifted far away from resonance. Therefore, in this case, whether or not using the Sobolev approximation gives the same absorption line profile. Such a coincidental consistency between the Sobolev and non-Sobolev models is not always achievable. In the middle panel of Figure 9, we consider another scenario where \(v_{\rm max}=100\,{\rm km\,s^{-1}}\) and \(b_{\rm D}=13\,{\rm km\,s^{-1}}\), i.e. a low velocity gradient scenario. We find that in this case, the Sobolev models always underestimate the amount of the absorption, regardless of the choice of the number of shells. This is because now \(\Delta v_{\rm d}\) cannot be much larger than \(b_{\rm D}\), so that at each observed velocity, there are always multiple shells that contribute to the absorption. Only accounting for resonant absorption and ignoring the other that happens near the line center will inevitably miss a significant portion of the absorption. Lastly, in the right panel of Figure 9, we set \(v_{\rm max}=1000\,{\rm km\,s^{-1}}\) and \(b_{\rm D}=1.3\,{\rm km\,s^{-1}}\) while adding a small random motion to the clumps, \(\sigma_{\rm d}=5\,{\rm km\,s^{-1}}\). This time we only compare two models with the tlac simulation: a non-Sobolev model with \(d/R_{\rm d}=0.1\) and a Sobolev model with \(d/R_{\rm d}=1\) - the one where the Sobolev approximation happens to predict the same line profile with the non-Sobolev models in the \(\sigma_{\rm d}=0\) case. It can be seen that the non-Sobolev model is still consistent with the tlac simulations, but the Sobolev model exhibits a significant deviation from the other two models, suggesting that the Sobolev approximation is breaking down. For the simple linear velocity profile given by Eq. (39), one can Figure 8: **Experiments designed to demonstrate the difference between the Sobolev and non-Sobolev modeling.** We consider a homogeneous medium v.s. an extremely clumpy medium where the covering fraction \(C(r)\simeq 1\) everywhere. _Left_: If the radial velocity gradient is large, the Sobolev models (thick curves) are consistent with the corresponding non-Sobolev models (thin curves), suggesting that the Sobolev approximation works in this regime. Two sets of models with two different number density profiles of the ions or clumps (\(n(r)\propto r^{-1}\) and \(r^{-2}\)) are shown in green and blue, respectively. _Middle_: If the radial velocity gradient is small, the Sobolev models underpredict the amount of absorption on both the red and blue sides of the line center, suggesting that the Sobolev approximation starts to break down. 
_Right_: If the radial velocity gradient is large but there is a small random velocity (\(\sigma=20\,{\rm km\,s^{-1}}\)), the absorption line profiles predicted by the Sobolev approximation quickly become chaotic and noisy due to the stochasticity added to the velocity gradient, whereas the profiles predicted by non-Sobolev modeling remain basically unperturbed and stable. estimate the condition under which the Sobolev approximation may happen to predict the correct line profile in a non-volume-filling clumpy medium. The condition that is required for the Sobolev approximation to work is: \[\Delta v_{\rm cl}=v_{\rm max}/N_{\rm shell}\gg b_{\rm D} \tag{41}\] where \(N_{\rm shell}\) is the number of shells used in the model. Note that the shell width cannot be chosen to be arbitrarily large; considering the clumps that can possibly contribute to the covering fraction of a shell at \(r\) should have their centers located within \(r\pm R_{\rm cl}\), \(d/R_{\rm cl}\lesssim 2\) should be satisfied. This condition translates to the following inequality: \[N_{\rm shell}\gtrsim\frac{r_{\rm h}-r_{\rm min}}{2R_{\rm cl}} \tag{42}\] Combining Eqs. (41) and (42), we have: \[v_{\rm max}\gg\frac{r_{\rm h}-r_{\rm min}}{2R_{\rm cl}}b_{\rm D} \tag{43}\] We stress that in a realistic galactic environment (e.g. CGM), this requirement is especially difficult to satisfy, considering the following two constraints: (1) the clump Doppler parameter \(b_{\rm D}\) can be \(10\sim 20\,{\rm km\,s^{-1}}\) due to moderate internal turbulence (e.g. Rudie et al., 2019; Qu et al., 2022), or even \(\sim 100\,{\rm km\,s^{-1}}\) if a "clump" is actually an ensemble of smaller droplets entrained by the hot medium (e.g. Gronke et al., 2022); (2) the clump size is often much smaller than the halo size (by two to three orders of magnitude, see e.g. McCourt et al., 2018; Zahely et al., 2019). Moreover, note that Eq. (43) is derived under the assumption that the clumps' outflow velocity increases linearly with \(r\). In reality, the clumps are usually accelerated to several hundred \(\,{\rm km\,s^{-1}}\) at first, but the deceleration forces (e.g. due to gravity) start to dominate at large radii so that the velocity gradient of the clumps starts to decrease significantly. In addition, the velocity dispersion \(\sigma_{\rm cl}\) of the clumps will effectively smooth out the clump velocity gradient. Therefore, the \(\Delta v_{\rm cl}\gg b\) condition is at best only satisfied within a narrow range of radii where the clumps are undergoing rapid acceleration and the Sobolev approximation may be applicable. In general, applying the Sobolev approximation to the entire halo will likely result in underestimating the amount of absorption significantly. When modeling the observed absorption line profiles, any attempt to only account for "local" absorption will generally overestimate the clump covering fraction, as the omission of non-resonant absorption needs to be compensated for by larger clump covering fractions at different radii. ## 8 Previous work modeling UV absorption lines In this section, we briefly summarize previous work modeling UV absorption lines and compare it with ALPACA. 
### The Picket-Fence Model One of the most widely used models for decoding UV absorption lines is the "picket-fence" model (e.g., Steidel et al., 2010; Heckman et al., 2011; Zackrisson et al., 2013; Jones et al., 2013; Borthakur et al., 2014; Alexandroff et al., 2015; Rivera-Thorsen et al., 2015; Erb, 2015; Vasei et al., 2016; Reddy et al., 2016; Rivera-Thorsen et al., 2017; Steidel et al., 2018; Gazagnes et al., 2018, 2020), which assumes that the emitting source is partially covered by optically thick, clumpy gas material. Specifically, depending on whether dust is assumed to be present in the uncovered region (i.e. the "holes"), the normalized line intensity can be expressed as: \[I_{\lambda}=10^{-0.4k_{\lambda}E(B-V)}(C_{f}e^{-\tau_{\lambda}}+1-C_{f}) \tag{44}\] with a uniform foreground dust screen, or: \[I_{\lambda}=10^{-0.4k_{\lambda}E(B-V)}C_{f}e^{-\tau_{\lambda}}+(1-C_{f}) \tag{45}\] if no dust is present in the uncovered region, where \(k_{\lambda}\) is the dust attenuation curve and \(E(B-V)\) is the color excess. In both cases, the observed photons escape either after being attenuated by the optically-thick absorbing gas (the \(C_{f}e^{-\tau_{\lambda}}\) term) or from the holes (the \(1-C_{f}\) term), and dust will provide additional attenuation to the spectrum. The picket-fence model adopts a rather phenomenological prescription for the absorbing gas, as it parameterizes the effective absorption at each wavelength or velocity empirically with the gas covering fraction \(C_{f}\) and the effective optical depth \(\tau_{\lambda}\), without considering the details of the interaction between the photons emitted at different frequencies and the atoms moving at different velocities. Figure 9: **Same as Figure 8, but for a non-volume-filling clumpy medium.** We consider a non-volume-filling clumpy medium that contains holes through which the photons can pass freely. _Left:_ If the radial velocity gradient is large, the non-Sobolev models tend to converge to the Monte-Carlo simulations by tlac as the number of shells increases (or equivalently, as \(d/R_{\rm cl}\) decreases). However, the amount of absorption predicted by the Sobolev models (shown by thick curves) decreases as the number of shells increases, because the number of clumps that produce resonant absorption at each velocity has decreased. Note that the \(d/R_{\rm cl}=1\) Sobolev model is coincidentally consistent with the non-Sobolev models (see discussion in Section 7.2). _Middle:_ If the radial velocity gradient is small, the Sobolev models always underestimate the amount of the absorption, regardless of the choice of the number of shells. _Right:_ If the radial velocity gradient is large but there is a small random velocity (\(\sigma_{\rm cl}=5\,{\rm km\,s^{-1}}\)), the Sobolev model exhibits a significant deviation from the non-Sobolev model and the tlac prediction, suggesting that the Sobolev approximation is breaking down. Depending on whether an individual line profile or a series of lines is fitted, the gas covering fraction \(C_{f}\) can be determined as either a function of velocity (e.g. Steidel et al., 2010; Jones et al., 2013; Rivera-Thorsen et al., 2015, 2017) or as a wavelength-independent constant (e.g. Reddy et al., 2016; Steidel et al., 2018; Gazagnes et al., 2018, 2020). In the former case, the Sobolev approximation (as explained in Section 8.2 below) is generally adopted, and some work has used empirical treatments to account for the internal differential velocity structure of the absorbing gas, e.g. Steidel et al. (2010) and Chisholm et al.
(2016) both considered an accelerated radial outflow with slightly different analytic forms. ### The Expanding Wind Model Another major way of modeling the UV absorption lines is to assume a uniform, expanding wind of cool gas with radially-varying densities and velocities (e.g. Prochaska et al., 2011; Scarlata and Panagia, 2015; Carr et al., 2018, 2021). This type of model accounts for the interaction between the emitted photons and moving atoms by using the Sobolev approximation (e.g. Sobolev, 1960; Lamers and Cassinelli, 1999), which assumes that the photons emitted at a given wavelength or velocity interact only with the outflowing gas at a single point of resonance in the reference frame of the gas. More specifically, the absorbing gas outflowing at velocity \(v\) will only interact with the photons emitted at a frequency away from the line center by \(\Delta\nu=-\frac{v}{c}\nu_{0}\). The outflowing gas will not have any effect on the photons that do not appear at resonance in the reference frame of the gas. Such an approximation holds when the radial velocity gradient of the gas is sufficiently large compared with its Doppler parameter, but may not necessarily be valid otherwise (Carr et al., 2022). The expanding wind model can be used to predict absorption model spectra via either Monte Carlo RT simulations (e.g. Prochaska et al., 2011) or semi-analytical calculations (e.g. Scarlata and Panagia, 2015). The model can also be upgraded to account for more complex gas geometries, e.g. hemispherical or bi-conical (Prochaska et al., 2011; Carr et al., 2018, 2021), as well as additional gas kinematics, e.g. inflows (Carr and Scarlata, 2022). However, in general, the model assumes a homogeneous absorbing medium and does not account for the existence of holes or clumps in the outflow (Carr et al., 2022). ### Comparison between ALPACA and Previous Models We have identified the following differences between the results derived from modeling metal absorption lines using ALPACA and previous models: * The gas covering fraction: in previous models, the gas covering fraction is either 1 (in a homogeneous medium, Scarlata and Panagia, 2015), or constant (in a bi-conical medium, Carr et al., 2021), or a decreasing function with respect to \(r\) (in a clumpy medium, Steidel et al., 2010; Chisholm et al., 2016). The maximum gas covering fraction derived from these models is usually of order unity. In non-Sobolev modeling, however, the derived clump covering fractions at different radii are generally much smaller than one. The halo is still "fully covered" by clumps to an external observer, in the sense that on average there are a few clumps along any sightline. However, the required number of clumps at a particular radius or moving at a particular velocity is much lower. * The gas volume filling factor: previous models that assume a homogeneous, expanding wind correspond to a volume-filling gaseous medium, whereas in ALPACA the inferred clump volume filling factor is much smaller than 1. In this sense, the results predicted by ALPACA are more consistent with the most current physical picture of the cool gas in a galactic environment (e.g. McCourt et al., 2018; Gronke and Oh, 2018; Nelson et al., 2020; Fielding and Bryan, 2022). * The radial velocity of the absorbing gas: in previous models, the range of gas velocities strictly corresponds to the range of velocities where absorption is seen. This means if the gas is purely outflowing, no redshifted absorption will be seen (Carr et al., 2022).
In ALPACA, however, we consider the superposition of clump outflow and clump velocity dispersion and account for all the non-resonant absorption, which makes it possible to have: (1) a maximum clump outflow velocity that is much smaller than the maximum absorption velocity (or the \(v_{\infty}\) parameter used in previous literature) by several hundred \(\,\mathrm{km\,s^{-1}}\); (2) redshifted absorption without assuming the presence of external inflows. Moreover, in ALPACA, the clumps that contribute to the absorption observed at a particular velocity are not necessarily located all at the same radii; instead, they can be distributed at many different radial locations in the halo. In this sense, the gas absorption in ALPACA is more "democratic" than that in previous models. ## 9 Caveats & Outlook In this section, we discuss several caveats that the readers should keep in mind when using ALPACA, as well as a number of possible applications of the model in the future. ### Caveats When we use ALPACA to predict DTB metal absorption line profiles, we only include the photons that travel radially and are not scattered by the clumps. Although along each sightline there are only a few clumps, the emergent absorption line profile represents the average frequency distribution of the photons that escape in all directions. In a real observation, the observed DTB absorption line profile represents the frequency distribution of the photons that emerge along the line of sight from a cylindrical region, whose radius is roughly the size of the ISM (i.e. a few kpc) and whose height is about the virial radius. One can estimate the number of clumps in such a cylindrical region: \[N_{\mathrm{cl,cyl}}\simeq\frac{F_{\mathrm{V}}V_{\mathrm{cyl}}}{V_{\mathrm{cl}}}=\frac{F_{\mathrm{V}}\pi r_{\mathrm{cyl}}^{2}r_{\mathrm{h}}}{\frac{4}{3}\pi R_{\mathrm{cl}}^{3}} \tag{46}\] Taking \(F_{\mathrm{V}}=10^{-3}\), \(r_{\mathrm{cyl}}=2-3\) kpc, \(r_{\mathrm{h}}=100\) kpc and \(R_{\mathrm{cl}}=100\) pc, we have \(N_{\mathrm{cl,cyl}}\simeq 300\) - \(675\). This is the estimated number of clumps that contribute to the absorption of the DTB spectrum of a galaxy. By comparing the model absorption line profiles predicted by ALPACA with the real DTB observations, we are essentially assuming the escape of photons is isotropic and the observed DTB profile is a representative sample of the frequency distribution of the escaped photons in all directions. Such an assumption may not be warranted if some of the galaxy's properties are known to be angularly asymmetric (e.g. if it exhibits a collimated outflow). We plan to account for such asymmetries in our future work. In addition, in this work, we mainly explore an outflow-dominated kinematic profile for the clumps in the CGM of high-\(z\) star-forming galaxies with a highly simplistic semi-analytic model. For other types of galaxies (e.g. quiescent early-type galaxies), the CGM gas may be inflow-dominated due to cosmological accretion (e.g. Afruni et al., 2019). We stress that ALPACA is adaptable to various radial velocity profiles for the CGM gas and we will explore other possibilities in future works. Lastly, our model does not account for any re-emission due to scattering via fluorescent channels. Our attempts to describe such fluorescent emission semi-analytically turn out to be unsuccessful, possibly because many photons are scattered multiple times and it is difficult to predict their behavior without running Monte-Carlo simulations.
The infilling of fluorescent emission was considered to have a non-negligible effect on the absorption line profile of the resonant transition by previous work (e.g. Prochaska et al., 2011; Scarlata and Panagia, 2015), yet more recent work finds that the effect of infilling is generally insignificant (Mauerhofer et al., 2021; see also Wang et al., 2020 and Hayes et al., 2023). More work needs to be done on both the theoretical and observational sides to better quantify the importance of fluorescent emission in the future. ### Outlook The semi-analytic model that we present in this work, ALPACA, serves as a complementary tool to our Ly\(\alpha\) radiative transfer model, tlac. As of now, these two models can be used to infer the properties of the ISM and CGM via modeling the following types of observational data, including but not limited to: 1. spatially-resolved Ly\(\alpha\) profiles (Erb et al., 2022); 2. Ly\(\alpha\) surface brightness vs. \(b\); 3. Ly\(\alpha\) EW vs. \(b\); 4. DTB metal absorption line profiles; 5. metal absorption EW vs. \(b\). As we have already demonstrated in this work, the joint modeling of different datasets using ALPACA has great potential for unveiling the intricate structure of galactic environments. When combined with Ly\(\alpha\) modeling, crucial properties of the cool gas in the CGM (such as its kinematics, see Section 5.5) can be determined reasonably well. Such a new methodology even has several far-reaching benefits for other fields in addition to galaxy evolution. For example, constraining the structure of the ISM and CGM of high-\(z\) LyC emitters will help us understand how the ionizing photons propagate outwards and eventually contribute to cosmic reionization. In our next paper, we plan to apply our models to more statistically significant samples (e.g. KBSS and KLCS), with the aim of establishing a standard picture for the galactic environments of high-\(z\) galaxies. We believe these efforts will eventually shed light on the nature of the galactic environments and the many important physical processes they participate in that are crucial to galaxy evolution. ## 10 Conclusions In this work, we present ALPACA, a new, fast semi-analytic model for UV absorption lines that emerge from a clumpy galactic outflow. The main conclusions of this work are: 1. We present a semi-analytic formalism for metal absorption lines, where the galactic halo is dissected into a series of concentric shells and the photons' escape probability is a function of the clump covering fraction and velocity in each shell. With ALPACA, we predict the DTB metal absorption line profiles and the EW of absorption at different impact parameters as a function of the properties of the clumps, including the clump kinematics, the clump volume filling factor, the clump number density profile and the clump ion column densities. 2. We compare the absorption line profiles predicted by ALPACA with the results obtained from a Ly\(\alpha\) radiative transfer code, tlac. Our tests show that the absorption line profiles predicted by ALPACA are consistent with the Monte-Carlo simulations performed by tlac over a wide range of parameters, suggesting the validity of the relatively simple formalism of ALPACA. We also present the effect of individual clump parameters on the emergent absorption line profiles by varying each parameter individually. 3. We use ALPACA to jointly model the stacked DTB C ii\(\lambda\)1334 spectrum of a sample of \(z\sim 3\) LBGs and the EW vs.
\(b\) profile of a sample of \(z\sim 2\) star-forming galaxy-galaxy pairs. The model successfully reproduced both datasets simultaneously, and the best-fit prefers a low volume filling factor (\(\sim 3\times 10^{-3}\)) for the clumps. Moreover, the clumps' radial velocities are preferred to be a superposition of an outflow and a velocity dispersion; the outflow is rapidly accelerated to \(v_{\rm cl,out}\sim 400\) km s\({}^{-1}\) and then gradually decelerated, whereas the velocity dispersion is \(\sigma_{\rm cl}\sim 120\) km s\({}^{-1}\). The best-fit clump number density decreases with radius as \(n_{\rm cl}(r)\propto r^{-1}\). As ALPACA accounts for clump random motion and non-resonant absorption, the best-fit model corresponds to a physical scenario where the absorption observed at a particular velocity is contributed by the clumps from a fairly broad range of radii, rather than from a single point of resonance. 4. We explore the usage of the commonly adopted Sobolev approximation in the context of a clumpy galactic environment. We find that in an extremely clumpy medium that resembles a homogeneous medium, the Sobolev approximation only works when the velocity gradient is sufficiently large and the random motion of the gas is negligible. In a realistic, non-volume-filling clumpy medium, by contrast, the Sobolev approximation is at best applicable only within a narrow range of radii where the clumps are undergoing rapid acceleration, and fails otherwise. Applying the Sobolev approximation to the entire halo of a galaxy risks overestimating the clump covering fraction significantly. 5. We find that the clump radial velocity profile may not be fully constrained by the joint modeling of the DTB spectrum and the EW vs. \(b\) profile. The analysis of additional observational data, such as spatially-resolved Ly\(\alpha\) emission modeling, may help break the degeneracy and distinguish different clump radial velocity profiles. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author. ## Acknowledgements We thank Phil Hopkins for providing computational resources. ZL acknowledges Cody Carr for the illuminating discussion. MG thanks the Max Planck Society for support through the Max Planck Research Group. CS and ZL have been supported in part by grant AST-2009278 from the U.S. National Science Foundation. Numerical calculations were run on the Caltech compute cluster "Wheeler," allocations from XSEDE TG-AST130039 and PRAC NSF.1713353 supported by the NSF, and NASA HEC SMD-16-7592. We also acknowledge the use of the following software packages: Astropy (Astropy Collaboration et al., 2018), the SciPy and NumPy system (Virtanen et al., 2020; Harris et al., 2020).
2308.01736
Sourceless Maxwell and Dirac equations via Clifford Analysis
The study of complex functions is based around the study of holomorphic functions, satisfying the Cauchy-Riemann equations. The complex numbers are a subset [the even subalgebra] of $Cl(2)$, and therefore we can ask whether there are analogues of the Cauchy-Riemann equations for other Clifford algebras. This has been extensively explored under the name of Clifford Analysis. Here I explicitly decompose the Cauchy-Riemann equations for a general Clifford algebra into grades using the Geometric Algebra formalism, and show that for the Spacetime Algebra $Cl(3,1)$ these equations are the equations for a self-dual, source-free Electromagnetic field, and for a massless uncharged Spinor.
Calum Robson
2023-08-03T12:54:01Z
http://arxiv.org/abs/2308.01736v1
# Sourceless Maxwell and Dirac equations via Clifford Analysis ###### Abstract The study of complex functions is based around the study of holomorphic functions, satisfying the Cauchy-Riemann equations. The complex numbers are a subset [the even subalgebra] of \(Cl(2)\), and therefore we can ask whether there are analogues of the Cauchy-Riemann equations for other Clifford algebras. This has been extensively explored under the name of Clifford Analysis. Here I explicitly decompose the Cauchy-Riemann equations for a general Clifford algebra into grades using the Geometric Algebra formalism, and show that for the Spacetime Algebra \(Cl(3,1)\) these equations are the equations for a self-dual, source-free Electromagnetic field, and for a massless uncharged Spinor. ## 1 Introduction and Background This paper is about the application of Clifford (or Geometric) algebras to physics. In particular, it is about Clifford Analysis, which is an attempt to extend the results of Complex Analysis first to the Quaternions, and then to a general Clifford algebra. The Clifford approach to geometry is based upon the ideas of W.K. Clifford and H.G. Grassmann [25], and involves giving a direct geometric representation of the \(n\)-dimensional subspaces of a Clifford algebra in terms of the \(n\)-dimensional linear subspaces of a geometric space, for example lines, planes, volumes, and so on. Dieudonne clarifies the technical distinction between Clifford geometry and the more standard approach as follows [9]. In standard approaches, we consider both a vector space \(\mathcal{V}\), and its dual \(\mathcal{V}^{*}\). We cannot identify these two spaces, but we can associate elements in \(\mathcal{V}\) to elements in \(\mathcal{V}^{*}\) if we define an inner product between these spaces. Then a \(p\)-dimensional subspace \(V_{p}\subset\mathcal{V}\) will be dual to an \((n-p)\)-dimensional subspace \(W_{n-p}\subset\mathcal{V}^{*}\) iff \((v,w)=0\) for all \(v\in V_{p}\) and \(w\in W_{n-p}\), where \(n\) is the total dimension of \(\mathcal{V}\) and \(\mathcal{V}^{*}\), and \((\cdot,\cdot)\) denotes the inner product. Conversely, in the Clifford approach to Geometry, we only have one space, \(\mathcal{V}\), which is equipped with a product to turn it into an algebra; in exact analogy to the transition between \(\mathbb{R}^{2}\) and \(\mathbb{C}\). More details are given below, but the essential point is that the dual of a \(p\)-dimensional subspace \(V_{p}\) is given by its \((n-p)\)-dimensional complement in \(\mathcal{V}\) itself. This has advantages for physics; first of all by cutting down on the mathematical entities required, and secondly because most objects in physics can be given a direct geometric interpretation in this construction (see [10] for examples). This fits well with approaches to theoretical physics which seek to describe fundamental processes as an algebra. As explicitly described by Bohm [6], since the elements of the algebra act on each other, they can model both the physical objects in the theory and the interactions between them which make up the process. Common examples of theories in this approach are Twistor theories [2], Octonion models [16][12] and related strategies, e.g. [5]. The paradigmatic example is the complex plane, where an element \(Re^{i\theta}\) both represents a point, and the action of rotating anticlockwise by \(\theta\), and dilating by scale factor \(R\). I begin the paper with some definitions and notation. First, I will review the Geometric Algebra formalism for Clifford algebras.
Then, I will discuss Clifford Analysis as a generalisation of Complex analysis. Finally, I will give some results from Hodge Theory which will be needed for this paper. ### Geometric Algebras First, we need to give an overview of the Geometric Algebra formalism for Clifford algebra calculations. This section is mainly based on [1] and [10]. Other excellent references are [7] and [11]. An (orthogonal) Clifford Algebra \(Cl(p,q)\) with a non-degenerate metric is described by \(d=p+q\) generators \(e_{i}\), satisfying \[e_{i}e_{j}+e_{j}e_{i}=2\eta_{ij} \tag{1}\] Where \(\eta_{ii}\) is 1 for \(i\) from 1 to \(p\), \(\eta_{ii}=-1\) for \(i\) from \(p+1\) to \(d\), and \(\eta_{ij}=0\) otherwise. A Clifford algebra is a graded algebra, with the grading given by the number of generators which are multiplied together. For example, \(e_{i}\) is grade 1, \(e_{i}e_{j}e_{k}\) is grade 3, and so on. A Clifford algebra element which contains only objects of the same grade is called a blade, and is denoted by \(\langle E\rangle_{i}\), where \(E\) is the label of the element, and \(i\) is the grade of the blade. We can also use this notation to describe taking the component of a general Clifford algebra element which has grade \(i\). A Clifford algebra can be split into even and odd parts, corresponding to collecting the blades with even and odd grade. The even part is called the Spin algebra, denoted \(Spin(p,q)\)[24]. The Clifford algebra of a space is also isomorphic as a vector space to the Differential Forms on the same space. This is what allows us to use techniques from the cohomology theory of forms in the Clifford Algebra context [29]. In the Geometric Algebra (GA) representation of Clifford Algebras, we take each grade to describe a \(k\)-dimensional subspace of \(\mathbb{R}^{d}\). The highest grade object is denoted \(I\), and describes a \(d\)-volume. So in 3 dimensions, the scalar describes a point, \(e_{i}\) describes a line, \(e_{i}e_{j}\) describes a plane, and the pseudoscalar \(I=e_{1}e_{2}e_{3}\) describes a 3-volume. When we multiply two blades \(\langle A\rangle_{i}\) and \(\langle B\rangle_{j}\), the lowest grade object we can form has grade \(|i-j|\), and is called the dot product, denoted by \(A\cdot B\) or \(\langle A\cdot B\rangle_{|i-j|}\). The highest grade element has grade \(i+j\), and is called the wedge product, denoted \(A\wedge B\) or \(\langle A\wedge B\rangle_{i+j}\). Note that these may be identically zero (for example if \(i+j>d\)), and that in general one or both will be. For vectors \(a,b\) these definitions become \[a\cdot b=\frac{1}{2}(ab+ba)\] \[a\wedge b=\frac{1}{2}(ab-ba) \tag{2}\] Here, \(a\cdot b\) is the usual dot product, and in 3d we have \[a\wedge b\ =\ (a\times b)I^{-1} \tag{3}\] Where \(a\times b\) is the cross product. This formula just means that \(a\wedge b\) is the area between \(a\) and \(b\), and \(a\times b\) is the length orthogonal to that area. Note, however, that we can define \(a\wedge b\) in any dimension, not just in 3d. We call an element made up of different grades a Multivector. Then the most general function we can write from \(Cl(p,q)\to Cl(p,q)\) is \[\mathbf{z}=f_{0}+f_{i}e_{i}+f_{ij}e_{i}e_{j}+...+f_{1...d}e_{1}...e_{d} \tag{4}\] Where the \(f\) are scalar functions. In this paper I shall only consider functions of \(\mathbf{x}\), where \(\mathbf{x}=x_{i}e_{i}\) is the usual position vector. Duality relations exist between blades of grades \(k\) and \(d-k\). These are defined in various ways in the literature, which differ up to a sign.
I shall use the convention \[A^{\star}=AI^{-1} \tag{5}\] Where \(A^{\star}\) is the dual of \(A\). If \(A\) has grade \(k\), then \(A^{\star}\) has grade \(d-k\). The following property of the dual will be important later \[A\cdot\big{(}BI^{-1}\big{)}=\big{(}A\wedge B\big{)}I^{-1}\] \[A\wedge\big{(}BI^{-1}\big{)}=\big{(}A\cdot B\big{)}I^{-1} \tag{6}\] Next, we can define a vector derivative \[\boldsymbol{\partial}=\frac{\partial}{\partial\mathbf{x}}\equiv\ e_{i}\frac{\partial}{\partial x_{i}} \tag{7}\] Once we have this, we can define the wave/laplacian operator as \(\boldsymbol{\partial}^{2}\). Applying the vector derivative to a blade will either raise or lower the grades of the components of the multivector by 1. In general it will do both. We therefore write \[\boldsymbol{\partial}\mathbf{z}=\boldsymbol{\partial}\cdot\mathbf{z}+\boldsymbol{\partial}\wedge\mathbf{z} \tag{8}\] where \(\boldsymbol{\partial}\cdot\mathbf{z}\) and \(\boldsymbol{\partial}\wedge\mathbf{z}\) are the grade lowering and grade raising parts respectively. ### Clifford Analysis The introduction of the vector derivative brings us back to Clifford Analysis. Clifford Analysis is focused on finding and understanding solutions of the Dirac equation \(\boldsymbol{\partial}\phi=0\). Indeed, a key motivator for extending Complex Analysis to Clifford algebras is the fact that this equation is a higher dimensional analogue of the Cauchy-Riemann equations. This means that Clifford Analysis intersects with other areas of mathematics, for example Spin Geometry, in particular index theory [24]. The motivation for a Clifford algebra version of complex analysis comes from the fact that we can use Clifford algebras to take the square root of the wave equation, as we saw below equation (7). Then we can call \(\boldsymbol{\partial}\) a Dirac operator, as it has exactly the same algebraic properties as the matrix operator \(\not{\partial}\) from the standard Dirac theory. Note, however, that we can define different Dirac operators as the square roots of different second order PDEs. Clifford Analysis has its beginnings in the work of Moisil, Theodoresco and Riesz [26][28] in the 1930s, which successfully extended the key results of Complex Analysis to the quaternionic case. R. Fueter then extended this to higher dimensions via Clifford Algebras [14][15]. In the late 20th century, this was developed by figures like Hestenes, Sobczyk and Delanghe [20][8], with results like the Cauchy Integral theorem and the Cauchy-Kowalewski extension being extended from the complex numbers to general Clifford Algebras. In two dimensions, we have the Geometric Algebra \(Cl(2)\). This consists of a scalar, two vectors \(e_{1}\) and \(e_{2}\), and a bivector pseudoscalar, \(I=e_{1}e_{2}\). The even subalgebra, \(Spin(2)\), is spanned by \(\{1,I\}\), with \(I^{2}=-1\). Therefore \(Spin(2)\cong\mathbb{C}\). A generic \(Spin(2)\)-valued function is given by \(u(z)+Iv(z)\), where \(z=x+Iy\). Equivalently, we can consider the variable \(\tilde{z}=e_{1}z=xe_{1}+ye_{2}\).
Then \[\boldsymbol{\partial}\big{(}u(\tilde{z})+Iv(\tilde{z})\big{)} =e_{1}\frac{\partial u}{\partial x}+e_{2}\frac{\partial u}{\partial y}+e_{1}I\frac{\partial v}{\partial x}+e_{2}I\frac{\partial v}{\partial y}\] \[=e_{1}\Big{(}\frac{\partial u}{\partial x}-\frac{\partial v}{\partial y}\Big{)}+e_{2}\Big{(}\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\Big{)} \tag{9}\] Therefore the condition \(\boldsymbol{\partial}(u+Iv)=0\) is equivalent to the pair of equations \[u_{x}=v_{y};\ \ u_{y}=-v_{x} \tag{10}\] We can recognise these as the Cauchy-Riemann equations. The natural question is whether there is an analogue to these for a general Clifford Algebra. It turns out that the answer is yes; see the lecture notes [22], the book [8] and the references cited within for more details. Solutions to the equation \(\boldsymbol{\partial}\mathbf{f}=0\) are usually called Monogenic functions, though they are occasionally known as Clifford Holomorphic functions, or Regular functions in older literature. I shall refer to them as Monogenic functions in this paper. ### Hodge Theory We will need to use some elements of Hodge Theory in this paper. Whilst there is not the space here for a full discussion of that important and complicated branch of mathematics, I will review the main results we will be using. It is important to bear in mind that Hodge Theory is true only for compact (i.e. closed and bounded) Riemannian Manifolds. There are strong links between Hodge theory and Clifford analysis which are only beginning to be explored. See [29] for an excellent and very recent treatment. As explained in [29] we can use the fact that Clifford Algebras are isomorphic as Vector Spaces to the space of differential forms to use Hodge decomposition on the Clifford algebra itself. The main result of Hodge Theory is the Hodge Decomposition Theorem, which states that the space of grade-\(k\) differential forms \(\Omega_{k}\) decomposes as \[\Omega_{k}=Har_{k}+d\Omega_{k-1}+\delta\Omega_{k+1} \tag{11}\] This then lifts to a decomposition of the whole space of differential forms. In this expression, \(Har_{k}\) is the space of harmonic forms, i.e. those satisfying \(\boldsymbol{\partial}^{2}\omega=0\). A consequence of this is that the condition for a form \(\phi\) to be harmonic is that \(\delta\phi=d\phi=0\). The expressions \(d\Omega_{k-1}+\delta\Omega_{k+1}\) refer to the spaces of exact and co-exact \(k\)-forms respectively. Here, \(d\) refers to the usual exterior derivative, and \(\delta\) refers to the coderivative, which is the adjoint of the exterior derivative under the \({\cal L}_{2}\) norm \[\langle\omega,\xi\rangle\ :=\ \int\omega\wedge\star\xi \tag{12}\] Here, \(\star\) refers to the Hodge dual. It is defined by the relation \[A\wedge\star A=\ |A|^{2}I \tag{13}\] This interchanges \(k\) and \(n-k\) forms, and in Geometric Algebra notation is given by \[\star\omega=\ (\omega I^{-1})^{\dagger}=(-1)^{\frac{k(k-1)}{2}}\,\omega I^{-1} \tag{14}\] where the factor \((-1)^{\frac{k(k-1)}{2}}\) comes from the fact that \[|A|^{2}=\ A^{\dagger}\cdot A=\ (-1)^{\frac{k(k-1)}{2}}A\cdot A \tag{15}\] In terms of the Hodge dual, we have \(\delta=\star d\star\); however, in geometric algebra notation, as discussed in [29], we have \[d\omega\equiv\ \boldsymbol{\partial}\wedge\omega \tag{16}\] \[\delta\omega\equiv-\boldsymbol{\partial}\cdot\omega\] Where \(k\) is the degree of \(\omega\).
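Before moving to the adjointness property of \(d\) and \(\delta\), here is a small numerical sanity check of the Geometric Algebra conventions above (this check is mine, not part of the paper): the Euclidean algebra \(Cl(3)\) can be realised concretely by the Pauli matrices, \(e_{i}\mapsto\sigma_{i}\), so that the geometric product becomes matrix multiplication, and the generator relations (1), the dot/wedge split (2) and the duality identity (6) can all be verified directly.

```python
import numpy as np

# Pauli-matrix model of the Euclidean Clifford algebra Cl(3): e_i -> sigma_i.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e = [s1, s2, s3]
Id = np.eye(2, dtype=complex)
I = s1 @ s2 @ s3               # pseudoscalar; equals 1j * Id, so I @ I = -Id
I_inv = np.linalg.inv(I)

def vec(a):
    """Embed a Euclidean 3-vector as a grade-1 element of Cl(3)."""
    return sum(ai * si for ai, si in zip(a, e))

a, b = vec([1.0, 2.0, 3.0]), vec([-1.0, 0.5, 2.0])

# Generator relations, eq. (1): e_i e_j + e_j e_i = 2 eta_ij (here eta = identity).
for i in range(3):
    for j in range(3):
        assert np.allclose(e[i] @ e[j] + e[j] @ e[i], 2.0 * (i == j) * Id)

# Dot/wedge split of the geometric product, eq. (2).
dot = 0.5 * (a @ b + b @ a)     # scalar part: proportional to the identity
wedge = 0.5 * (a @ b - b @ a)   # bivector part: traceless and anti-Hermitian
assert np.allclose(dot, dot[0, 0].real * Id)
assert np.isclose(np.trace(wedge), 0) and np.allclose(wedge, -wedge.conj().T)

# Duality identity, eq. (6), for two vectors: a . (b I^-1) = (a ^ b) I^-1,
# where the dot of a vector with a bivector is half their commutator.
bI = b @ I_inv
assert np.allclose(0.5 * (a @ bI - bI @ a), wedge @ I_inv)
print("Cl(3) checks passed")
```

The explicit matrix model is used here only to keep the example self-contained; the same checks could equally be run symbolically or with a dedicated Clifford-algebra package.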
The operators \(d\) and \(\delta\) have the important property that they are adjoint with respect to the inner product (12); that is, \[\langle d\omega,\xi\rangle=\langle\omega,\delta\xi\rangle \tag{17}\] Now, in Geometric Algebra language this translates into an adjoint property between \(\boldsymbol{\partial}\cdot\) and \(\boldsymbol{\partial}\wedge\) \[\langle\boldsymbol{\partial}\wedge\omega,\xi\rangle=\ -\langle\omega,\boldsymbol{\partial}\cdot\xi\rangle \tag{18}\] We will use this property frequently in this paper. ### Outline of the Paper After introducing the main concepts, I begin the paper by decomposing the equation \(\boldsymbol{\partial}\mathbf{z}=0\) grade by grade in order to give a closed system of equations defining a monogenic function. I call these the 'Clifford-Cauchy-Riemann' equations, or CCR equations for short. I then discuss some properties of these equations, showing how they require each grade of \(\mathbf{z}(x)\) to be harmonic. I then show that the CCR equations applied to a general multivector in the 'Spacetime Algebra' \(Cl(3,1)\), the Clifford algebra representing Minkowski Space, give the Dirac and Maxwell equations. This is not totally surprising: a solution of the Dirac equation is a monogenic function by definition, and it has long been realised that Maxwell's equations can be written as Cauchy-Riemann equations in quaternionic analysis (called regularity conditions) [32] and also via complexified quaternions (or biquaternions) [21], including in the context of Clifford Analysis [30]. However this paper unites both equations into a single framework, and shows that the (source-free) Maxwell and Dirac equations are the CCR equations for a general multivector in \(Cl(3,1)\), implying that they are mathematically fundamental in a way linked to the geometry of Minkowski space. Having done this, I finish with a short discussion of future work which could be done in this area. ## 2 Clifford-Cauchy-Riemann equations I am not aware of an explicit grade-by-grade decomposition of the Cauchy-Riemann equations for a Clifford algebra in the literature, though they must have been derived before in the course of calculations. I present here such a decomposition, which I will use for the main results in the next section. We write a general multivector function of \(\mathbf{x}\) in \(Cl(p,q)\) as \[\mathbf{z}(\mathbf{x})=\sum_{i}\mathbf{f}_{i}=f_{0}(\mathbf{x})+f_{1i}(\mathbf{x})e_{i}+f_{2ij}(\mathbf{x})e_{i}e_{j}+f_{3ijk}(\mathbf{x})e_{i}e_{j}e_{k}+...+f_{d}e_{1}...e_{d} \tag{19}\] where \(\mathbf{f}_{i}\) is a blade of grade \(i\), \(f(\mathbf{x})\) is a scalar function of \(\mathbf{x}\) and \(i\) runs from \(1\) to \(d=p+q\). We are interested in Monogenic functions, for which \(\boldsymbol{\partial}\mathbf{z}=0\). This implies that \[\sum_{i}\boldsymbol{\partial}\mathbf{f}_{i}=\sum_{i}\left(\boldsymbol{\partial}\cdot\mathbf{f}_{i+1}+\boldsymbol{\partial}\wedge\mathbf{f}_{i-1}\right)=0 \tag{20}\] Equating all terms with the same grades, we find that \[\boldsymbol{\partial}\cdot\mathbf{f}_{i+1}=-\boldsymbol{\partial}\wedge\mathbf{f}_{i-1},\quad i\neq\{1,d-1\} \tag{21}\] \[\boldsymbol{\partial}\cdot\mathbf{f}_{1}=0 \tag{22}\] \[\boldsymbol{\partial}\wedge\mathbf{f}_{d-1}=0 \tag{23}\] with \(\boldsymbol{\partial}\cdot f_{0}\) and \(\boldsymbol{\partial}\wedge\mathbf{f}_{d}\equiv 0\) identically (you can't take the exterior derivative of a top form, or the divergence of a scalar function).
These are the generalisation of the Cauchy-Riemann equations to general Clifford Algebras for the Dirac operator \(\boldsymbol{\partial}\). I shall refer to them as the Clifford-Cauchy-Riemann, or CCR, equations. Note also that the condition \(\boldsymbol{\partial}\cdot\mathbf{f}_{i+1}=-\boldsymbol{\partial}\wedge\mathbf{f}_{i-1}\) becomes \(\delta f_{i+1}=df_{i-1}\) in the language of differential forms. ### Properties of the CCR Equations The first thing to note about these equations is that the equations for the odd and even parts separate. Equation (21) links \(\mathbf{f}_{i-1}\) and \(\mathbf{f}_{i+1}\), whose grades are always both odd, or both even. Then, equation (22) always involves only \(\mathbf{f}_{1}\), which is odd; and (23) only involves \(\mathbf{f}_{d-1}\), which is either odd or even depending upon \(d\). Therefore the CCR equations split into parts involving only the odd or even components. This simplifies the analysis of these equations, and we shall make use of this below in section 3 when analysing the spacetime algebra. We can rewrite the CCR equations using various dualities. If we define \(g_{i}=f_{i}I^{-1}\), then they become \[\boldsymbol{\partial}\wedge\mathbf{g}_{d-(i+1)}I^{-1}=-\boldsymbol{\partial}\cdot\mathbf{g}_{d-(i-1)}I^{-1},\quad i\neq\{1,d-1\} \tag{24}\] \[\boldsymbol{\partial}\wedge\mathbf{g}_{d-1}I^{-1}=0 \tag{25}\] \[\boldsymbol{\partial}\cdot\mathbf{g}_{1}I^{-1}=0 \tag{26}\] Alternatively, we can take the dual of only one side of equation (21) to get \[\boldsymbol{\partial}\wedge\mathbf{f}_{i-1}=\boldsymbol{\partial}\wedge\mathbf{g}_{d-(i+1)}I^{-1} \tag{27}\] Note that whilst in general \(\mathbf{f}_{i-1}\) and \(\mathbf{g}_{d-(i+1)}\) have different grades, when \(d\) is even, \(\mathbf{f}_{d/2-1}\) and \(\mathbf{g}_{d/2-1}\) both have the same grade in the equation, giving \[\boldsymbol{\partial}\wedge\mathbf{f}_{d/2-1}=\boldsymbol{\partial}\wedge\mathbf{g}_{d/2-1}I^{-1} \tag{28}\] This suggests that mathematically it could be interesting to investigate functions satisfying \(\mathbf{z}=\pm\mathbf{z}I^{-1}\), which could be another avenue to explore to better understand the properties of solutions to these equations. An important property of the Cauchy-Riemann equations is that any solution to them is also a harmonic function, i.e. \(\boldsymbol{\partial}^{2}\mathbf{z}=0\). This also applies to Clifford analysis [8]. **Theorem 2.1**.: \(Mon_{p}(\mathcal{M})\), the vector space of monogenic functions of degree \(p\) over a space \(\mathcal{M}\), is identical to \(Harm_{p}(\mathcal{M})\), the vector space of harmonic forms of degree \(p\). Since the spaces of harmonic forms are isomorphic to the de Rham cohomology groups, this has as its corollary the well-known result [31] **Theorem 2.2**.: \(Mon_{p}(\mathcal{M})\) is isomorphic to \(\mathcal{H}^{p}_{dR}(\mathcal{M})\), the \(p\)th de Rham cohomology group of \(\mathcal{M}\). Although it is a long established result, I will give a proof of theorem 2.1 using the CCR equations, as it illuminates the structure of the equations, and shows how they are linked to Hodge Theory.
Proof.: We begin with a Hodge decomposition of the vector spaces of \(p\)-blades: \[\mathbf{z}(\mathbf{x})=\sum_{i}\mathbf{f}_{i}=\sum_{i}\left(\boldsymbol{\partial}\wedge\mathbf{X}_{i-1}+\boldsymbol{\partial}\cdot\mathbf{Y}_{i+1}+\mathbf{Z}_{i}\right) \tag{29}\] Now, for \(p\neq 1,d-1\) imposing the condition that \(\boldsymbol{\partial}\mathbf{z}=0\) implies that at degree \(p\), \[\boldsymbol{\partial}\wedge\boldsymbol{\partial}\cdot\mathbf{Y}_{p}=-\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge\mathbf{X}_{p}\equiv\mathbf{P}_{p} \tag{30}\] But then, using the \(L^{2}\)-norm (12), we have \[\left\langle\mathbf{P}_{p},\mathbf{P}_{p}\right\rangle=\left\langle\boldsymbol{\partial}\wedge\boldsymbol{\partial}\cdot\mathbf{Y}_{p},-\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge\mathbf{X}_{p}\right\rangle=\left\langle\boldsymbol{\partial}\cdot\mathbf{Y}_{p},\boldsymbol{\partial}\cdot\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge\mathbf{X}_{p}\right\rangle\equiv 0 \tag{31}\] where the second equality uses the skew-adjoint relation between \(\boldsymbol{\partial}\wedge\) and \(\boldsymbol{\partial}\cdot\), and the final expression vanishes because \(\boldsymbol{\partial}\cdot\boldsymbol{\partial}\cdot\equiv 0\). This implies that \(P_{p}=0\), and therefore separately that \(\boldsymbol{\partial}\wedge\boldsymbol{\partial}\cdot\mathbf{Y}_{p}=0\) and \(\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge\mathbf{X}_{p}=0\). This implies that \(\boldsymbol{\partial}\wedge X_{i-1}\) and \(\boldsymbol{\partial}\cdot Y_{i+1}\) are harmonic, since \(\boldsymbol{\partial}\wedge\boldsymbol{\partial}\wedge X_{i-1}\) and \(\boldsymbol{\partial}\cdot\boldsymbol{\partial}\cdot Y_{i+1}\) are zero by definition, and harmonic functions satisfy \(\boldsymbol{\partial}\wedge\phi=\boldsymbol{\partial}\cdot\phi=0\). However, we assumed that \(\boldsymbol{\partial}\cdot\mathbf{Y}_{p}\) and \(\boldsymbol{\partial}\wedge\mathbf{X}_{p}\) were not harmonic functions (and are therefore orthogonal to the harmonic forms under the \(L^{2}\)-norm), and therefore they must be the zero function. For \(p=1\), \(\boldsymbol{\partial}\wedge X_{1}\) is covered by the above argument, and \(\boldsymbol{\partial}\cdot Y_{1}\equiv 0\) by the CCR equations. Similarly for \(p=d-1\), \(\boldsymbol{\partial}\cdot\mathbf{Y}_{d-1}\) is covered by the above, whereas \(\boldsymbol{\partial}\wedge\mathbf{X}_{d-1}\equiv 0\) by the CCR equations. This shows that the only nonzero parts of \(\mathbf{z}\) are the harmonic blades \(\mathbf{Z}_{p}\), proving that \(\mathbf{z}\) monogenic implies that \(\mathbf{z}\) is harmonic. Since harmonic forms are trivially monogenic, this establishes that the set of harmonic forms on a space is identical to the set of monogenic functions on that space. This proof allows us to see how satisfying the CCR equations implies that a function is harmonic: the CCR equations imply that the exact and coexact parts at each grade of the function are equal to zero. This analysis also allows us to see what happens if the function fails to satisfy the CCR equations at grade \(p\). In this case \[\boldsymbol{\partial}\wedge f_{p-1}\neq-\boldsymbol{\partial}\cdot f_{p+1} \tag{32}\] which means that neither \(\boldsymbol{\partial}\cdot\mathbf{X}_{p}\in\Lambda_{p-1}\) nor \(\boldsymbol{\partial}\wedge\mathbf{Y}_{p}\in\Lambda_{p+1}\) is equal to zero. Therefore \[f_{p-1}=\mathbf{Z}_{p-1}+\boldsymbol{\partial}\cdot\mathbf{X}_{p}\quad\text{ and }\quad f_{p+1}=\mathbf{Z}_{p+1}+\boldsymbol{\partial}\wedge\mathbf{Y}_{p} \tag{33}\] This means that if a multivector function \(\mathbf{z}\) does not satisfy the CCR equations at grade \(p\), then neither the grade \(p-1\) nor the grade \(p+1\) component of \(\mathbf{z}\) is harmonic.
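As a small numerical illustration of this equivalence (my own check, not taken from the paper), consider the simplest case \(d=2\): for the \(Spin(2)\)-valued function built from the holomorphic map \(z\mapsto z^{2}\), i.e. \(u=x^{2}-y^{2}\) and \(v=2xy\), the CCR equations reduce to the Cauchy-Riemann equations (10) and, as Theorem 2.1 requires, each grade is harmonic. A finite-difference check:

```python
import numpy as np

# Grid on a square patch of R^2.
x, y = np.meshgrid(np.linspace(-1, 1, 401), np.linspace(-1, 1, 401), indexing="ij")
h = x[1, 0] - x[0, 0]

# u + I v, with u, v the real and imaginary parts of the holomorphic map z -> z^2.
u = x**2 - y**2
v = 2 * x * y

ux, uy = np.gradient(u, h, h)
vx, vy = np.gradient(v, h, h)

# CCR / Cauchy-Riemann equations in d = 2:  u_x = v_y  and  u_y = -v_x.
print(np.max(np.abs(ux - vy)), np.max(np.abs(uy + vx)))   # both ~ 0

def laplacian(f):
    """Second-order finite-difference Laplacian."""
    fx, fy = np.gradient(f, h, h)
    return np.gradient(fx, h, h)[0] + np.gradient(fy, h, h)[1]

# Theorem 2.1: each grade of a monogenic function is harmonic
# (interior points only; the one-sided boundary stencils are less accurate).
print(np.max(np.abs(laplacian(u)[2:-2, 2:-2])),
      np.max(np.abs(laplacian(v)[2:-2, 2:-2])))           # both ~ 0
```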
Finally, in complex analysis, we can show that every real harmonic function on \(\mathbb{R}^{n}\) is the real part of a holomorphic function. For Clifford analysis, we have the following theorem (which I believe is original). **Theorem 2.3**.: Every harmonic function \(f(x)\) on \(\mathbb{R}^{n}\) is the \(0\)-grade component of a monogenic function \(\mathbf{z}(x)\in Cl(n)\). Conversely, every monogenic function \(\mathbf{z}(x)\) defines a harmonic function \(f(x)=\langle\mathbf{z}(x)\rangle_{0}\). Proof.: For the first direction, we start with a harmonic function \(f(x)\). For a scalar function, we can write the condition that \(f(x)\) is harmonic as \(\boldsymbol{\partial}\cdot\boldsymbol{\partial}f=0\). Expanding out, this implies that \[\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge f(x)=0 \tag{34}\] Now we consider \[|\boldsymbol{\partial}\wedge f(x)|^{2}=\ \langle\boldsymbol{\partial}\wedge f(x),\boldsymbol{\partial}\wedge f(x)\rangle=\ -\langle f(x),\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge f(x)\rangle=0 \tag{35}\] where we have used the adjoint property in the second equality, and the harmonic property of \(f(x)\) in the final one. But now since we are in a Riemannian space, \(|\boldsymbol{\partial}\wedge f(x)|^{2}=0\) implies that \(\boldsymbol{\partial}\wedge f(x)\equiv 0\). From the above discussion, this is the condition for \(f(x)\) to be the grade \(0\) part of a monogenic function. Now consider the other direction. Suppose \(f_{0}(x)\) is the grade-\(0\) part of some monogenic function \(\mathbf{z}(x)\). Then, by the CCR equations, \(\boldsymbol{\partial}\wedge f_{0}=0\). Then \(\boldsymbol{\partial}\cdot\boldsymbol{\partial}f_{0}=\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge f_{0}=0\). This means that \(f_{0}\) is a harmonic function on \(\mathbb{R}^{n}\). ## 3 CCR and the Spacetime Algebra We now examine the CCR equations for the Spacetime Algebra. This is the Clifford Algebra \(Cl(3,1)\), whose Geometric Algebra representation is associated with Minkowski Spacetime. It consists of \(1\) scalar, \(4\) vectors \(e_{a}\), six bivectors \(e_{a}e_{b}\), \(4\) trivectors \(e_{a}e_{b}e_{c}\), and \(1\) Pseudoscalar \(I=e_{0}e_{1}e_{2}e_{3}\). Here we use the index \(0\) to refer to the timelike direction, for which \(e_{0}^{2}=-1\), and \(i,j,k=\{1,2,3\}\) for spacelike directions, for which \(e_{i}^{2}=1\). These correspond to the gamma matrices \(\gamma_{a}\) in the usual Dirac theory, and the pseudoscalar \(I\) corresponds to \(\gamma_{5}=\gamma_{0}\gamma_{1}\gamma_{2}\gamma_{3}\). It is also worth noting that the bivector part of the algebra corresponds to Lorentz rotations [10]. The timelike rotations correspond to terms of the form \(e_{0}e_{i}\), and spacelike rotations correspond to \(e_{i}e_{j}\).
Finally, the vector derivative \(\boldsymbol{\partial}=e_{0}\frac{\partial}{\partial t}+e_{i}\frac{\partial}{ \partial x_{i}}\) Now, a general multivector in the spacetime algebra can be written in the form \[f_{0}+\mathbf{f}_{1}+\mathbf{f}_{2}+\mathbf{f}_{3}+\mathbf{f}_{4}=f_{0}( \mathbf{x})+f_{1a}(\mathbf{x})e_{a}+f_{2ab}(\mathbf{x})e_{a}e_{b}+f_{3abc}( \mathbf{x})e_{i}e_{j}e_{k}+f_{4}e_{0}e_{1}e_{2}e_{3} \tag{36}\] This leads to the CCR equations \[\mathbf{\partial}\cdot\mathbf{f}_{1}=0\] \[\mathbf{\partial}f_{0}=-\mathbf{\partial}\cdot\mathbf{f}_{2}\] \[\mathbf{\partial}\wedge\mathbf{f}_{1}=-\mathbf{\partial}\cdot\mathbf{f}_{3}\] \[\mathbf{\partial}\wedge\mathbf{f}_{3}=0\] \[\mathbf{\partial}\cdot\mathbf{f}_{4}=-\mathbf{\partial}\wedge\mathbf{f}_{2} \tag{37}\] As discussed in section 2.1, these split into odd and even sectors. ### The Odd Sector We start with the odd sector \[\mathbf{\partial}\cdot\mathbf{f}_{1}=0 \tag{38}\] \[\mathbf{\partial}\wedge\mathbf{f}_{1}=-\mathbf{\partial}\cdot\mathbf{f}_ {3}\] (39) \[\mathbf{\partial}\wedge\mathbf{f}_{3}=0 \tag{40}\] Since 4 is an even number, we can use the discussion of duality in equation (28) to write the second of these equations as \[\mathbf{\partial}\wedge\mathbf{f}_{1}=-\mathbf{\partial}\wedge\mathbf{g}_{1}I^{-1} \tag{41}\] where \(g_{3}=f_{1}I^{-1}\). To solve this equation, we first of all note that the field strength for an electromagnetic potential is always anti-selfdual [13]. To see this, we can write \(\mathbf{\partial}\wedge\mathbf{A}=F_{ab}=E_{0i}+B_{jk}\), where \(E,B\) are the electric and magnetic fields respectively. The Electric field is a timelike bivector, and the magnetic field is a spacelike one. Then we can directly calculate \[-\Big{(}\partial_{0}A_{1}e_{0}e_{1}-\partial_{1}A_{0}e_{1}e_{0} \Big{)}I^{-1}=\partial_{2}A_{3}e_{2}e_{3}-\partial_{3}A_{2}e_{3}e_{2}\] \[-\Big{(}\partial_{0}A_{2}e_{0}e_{2}-\partial_{2}A_{0}e_{2}e_{0} \Big{)}I^{-1}=\partial_{3}A_{1}e_{3}e_{1}-\partial_{1}A_{3}e_{1}e_{3}\] \[-\Big{(}\partial_{0}A_{3}e_{0}e_{3}-\partial_{3}A_{0}e_{3}e_{0} \Big{)}I^{-1}=\partial_{1}A_{2}e_{1}e_{2}-\partial_{2}A_{1}e_{2}e_{1} \tag{42}\] This shows that \(-\mathbf{E}I^{-1}=\mathbf{B}\), and therefore that \(\mathbf{F}=-\mathbf{F}I^{-1}\). Hence \(f_{1}=g_{1}=\mathbf{A}\) is always a solution to equation (39) since \(\mathbf{\partial}\wedge\mathbf{A}=-\big{(}\mathbf{\partial}\wedge\mathbf{A}\big{)}I^ {-1}\). We now check for other solutions. Suppose that \(\mathbf{\partial}\wedge\mathbf{A}=-\big{(}\mathbf{\partial}\wedge\mathbf{C}\big{)}I^ {-1}\) for \(\mathbf{C}\neq\mathbf{A}\).Then \(-\big{(}\mathbf{\partial}\wedge\mathbf{C}\big{)}I^{-1}-\big{(}\mathbf{\partial}\wedge \mathbf{A}\big{)}I^{-1}=0\), which implies that \(\mathbf{\partial}\wedge(\mathbf{C}-\mathbf{A})=0\), and therefore \(\mathbf{C}\) and \(\mathbf{A}\) differ by the gradient of a scalar function, since \(\mathbf{\partial}\wedge\mathbf{\partial}\lambda=0\) automatically. Then the most general solution we can write is \[\mathbf{f}_{1}=\mathbf{A}+\mathbf{\partial}\lambda_{1},f_{3}=g_{1}I^{-1}=\big{(} \mathbf{A}+\mathbf{\partial}\lambda_{2}\big{)}I^{-1} \tag{43}\] for Scalar functions \(\lambda_{1}\) and \(\lambda_{2}\). If we set \(\lambda_{1}=\lambda_{2}\equiv\lambda\), and write \(\tilde{\mathbf{A}}\)=\(\mathbf{A}+\mathbf{\partial}\lambda\) then the CCR equations become the equation for a single anti-selfdual Field Strength. 
\[\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}=\big{(}\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}\big{)}I^{-1} \tag{44}\] I will discuss the situation where \(\lambda_{1}\neq\lambda_{2}\) at the end of this subsection. Finally, if we write \(\mathbf{F}=\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}\) then \[\boldsymbol{\partial}\mathbf{F} =\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}+\boldsymbol{\partial}\wedge\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}\] \[=\boldsymbol{\partial}\cdot\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}=-\boldsymbol{\partial}\cdot\big{(}\boldsymbol{\partial}\cdot(\mathbf{A}I^{-1})\big{)}=0\] which is simply the equation for a source-free Maxwell field written in Geometric Algebra notation [10]. I have used the fact that \(\boldsymbol{\partial}\cdot\boldsymbol{\partial}\cdot\mathbf{f}=\boldsymbol{\partial}\wedge\boldsymbol{\partial}\wedge\mathbf{f}=0\) for any \(\mathbf{f}\) in the second and fourth equalities, and equation (44) in the third. We can now look at the equation \(\boldsymbol{\partial}\cdot\mathbf{f}_{1}=0\). This is Gauss' law \[\boldsymbol{\partial}\cdot\tilde{\mathbf{A}}=0 \tag{45}\] Note that if we extract the gauge function \(\lambda\) we get \[\boldsymbol{\partial}\cdot\mathbf{A}+\boldsymbol{\partial}^{2}\lambda=0 \tag{46}\] which allows us to use \(\lambda\) to set a gauge exactly as in standard presentations of electromagnetism. What about the remaining equation \(\boldsymbol{\partial}\wedge\mathbf{f}_{3}=0\)? By the solution in equation (44) this is equivalent to \(\boldsymbol{\partial}\wedge(\tilde{A}I^{-1})=0\), which implies that \((\boldsymbol{\partial}\cdot\tilde{\mathbf{A}})I^{-1}=0\), which implies that \(\boldsymbol{\partial}\cdot\mathbf{A}+\boldsymbol{\partial}^{2}\lambda=0\), just the same as equation (38). Putting it all together, we see that the odd sector of the CCR equations is \[\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}=\big{(}\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}\big{)}I^{-1} \tag{47}\] \[\boldsymbol{\partial}\cdot\tilde{\mathbf{A}}=0 \tag{48}\] This describes an anti-selfdual field arising from a vector potential \(\tilde{A}\) (equation (47)), which obeys Gauss' Law (equation (48)). We also have a gauge freedom to rescale \(\tilde{A}\) by a scalar function \(\lambda\), via \(\tilde{A}\to\tilde{A}+\boldsymbol{\partial}\lambda\). What about when \(\lambda_{1}\neq\lambda_{2}\)? In this case we have \[f_{1}=\mathbf{A}+\boldsymbol{\partial}\lambda_{1},\ \ \ \ \ f_{3}=\big{(}\mathbf{A}+\boldsymbol{\partial}\lambda_{2}\big{)}I^{-1} \tag{49}\] Putting this into equations (38) and (40) we get that \[\boldsymbol{\partial}\cdot\mathbf{A}=-\boldsymbol{\partial}^{2}\lambda_{1},\ \ \ \boldsymbol{\partial}\cdot\mathbf{A}=-\boldsymbol{\partial}^{2}\lambda_{2} \tag{50}\] This implies that \(\boldsymbol{\partial}^{2}\big{(}\lambda_{1}-\lambda_{2}\big{)}=0\), and therefore that \(\lambda_{1}-\lambda_{2}=a+b\mathbf{x}\), for \(a,b\in\mathbb{R}\). I am unsure of the physical significance of this, and so I have focussed on the solutions where \(\lambda_{1}=\lambda_{2}\); however this would be a good topic for future work. ### The Even Sector The CCR equations for the even subalgebra are given by \[\boldsymbol{\partial}f_{0}=-\boldsymbol{\partial}\cdot\mathbf{f}_{2} \tag{51}\] \[\boldsymbol{\partial}\wedge\mathbf{f}_{2}=-\boldsymbol{\partial}\cdot\mathbf{f}_{4} \tag{52}\] The physical meaning of these equations can be seen by considering the Dirac equation for a massless, uncharged spinor.
Written in the Geometric Algebra representation, this is \[\boldsymbol{\partial}\phi=0 \tag{53}\] where \(\phi=\rho^{1/2}e^{IB}e^{\boldsymbol{\theta}/2}\). Here, \(\rho(\mathbf{x})\) and \(B(\mathbf{x})\) are scalar functions, and \(\boldsymbol{\theta}(\mathbf{x})\) is a bivector function. This form of the solution to Dirac's equation is due to David Hestenes [18]. Mathematically this corresponds to a polar decomposition of \(\phi\). There are eight degrees of freedom, just as we would expect: one each for \(\rho\) and \(B\), and six contained in \(\boldsymbol{\theta}\), which generates a Lorentz rotation. Physically, following Hestenes, we can interpret the multivector function \(\phi\) as a physical wave in Minkowski space, with \(\rho^{1/2}\) as the amplitude, and \(e^{\boldsymbol{\theta}/2}\) being the spinor generator of a rotation into the rest frame of the particle. The physical interpretation of \(B\) is more ambiguous, but Hestenes has suggested that for the full Dirac equation it corresponds to a hypothetical rapid oscillation of the electron called Zitterbewegung [19]. I will not address these interpretational issues here. We can evaluate \[\boldsymbol{\partial}\phi=\Big{(}\frac{\boldsymbol{\partial}\rho}{\rho}+\boldsymbol{\partial}(IB)+\boldsymbol{\partial}\boldsymbol{\theta}/2\Big{)}\rho^{1/2}e^{IB}e^{\boldsymbol{\theta}/2}=0 \tag{54}\] which implies \[\frac{\boldsymbol{\partial}\rho}{\rho}+\boldsymbol{\partial}(IB)+\boldsymbol{\partial}\boldsymbol{\theta}/2=0 \tag{55}\] Collecting terms of the same grade, we find \[\boldsymbol{\partial}{\rm ln}(\rho)=-\boldsymbol{\partial}\cdot\boldsymbol{\theta}/2 \tag{56}\] for grade 1, and \[\boldsymbol{\partial}\wedge\boldsymbol{\theta}/2=-\boldsymbol{\partial}\cdot(IB) \tag{57}\] for grade 3, with \(\boldsymbol{\partial}\wedge(IB)\equiv 0\) identically. But these are just the CCR equations (51) and (52), with \(f_{0}={\rm ln}\rho\), \(\mathbf{f}_{2}=\boldsymbol{\theta}/2\) and \(\mathbf{f}_{4}=IB\). Therefore, \(f_{0},\mathbf{f}_{2},\mathbf{f}_{4}\) satisfying the CCR equations automatically define a free Dirac field via (53). A final note: we could have written the scalar part of \(\phi\) as \(e^{a(\mathbf{x})/2}\) for some scalar function \(a(\mathbf{x})\), rather than using \(\rho^{1/2}\). This would have given us \(f_{0}=a/2\), rather than \({\rm ln}(\rho)\). I chose the notation \(\rho^{1/2}\) partly to fit with the notation of Hestenes [18], and partly because of the similarity of the term \(\frac{\boldsymbol{\partial}\rho}{\rho}\) to the quantum potential of Bohm [6], which is derived from a similar polar decomposition of the wavefunction. ## 4 Conclusion and Further Work Putting it all together, we find that a multivector \[\mathbf{z}={\rm ln}(\rho)+\tilde{\mathbf{A}}+\boldsymbol{\theta}/2+\tilde{\mathbf{A}}I^{-1}+IB \tag{58}\] satisfying the CCR equations in the Spacetime Algebra \(Cl(3,1)\) defines both a free Dirac field \(\phi=\rho^{1/2}e^{IB}e^{\boldsymbol{\theta}/2}\), and an electromagnetic field strength \(\mathbf{F}=\boldsymbol{\partial}\wedge\tilde{\mathbf{A}}\), where \(\tilde{\mathbf{A}}\) satisfies Gauss' law, and is defined up to the addition of a gradient \(\boldsymbol{\partial}\lambda\). Aside from the points mentioned earlier in the paper (for example, below equation (50)), there are many things to explore. Chief among these for me is the question of whether we can obtain the equations for a massive or charged Dirac field, or for an electromagnetic field with a source \(\mathbf{J}=\boldsymbol{\partial}\mathbf{F}\).
I suspect that this would involve considering Dirac operators of the form \(\boldsymbol{\partial}+\mathbf{A}\), where \(\mathbf{A}\) is a gauge potential. Once this had been done, it would be interesting to look at the geometric meaning of the cohomology groups of this operator. We could perform a Hodge decomposition of the space of differential forms by looking for forms which are harmonic with respect to this gauged Dirac operator. We could then compare the resulting gauge theory cohomologies with more standard methods such as the BRST or BV cohomologies [17][27], making use of the fact that Geometric Algebra allows us to give a direct geometric interpretation of all our expressions to determine the physical meaning of the resulting cohomology groups. Another avenue would be to investigate Hodge theory on Lorentzian manifolds. The motivation for this comes from the fact that in the proof of theorem 2.1, the key argument was that the condition \(\boldsymbol{\partial}\wedge X_{i-1}=-\boldsymbol{\partial}\cdot Y_{i+1}\equiv P _{i}\) implied that \(\langle P_{i},P_{i}\rangle=0\), and hence that \(P_{i}=0\). In the Lorentzian case, the condition \(\langle P_{i},P_{i}\rangle=0\) no longer implies that \(P_{i}=0\), but that \(P_{i}\) is null. Therefore the condition for a multivector function \(\mathbf{z}(x)\) to be monogenic is no longer that it is made up of harmonic forms, but that it is made up of forms whose boundaries and coboundaries are null under the \(L_{2}\) norm, such that \(\boldsymbol{\partial}\wedge X_{i-1}=-\boldsymbol{\partial}\cdot Y_{i+1}\). Investigating these solutions could be a valuable tool for understanding cohomology on Lorentzian manifolds, which so far has focused on the timelike and spacelike parts of the spacetime [23][4][3]. I hope to explore this in future work.
2301.12527
Diverse, Difficult, and Odd Instances (D2O): A New Test Set for Object Classification
Test sets are an integral part of evaluating models and gauging progress in object recognition, and more broadly in computer vision and AI. Existing test sets for object recognition, however, suffer from shortcomings such as bias towards the ImageNet characteristics and idiosyncrasies (e.g., ImageNet-V2), being limited to certain types of stimuli (e.g., indoor scenes in ObjectNet), and underestimating the model performance (e.g., ImageNet-A). To mitigate these problems, we introduce a new test set, called D2O, which is sufficiently different from existing test sets. Images are a mix of generated images as well as images crawled from the web. They are diverse, unmodified, and representative of real-world scenarios and cause state-of-the-art models to misclassify them with high confidence. To emphasize generalization, our dataset by design does not come paired with a training set. It contains 8,060 images spread across 36 categories, out of which 29 appear in ImageNet. The best Top-1 accuracy on our dataset is around 60% which is much lower than 91% best Top-1 accuracy on ImageNet. We find that popular vision APIs perform very poorly in detecting objects over D2O categories such as ``faces'', ``cars'', and ``cats''. Our dataset also comes with a ``miscellaneous'' category, over which we test the image tagging models. Overall, our investigations demonstrate that the D2O test set contain a mix of images with varied levels of difficulty and is predictive of the average-case performance of models. It can challenge object recognition models for years to come and can spur more research in this fundamental area.
Ali Borji
2023-01-29T19:58:32Z
http://arxiv.org/abs/2301.12527v1
# Diverse, Difficult, and Odd Instances (D2O): ###### Abstract Test sets are an integral part of evaluating models and gauging progress in object recognition, and more broadly in computer vision and AI. Existing test sets for object recognition, however, suffer from shortcomings such as bias towards the ImageNet characteristics and idiosyncrasies (e.g. ImageNet-V2), being limited to certain types of stimuli (e.g. indoor scenes in ObjectNet), and underestimating the model performance (e.g. ImageNet-A). To mitigate these problems, we introduce a new test set, called D2O, which is sufficiently different from existing test sets. Images are a mix of generated images as well as images crawled from the web. They are diverse, unmodified, and representative of real-world scenarios and cause state-of-the-art models to misclassify them with high confidence. To emphasize generalization, our dataset by design does not come paired with a training set. It contains 8,060 images spread across 36 categories, out of which 29 appear in ImageNet. The best Top-1 accuracy on our dataset is around 60% which is much lower than 91% best Top-1 accuracy on ImageNet. We find that popular vision APIs perform very poorly in detecting objects over D2O categories such as "faces", "cars", and "cats". Our dataset also comes with a "miscellaneous" category, over which we test the image tagging models. Overall, our investigations demonstrate that the D2O test set contain a mix of images with varied levels of difficulty and is predictive of the average-case performance of models. It can challenge object recognition models for years to come and can spur more research in this fundamental area. Data and code are publicly available at: DATA and CODE. ## 1 Introduction The object recognition problem remains in an unclear state. Despite compelling performance of state-of-the-art object recognition methods, several questions such as out-of-distribution generalization [1, 17, 25, 28, 31], "superhuman performance" [8, 10], adversarial vulnerability [9], and invariance to image transformations and distortions [13] still persist. Raw performance on test sets has been the main indicator of the progress and the major feedback about the state of the field. Few test sets have been proposed for evaluating object recognition models. Some follow the footsteps of ImageNet [25]. Some filter images based on failures of models [14]. Researchers have also used controlled settings to collect data [1, 4]. While being invaluable, these datasets suffer from few shortcomings. For example, datasets that only include examples for which the best models fail give the worst case scenario accuracy. While being useful, they underestimate model performance. Datasets that are biased towards certain environments (_e.g._ indoor scenes in ObjectNet [1]), may not capture the full spectrum of visual stimuli. Most of the new datasets for object recognition have been centered on ImageNet (_e.g._ ImageNet-V2, ImageNet-A, ImageNet-O) and thus may have inherited its biases. This may in turn give us a biased assessment of visual recognition capability of models. We argue that a good test set should strike the right balance between sample difficulty and diversity and should reflect the average-case performance of models. We also believe that new test sets Figure 1: Sample images from D2O dataset. Images are a mix of generated images as well as images crawled from the web. Categories in order are: acorn, banana, basketball, car, cat, and clock. 
that are sufficiently different from existing ones can provide new insights into the object recognition problem. To this end, we include images that contain rich semantics and require cognitive processing (_e.g._ artistic scenes). Existing datasets lack enough samples of such cases. Here, we emphasize image diversity and difficulty over scale. While scaling up test sets has a clear advantage (_e.g._ covering rare cases), it comes with some shortcomings. It is hard to ensure privacy, security, quality, and spurious correlations1 in datasets containing millions of images. These problems are easier to tackle in small scale expert-made datasets. Nonetheless, both small and large datasets are needed and are complementary. Further, our dataset is one of the early efforts to use generative models for building image datasets. Footnote 1: Deep models take advantage of correlations between testing and training sets (_a.k.a_ “spurious cues” or “shortcuts”). These correlations are easily accessible to models but are not to humans [7]. Our dataset includes 8,060 images across 36 categories (Fig. 1). Images are carefully collected, verified, and labeled. We do not limit ourselves to object recognition models proposed in academia, and also consider prominent vision APIs in industry. This allows us to test models over a wider range of categories than those available in ImageNet and obtain a broader sense of image understanding by models. State-of-the-art models show a 30% absolute drop in Top-1 acc on D2O test set compared to the best ImageNet accuracy (around 20% drop using Top-5 acc). Further, over categories for which we know humans are very good at (_e.g._ faces, cars), current APIs fail drastically. D2O test set is intentionally not paired with a training set. It comes with a license that disallows researchers to update the parameters of any model on it. This helps avoid over-fitting on the dataset. Additionally, to mitigate the danger of leaking our data to other datasets, we mark every image by a one pixel green border which must be removed on the fly before being used. ## 2 Object Recognition Test Sets A plethora of datasets have been proposed for image classification2. Here, we are concerned with datasets that focus on core object recognition. ImageNet [5] is one of the most used datasets in computer vision and deep learning. It contains 1,000 classes of common objects, with more than a million training images. Its test set contains 50,000 images. ImageNet test examples tend to be simple (by today's standards), clear, and close-up images of objects. As such, they may not represent harder images encountered in the real world. Further, ImageNet annotations are limited to a single label per image. To remedy the problems with ImageNet, new test sets have been proposed, which have been instrumental in gauging performance of models and measuring the gap between models and humans. The major ones are reviewed below. Several other datasets such as CIFAR-100 [18], SUN [35], Places [37], ImageNet-Sketch [34], and iLab20M [4] have also been introduced. Footnote 2: Please see link. **ImageNet-V2.** Recht _et al._[25] built this test by closely following the ImageNet creation process. They reported a performance gap of about 11% (Top-1 acc.) between the performance of the best models on this dataset and their performance on the original test set. Engstrom _et al._[6] estimated that the accuracy drop from ImageNet to ImageNet-V2 is less than 3.6%. 
Some other works have also evaluated and analyzed models on this dataset [28, 31]. **ImageNet-A, ImageNet-O, ImageNet-C, and ImageNet-P.** These datasets are built to measure the robustness of image classifiers against out of distribution examples and image distortions [12, 13, 14]. Specifically, ImageNet-A dataset contains images for which a pre-trained ResNet-50 model fails to predict the correct label (Fig. 2). It has 7,500 images scrapped from iNaturalist, Flickr, and DuckDuckGo websites. Following the approach in [14], researchers have gathered "natural adversarial examples" for other problems such as object detection [20] Figure 2: Top: Distribution of number of objects per category over our D2O dataset as well as the 11 classes from ImageNet-A that are in common with D2O (inset panel). This subset of Imagenet-A has a total of 393 images. In total, our dataset contains 8060 images over 36 categories. Bottom: Sample images from acorn, banana, and basketball categories of Imagenet-A. The blue box demonstrates the cropped region used to build the Isolated ImageNet-A dataset. See also Appendix B. and microscopy analysis [23]. In contrast to these datasets which benchmark the worst case performance of models, here we are interested in the average case performance. To this end, instead of filtering images to fool a classifier, we include a mix of easy and hard examples to get a better sense of accuracy. Notice that a model that can only solve a very hard test set is not guaranteed to solve an easy one. **Reassessed Labels (ReaL).** Beyer _et al._[2] collected new human annotations over the ImageNet validation set and used them to reassess the accuracy of ImageNet classifiers. They showed that model gains are substantially smaller than those reported using the original ImageNet labels. Further, they found that ReaL labels eliminate more than half of the ImageNet labeling mistakes. This implies that they provide a superior estimate of the model accuracy. **ObjectNet.** To remove the biases of the ImageNet, Barbu _et al._[1] introduced the ObjectNet dataset. Images are pictured by Mechanical Turk workers using a mobile app in a variety of backgrounds, rotations, and imaging viewpoints. ObjectNet contains 50,000 images across 313 categories, out of which 113 are in common with ImageNet categories. Astonishingly, Barbu _et al._ found that the state-of-the-art deep models perform drastically lower on ObjectNet compared to their performance on ImageNet (about 40-45% drop). Later on, [3] revisited the Barbu _et al._'s results and found that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, leads to 20-30% performance improvement. ## 3 D2O Test Set We followed two approaches to collect the data. In the first one, we used publicly-available and free-to-distribute sources. We crawled images from the Flickr and Google image search engine using different search queries. The queries contained terms specifying countries, locations, materials, different times (_e.g._ 80s), odd appearances (_e.g._ odd helmet), etc. We also included images from various categorized panels in the search results (_e.g._ drawing, sketch, clip art, icon, neon, clay, etc.). In the second approach, we used image generation tools such as DALL-E 2 [24], Midjourney, and StableDiffusion [26] to generate some images, or searched the web for some images that are generated by these tools. We only selected the images that had good quality (_e.g._ no dog with three eyes!). 
Some sample generated images are shown in Appendix J. We did our best to ensure that no image contains sensitive material, has high resolution, or violates copyright law3. The gathered images encompass a wide variety of visual concepts over RGB images, paintings, drawings, cartoons, and clip art. To reduce ambiguity in annotation, most of the images contain one main object. Categories were selected based on the following two criteria: a) it should be possible to collect a variety of instances for them, with different levels of difficulty, and b) one would consider model errors on them egregious (_i.e._ confusing a cat with a dog is more troublesome than confusing a beaver with a marmot). During data collection, we emphasized choosing the odd items. Footnote 3: We chose images that were public domain, did not have copyright, or were released by the government. Three annotators were presented with an image as well as its corresponding label. They were tasked with verifying the label by checking the correct or the incorrect box. The three annotators agreed with the correct label over all images. We did not incorporate any bias towards gender, age, or race during data collection, and tried to be as inclusive as possible. Most of the categories are about objects. A few classes such as bicycle-built-for-two, face, helmet, person, sunglass, and umbrella contain humans and faces. We include and balance the number of images containing different ages and genders. The age groups are (child, 22), (teenager, 30), (adult, 51), and (elderly, 43). The gender groups include (woman, 66) and (man, 80). Notice that these issues are more important to address over large training sets. This is because models trained on such datasets are sometimes directly deployed in the real world.

\begin{table} \begin{tabular}{|l|l|} \hline **D2O class** & **ImageNet class** \\ \hline \hline clock & digital clock, wall clock \\ elephant & Indian elephant, African elephant \\ helmet & crash helmet, football helmet, gas mask, respirator, gas helmet \\ rabbit & wood rabbit, cottontail rabbit, Angora, Angora rabbit \\ squirrel & fox squirrel, eastern fox squirrel, Sciurus niger \\ sun glass & sun glass, sunglass, dark glasses, shades \\ turtle & loggerhead, loggerhead turtle, Caretta caretta, leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea, mud turtle, box turtle, box tortoise \\ \hline \end{tabular} \end{table} Table 1: Class mapping between the D2O dataset and ImageNet.

Figure 3: Sample images from the miscellaneous category of the D2O dataset along with their tags. The bar plot shows the frequencies of the top 70 most frequent tags. Please see also Appendices C and D for more stats.

Our dataset contains 8,060 images spread across 36 categories, out of which 29 are in common with ImageNet. Six categories, {car, cat, cow, face, giraffe, person}, do not appear in ImageNet, and are included mainly because they are very common and easily recognizable by humans. Seven of the categories correspond to multiple ImageNet categories, as shown in the mapping in Table 1. For a certain class, if a model predicted any of its corresponding ImageNet classes, we considered the prediction a hit. Sample images from our dataset are shown in Fig. 1. The distribution of object frequencies is shown in the top panel of Fig. 2. The most frequent class is person, followed by the car and cat classes.
Interestingly, there is large variation within the person images, as this topic has fascinated many artists over time (_e.g._ person made of wire, clay, matches, etc.). **The miscellaneous category.** This category includes images that do not simply fall under a specific category, and may cover multiple concepts (_e.g._ hybrid animals or strange objects). Thus, this category is suitable for testing image tagging algorithms. It has 576 images covering a wide variety of concepts and topics including hybrid animals, hybrid objects, art, illusions, camouflage objects, out-of-context objects, shadows, animals, fruits, drawings, paintings, objects made from different materials (_e.g._ glass, metal, clay, cloud, tattoos, or Lego), impersonating objects, and objects from odd viewpoints. Sample images alongside their tags are shown in Fig. 3. This figure also presents the tag frequencies. The most frequent tags are (camouflage, 60), (person, 57), (Lego, 41), (fish, 40), (dog, 33), (art, 32), (house, 20), (chair, 18), (bird, 16), (hand, 16), (duck, 14). To put our dataset in perspective with other datasets and for cross-dataset comparison, we also evaluate models over 11 classes of the ImageNet-A dataset that also exist in our dataset. Sample images from three of these categories are shown in the bottom panel of Fig. 2. The D2O dataset is substantially different from the ImageNet and ImageNet-A validation sets, measured in terms of the Frechet Inception Distance (FID) [15] (using 10K images). The FID values between D2O and these sets are, in order, 45.2 and 51.3, indicating a large distribution shift and thus high diversity. To put these numbers in perspective, the FID between ImageNet's validation and test sets is approximately 0.99. Notice that the lower the FID, the more similar the two distributions.

Figure 4: Per-category performance of the models on our dataset (left) and on the ImageNet-A dataset (right), averaged over 10 models. The dashed lines show the average performance over categories. See Appendix C for performance of individual models.

Figure 5: Per-category performance of the best model (resnext101_32x8d_ws).

## 4 Results and Analyses

### Generic Object Recognition

We tested 10 state-of-the-art object recognition models4, pre-trained on ImageNet, on our dataset. These models have been published over the past several years and have been immensely successful over the ImageNet benchmarks. They include AlexNet [19], MobileNetV2 [27], GoogleNet [29], DenseNet [16], ResNext [36], ResNet101 and ResNet152 [11], Inception_V3 [30], Deit [32], and ResNext_WSL [22]. Details on accuracy computation are given in Appendix A. Footnote 4: Models are available in PyTorch hub: [https://pytorch.org/hub/](https://pytorch.org/hub/). We used a 12 GB NVIDIA Tesla K80 GPU to do the experiments. Models are trained on ImageNet and tested only on the classes shared with ImageNet. Performance per D2O category, averaged over models, is shown in Fig. 4. The average performance, over all models and categories, is around 30% using Top-1 acc and around 50% using Top-5 acc. The corresponding numbers over the ImageNet-A dataset are about 5% and 15%, respectively. Therefore, ImageNet-A images are on average harder than D2O images for models, perhaps because they contain a lot of clutter. Prior research has shown that clutter and crowding severely hinder deep models (_e.g._[33]). It is not clear which object is the main one in most of the ImageNet-A images (see Fig. 2).
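The per-category accuracies reported here follow the mapping rule introduced with Table 1: a prediction counts as a hit if any ImageNet class mapped to the D2O class appears among the model's top-k outputs. The sketch below illustrates that evaluation step; the torchvision weights are standard, while `D2O_TO_IMAGENET` and its example indices are hypothetical stand-ins for the actual class mapping, not part of the released code.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Hypothetical mapping: D2O class name -> list of ImageNet-1k class indices (cf. Table 1).
D2O_TO_IMAGENET = {"clock": [530, 892], "elephant": [385, 386]}  # illustrative entries only

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
model = models.resnet152(weights="IMAGENET1K_V1").eval()

def mapped_topk_hit(image_path: str, d2o_class: str, k: int = 5) -> bool:
    """True if any ImageNet class mapped to `d2o_class` is among the top-k predictions."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        topk = model(x).topk(k).indices.squeeze(0).tolist()
    return any(idx in topk for idx in D2O_TO_IMAGENET[d2o_class])
```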
To pinpoint whether and how much clutter contributes to low performance on this dataset, we manually cropped the object of interest in images (the blue bounding boxes in Fig. 2). The cropped objects have low resolution, but they are still recognizable by humans. Results on this dataset, called ImageNet-A-Isolated, will be discussed below. Among the models, resnext101_32x8d_ws5 ranks the best over both datasets, as shown in Fig. 5. It achieves around 60% Top-1 accuracy, which is much higher than its Top-1 acc over the ImageNet-A dataset (\(\sim\)40%). The Top-5 acc of this model on our dataset is about 80%, compared to its 60% over ImageNet-A. The success of this model can be attributed to the fact that it is trained to predict hashtags on billions of social media images in a weakly supervised manner. The best performance on our dataset is much lower than the best performance on the ImageNet validation set. The best Top-1 and Top-5 performance over the latter are about 91% and 99%, respectively6. We find that better accuracy on ImageNet translates to better accuracy on D2O. Footnote 5: This model scores 85.4% (97.6% top-5) over the ImageNet-1k validation set (single-crop). Performance per model, averaged over D2O categories, is shown in the top panel of Fig. 6. We observe a big difference between the accuracy of the resnext101_32x8d_ws model and that of the other models. This model is \(\sim\)20% better than the second-best model, deit_base_patch16_224, using Top-1 accuracy. The former model is based on weakly supervised learning whereas the latter is a transformer-based model. See Appendix A for the accuracy of individual models. As shown in Fig. 6, models perform much better over the Isolated ImageNet-A dataset than the original ImageNet-A, even though the former has low-resolution images due to region cropping. This supports our argument that lower performance on the ImageNet-A dataset is partly due to its scenes being cluttered. All categories enjoy an improvement in accuracy (see Appendix C). We also test the SwinTransformer [21]7 on D2O. This model has improved the state-of-the-art over several computer vision problems including object detection, semantic segmentation, and action recognition. It scores 58.2% (Top-1) and 76.1% (Top-5). It performs much better than the ResNet, Inception and Deit models, but is slightly below the resnext101_32x8d_ws model.

Figure 6: Top: Performance of models averaged over categories (29 categories of D2O and 11 categories of ImageNet-A). Bottom: Fraction of images over which all models fail or they all succeed.

**Illustrative Failure Modes.** According to Fig. 4, the top five most difficult D2O categories for all models, in order, are kite, rabbit, squirrel, turtle, and mushroom, which happen to be the most difficult categories for the best model as well (Fig. 5). The kite class is often confused with the parachute, balloon, and umbrella classes. Sample failure cases from these categories, along with the predictions of the resnext101_32x8d_ws model, are shown in Fig. 7. Models often fail on drawings, unusual objects, or images where the object of interest is not unique. We also computed the fraction of images, per category, over which all models succeed, or they all fail, as shown in the bottom panel of Fig. 6. For some categories, models consistently fail (_e.g._ kite, rabbit, turtle, squirrel), while for some others they all do very well (_e.g._ toaster, tractor, pretzel). Even when all models succeed, they do so on at most about 30% of a category's images (_e.g._ toaster).
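These consensus fractions reduce to simple aggregates of a per-image hit matrix. A minimal sketch of that computation is given below; the `hits` and `labels` arrays are hypothetical stand-ins for the per-model prediction records, not part of the released code.

```python
import numpy as np

def consensus_fractions(hits: np.ndarray, labels: np.ndarray) -> dict:
    """hits[m, i] = True if model m classifies image i correctly (Top-1).
    Returns, per category, the fraction of images that all models get right
    and the fraction that all models get wrong."""
    out = {}
    for cat in np.unique(labels):
        h = hits[:, labels == cat]                     # models x images of this category
        out[cat] = {"all_succeed": float(h.all(axis=0).mean()),
                    "all_fail": float((~h).all(axis=0).mean())}
    return out
```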
This result indicates that models share similar weaknesses and strengths.

Figure 7: Sample failures of the resnext101_32x8d_ws model over the kite (top) and other categories (GT for bottom rows are acorn, banana, umbrella, clock, helmet, pizza, turtle, elephant, basketball, rabbit, mouse, pretzel, mushroom, and tractor). Model predictions are overlaid on images. The confidence for the top-1 prediction is also shown. Please see also Appendix F.

### Performance of Vision APIs

We tested several APIs from **Microsoft8**, **Google9**, and **MEGVII10** over the D2O categories that do not exist in ImageNet: {face, person, car, cat, cow, giraffe}. These APIs are popular and highly accurate. The goal is to see how models behave beyond ImageNet. Footnote 8: [https://azure.microsoft.com/en-us/services/cognitive-services/](https://azure.microsoft.com/en-us/services/cognitive-services/) Footnote 9: [https://google.github.io/mediapiipe/](https://google.github.io/mediapiipe/) Footnote 10: [https://www.faceplusplus.com/face-detection/](https://www.faceplusplus.com/face-detection/) **Face Detection.** The D2O face category has 289 images and includes a lot of odd and difficult faces. Some are shown in Fig. 8. We are mainly interested in whether a model comes close enough to detecting the faces. To this end, we refrain from using mAP to evaluate the APIs and use the accuracy score, which is easier to understand and interpret. An image is considered a hit if the API is able to generate a bounding box with IOU greater than or equal to 0.5 with a face in the image (ground-truth boxes are annotated). Otherwise, the image is considered a mistake. We also manually verified all the predicted boxes. Our evaluation is an overestimation of performance rather than a strict benchmark. Over the images for which the APIs failed, the predicted boxes most often did not overlap with any face in the image; in the majority of these mistakes, the face was missed altogether. Even with the above relaxed evaluation, the APIs did not do well. The Microsoft Azure face detection API achieves 45.3% accuracy in detecting D2O faces. The MEGVII Face++ API achieves 23.9% accuracy, slightly above the 23.2% of the OpenCV face detector. The Google MediaPipe face detector achieves 50.5% accuracy. Sample face images and predictions of the APIs are shown in Fig. 8. **Person Detection.** The MEGVII person detector obtains about 16% accuracy over the 1,217 images in the person category. The OpenCV person detector achieves 5% accuracy. The Microsoft object detection API achieves 27.4% accuracy. If this API predicted a correct bounding box with any of the following classes {person, snowman, bronze sculpture, sculpture, doll}, we counted it as a hit. Evaluation is done the same way as in face detection. **Cat Detection.** Over the cat category (407 images), the Microsoft object detection API predicted 95 images as cat (95/407 = 0.23), 9 images as Persian cat (9/407 = 0.022), 26 images as animal (26/407 = 0.064), and 95 images as mammal (95/407 = 0.23). Considering all of these images as hits, this API achieves 54.8% accuracy.
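For reference, each detection accuracy above boils down to a per-image hit test: an image counts as a hit if some predicted box overlaps a ground-truth box with IOU of at least 0.5 (and, for the object detection API, carries one of the accepted labels). A minimal sketch of that test is below, with boxes in (x1, y1, x2, y2) format; the box lists are assumed inputs rather than code tied to any particular API.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def image_is_hit(pred_boxes, gt_boxes, thr=0.5):
    """Hit if any predicted box matches any ground-truth box with IOU >= thr."""
    return any(iou(p, g) >= thr for p in pred_boxes for g in gt_boxes)

# Category accuracy is then the number of hit images divided by the number of images.
```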
**Giraffe Detection.** Over the giraffe category (138 images), the Microsoft object detection API predicted 30 images as giraffe (30/138 = 0.217), 15 images as animal (15/138 = 0.108), and 46 images as mammal (46/138 = 0.333). Considering all of these images as hits, this API achieves 65.9% accuracy. **Car Detection.** The Microsoft object detection API achieves 25.8% accuracy (139 out of 539) on this category. An image was considered a hit if the API predicted a correct bounding box with any of these labels {car, land vehicle, all terrain vehicle, taxi, vehicle, race car, limousine, Van, station wagon}. On the one hand, our investigation shows that the APIs perform very poorly, even with this overestimated accuracy, over categories that are very easy for humans. On the other hand, it reveals that our test set is challenging for a large array of models trained on a variety of datasets. Sample images from the above categories and predictions of the APIs on them are shown in Fig. 9. See also Appendix H.

Figure 8: Sample face images along with predictions of the OpenCV (red), Microsoft API (green), MEGVII Face++ API (blue), and Google MediaPipe (black) face detectors.

### Tagging results

We used the Microsoft tagging API to annotate the 576 images in the miscellaneous category. For 46.4% of the images, there is a common tag between the predicted tags and the ground-truth tags (calculated for each image and then averaged over images). For the remaining 53.6% of images, there is no overlap between the two sets. The fractional overlap between predicted and GT tags per image, computed as the number of common tags over the number of GT tags, is 21.8%. The Google Vision API11 performed slightly worse than the Microsoft API. Results are shown in Table 2 and Fig. 10. Footnote 11: [https://cloud.google.com/vision](https://cloud.google.com/vision) We also used the Microsoft detection, recognition, and captioning APIs as image taggers by considering their generated labels or words as tags. Using the detection API, for 8.5% of the images there was at least one tag in common between the detected label set and the ground-truth tags. For 36.3% of the images, there was no overlap at all. For the remaining 55.2% of images, the API did not detect anything. The fractional overlap between the predicted and ground-truth tags is 17.1%. Using the recognition API, 91.3% of images had no overlap and the remaining 8.7% had no prediction at all. Using the Microsoft captioning API, for 30.9% of the images there was an overlap between predicted and ground-truth tags (69.1% had no overlap). The fractional overlap between the two sets is 13.8%. Overall, the tagging APIs perform better in tagging images than the APIs that are not tailored for this task, but all of them still perform poorly in tagging images. Please see Table 2.

\begin{table} \begin{tabular}{c|c c c|c} **API** & **overlap (\%)** & **no overlap (\%)** & **no prediction (\%)** & **fractional overlap (\%)** \\ \hline \hline MSFT Tagger & 46.4 & 53.6 & 0 & 21.8 \\ MSFT Detector & 8.5 & 36.3 & 55.2 & 17.1 \\ MSFT Recognizer & 0 & 91.3 & 8.7 & 0 \\ MSFT Captioning & 30.9 & 69.1 & 0 & 13.8 \\ \hline Google Tagger & 39.9 & 60.1 & 0 & 17.1 \\ \end{tabular} \end{table} Table 2: Image tagging performance of the APIs over the miscellaneous category of our dataset. The "no prediction" column shows the percent of images for which there was no prediction.
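The per-image statistics behind Table 2 are two simple set measures: whether the predicted and ground-truth tag sets intersect at all, and the fractional overlap |predicted ∩ GT| / |GT|. A minimal sketch, with made-up tag lists (the dataset-level figures in Table 2 are averages of these per-image values):

```python
def tag_overlap_stats(pred_tags, gt_tags):
    """Per-image overlap indicator and fractional overlap |pred & gt| / |gt|."""
    pred, gt = set(pred_tags), set(gt_tags)
    common = pred & gt
    return {"has_overlap": bool(common),
            "fractional_overlap": len(common) / len(gt) if gt else 0.0}

# Example with hypothetical tags:
print(tag_overlap_stats(["dog", "grass", "Lego"], ["dog", "Lego", "toy", "art"]))
# {'has_overlap': True, 'fractional_overlap': 0.5}
```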
Figure 9: Sample images from the person, cat, cow, giraffe, and car categories (row-wise) along with predictions of the Microsoft object detector. For the person category, the blue and green boxes represent the Microsoft and MEGVII person detectors, respectively.

## 5 Discussion and Conclusion

We introduced a new test set for object recognition and evaluated several models and APIs on it. Despite years of research and significant progress, there is still a large gap between the accuracy of models on our dataset and their accuracy on ImageNet. Some datasets rooted in ImageNet are biased towards borrowing its images and classes, or its data collection strategy. Here, we intended to deviate from these biases. For example, unlike ImageNet-A, our dataset is model independent. ImageNet-A contains images from ImageNet for which models fail. Our dataset also includes categories, such as cows or cats, that perhaps everyone can easily recognize. These categories are missing in the ImageNet-based datasets. Further, our work encourages researchers and small teams to build carefully-curated, small-scale and versatile test sets frugally. Presently, the mindset is that datasets can only be collected by large institutions since data collection and annotation are difficult and expensive. It is unlikely that a single test set will be enough to fully and comprehensively assess models. In practice, various test sets may be required. In conjunction with other test sets, D2O offers better insights into the strengths and weaknesses of deep models. We hope that our dataset will open up new avenues for research in generalizable, robust, and human-like computer vision. We also hope that it will inspire other researchers to curate test sets where results are predictive of real-world performance. Our dataset has its own biases. It includes many non-natural, abstract, and artistic visualizations of objects. For a model to be able to interpret the D2O images, it has to do so in a way similar to how humans perceive scenes and objects. Notice that our design decisions (_e.g._ including certain categories such as images with boxes around objects, or not fine-tuning the models on our data) are intentional. The main goal is to use this dataset to gauge the performance of deep models over time to see how general they become, rather than fitting them to solve a particular dataset. We share the dataset and code to facilitate future work, and will organize an annual challenge and associated workshop. We will share images, annotations, and metadata in a zip file. Our dataset is licensed under Creative Commons Attribution 4.0 (Appendix I).
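As noted in Section 1, every released image carries a one-pixel green border that marks it as D2O data, and the border must be removed on the fly before the image is passed to a model. A minimal sketch of that preprocessing step follows; the only assumption taken from the text is the one-pixel border width.

```python
from PIL import Image

def strip_d2o_border(path: str, border: int = 1) -> Image.Image:
    """Remove the one-pixel marker border before feeding the image to a model."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    return img.crop((border, border, w - border, h - border))
```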
2308.06059
On skyburst polynomials and their zeros
We consider polynomials orthogonal on the unit circle with respect to the complex-valued measure $z^{\omega-1}\mathrm{d} z$, where $\omega\in\mathbb{R}\setminus\{0\}$. We derive their explicit form, a generating function and several recurrence relations. These polynomials possess an intriguing pattern of zeros which, as $\omega$ varies, are reminiscent of a firework explosion. We prove this pattern in a rigorous manner.
María José Cantero, Arieh Iserles
2023-08-11T10:18:18Z
http://arxiv.org/abs/2308.06059v1
# On skyburst polynomials and their zeros ###### Abstract We consider polynomials orthogonal on the unit circle with respect to the complex-valued measure \(z^{\omega-1}\,\mathrm{d}z\), where \(\omega\in\mathbb{R}\setminus\{0\}\). We derive their explicit form, a generating function and several recurrence relations. These polynomials possess an intriguing pattern of zeros which, as \(\omega\) varies, are reminiscent of a firework explosion. We prove this pattern in a rigorous manner. ###### Contents * 1 Introduction * 2 Skyburst polynomials * 3 Recurrences and generating functions * 4 The pattern of the zeros ###### Contents * 1 Introduction * 2 Skyburst polynomials * 3 Recurrences and generating functions * 4 The pattern of the zeros ###### Contents * 2.1 Introduction * 2.2 The To motivate our interest (and the unusual name with which we have endowed them, _skyburst polynomials_), we commence in Fig. 1.1 with a plot of the zeros of \(\mathrm{S}_{9}^{\omega}\), the 9th-degree orthogonal polynomial with respect to the above measure, as \(\omega\geq 0\) increases. For \(\omega=0\) the polynomial in question is \(z^{n}\), with all its zeros at the origin. As \(\omega\) grows in \((0,1)\), these zeros emerge from the origin into the complex plane: one into the interval \((-1,0)\), the remaining eight along equiangular rays in \(\mathbb{C}\). Except for the zero in \((-1,0)\), as \(\omega\) increases these zeros form eight loops, returning to the origin at \(\omega=1\). Subsequently (in the second figure), the eight zeros emerge for \(\omega>1\) from the origin: one is 'trapped' in \((-1,0)\), the rest form seven loops in the complex plane, eventually returning to the origin at \(\omega=2\). Similar state of affairs unfolds in subsequent figures, each displaying zeros for \(\omega\in[m,m+1]\) for increasing integer \(m\): at each integer value of \(m\) one more zero travels into \((-1,0)\) and stays there, the rest loop a loop in the complex plane, returning to the origin at the next integer value - until \(\omega=8\), when all the zeros live in \((-1,0)\) and remain there as \(\omega>8\). Examining the trajectories of zeros as \(\omega\) increases, the picture resembles a firework explosion, followed by a sequence of increasingly smaller (in both magnitude and complexity) explosions and eventually followed by a 'fizzle'. This is the reason for the name "skyburst polynomials". A major feature of skyburst polynomials is that they possess a surprisingly simple explicit form: as proved in Section 2, the monic polynomials are \[\mathrm{S}_{n}^{\omega}(z)=z^{n}{}_{2}F_{1}\!\left[\begin{array}{c}-n,- \omega;\\ -n-\omega;\end{array}-\frac{1}{z}\right].\] This, in itself, is fairly remarkable, because so few orthogonal polynomial systems on the unit circle are known in their explicit form (Cantero & Iserles, 2016, Simon, 2005) and it acts as a gateway towards a surprisingly simple generating function and a wealth of recurrence relations: this is the theme of Section 3. More effort is required to examine in great detail the observations that we have just made in Fig. 1.1 and prove them rigorously for every \(n\in\mathbb{N}\). This is done in Section 4, building upon the material of Section 3. In general, exceedingly little is known on orthogonal polynomials (whether on the real line or the unit circle) with respect to complex-valued measures. 
The one substantive result, (Celsus, Deano, Huybrechs & Iserles, 2022), indicates that, for a specific, parameter-dependent family of polynomials orthogonal on the real line, the zeros behave in a highly complicated manner. That paper also emphasises the advantage of examining the zeros as a function of a parameter: for each individual value we have just a number of points in \(\mathbb{C}\) but, once we examine the evolution of these points as the parameter varies, the full (and intricate!) picture emerges. The current paper, the first to consider orthogonal polynomials on the unit circle in this setting, highlights a similar state of affairs. A snapshot of the zeros of \(\mathrm{S}_{9}^{n}\) (cf. Fig. 1.1) tells us very little but the complete'movie' as \(\omega\) evolves indicates the underlying structure which, as we do in this paper, needs be subjected to rigorous analysis. ## 2 Skyburst polynomials Let \(\omega\in\mathbb{R}\setminus\{0\}\) and \(\,\mathrm{d}\mu(z)=z^{\omega-1}\,\mathrm{d}z\). We consider the complex-valued bilinear form \[\langle f,g\rangle_{\omega}=\frac{1}{2\pi\mathrm{i}}\int_{\mathbb{T}}f(z) \overline{g(\bar{z})}z^{\omega-1}\,\mathrm{d}z=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(\mathrm{e}^{\mathrm{i}\theta})\overline{g(\mathrm{e}^{-\mathrm{i}\theta})} \mathrm{e}^{\mathrm{i}\omega\theta}\,\mathrm{d}\theta, \tag{2.1}\] where \(\mathbb{T}\) is the complex unit circle. Since \(\omega\neq 0\), (2.1) is not a genuine inner product, since \(\langle f,f\rangle_{\omega}=0\) does not imply \(f=0\), yet this need not preclude the existence of orthogonal polynomials. Like in (Celsus et al., 2022) in the case of orthogonal polynomials on the real line with complex-valued measures, while important elements of classical theory are lost, the construct itself is amenable to analysis. While we cannot be assured _a priori_ for all \(n\in\mathbb{Z}_{+}\) and \(\omega\) of the existence of a monic \(n\)th-degree polynomial \(\mathrm{S}_{n}^{\omega}\), orthogonal with respect to (2.1), it turns out that it always exists except for a finite number of values of \(\omega\) and, moreover, such polynomials can be described explicitly and their zeros display intriguing patterns as \(\omega\) varies. We assume for the time being that \(\omega>0\): this assumption, rendering our analysis considerably simpler, will be lifted toward the end of this section. For reasons already explained in Section 1 and examined rigorously in the sequel, we call the \(\mathrm{S}_{n}^{\omega}\)s _skyburst polynomials_. We recall from (Simon, 2005, p.65) that an \(n\)th degree monic polynomial \(p_{n}\), orthogonal on the unit circle with respect to the measure \(\,\mathrm{d}\mu(z)=w(z)\,\mathrm{d}z\), can be represented in determinantal form, \[p_{n}(z)=\frac{1}{\mathfrak{t}_{n}}\det\left[\begin{array}{ccccc}\mu_{0}& \mu_{1}&\cdots&\mu_{n-1}&1\\ \mu_{-1}&\mu_{0}&\cdots&\mu_{n}&z\\ \vdots&\vdots&&\vdots&\vdots\\ \mu_{-n}&\mu_{-n+1}&\cdots&\mu_{-1}&z^{n}\end{array}\right], \tag{2.2}\] using the moments of the underlying measure, \[\mu_{n}=\frac{1}{2\pi\mathrm{i}}\int_{\mathbb{T}}z^{n}w(z)\frac{\mathrm{d}z} {z}=\frac{1}{2\pi}\int_{-\pi}^{\pi}\mathrm{e}^{\mathrm{i}n\theta}w(\mathrm{e}^ {\mathrm{i}\theta})\,\mathrm{d}\theta,\qquad n\in\mathbb{Z}\] and \[\mathfrak{t}_{n}=\left[\begin{array}{ccccc}\mu_{0}&\mu_{1}&\cdots&\mu_{n-1} \\ \mu_{-1}&\mu_{0}&\cdots&\mu_{n-2}\\ \vdots&\vdots&&\vdots\\ \mu_{-n+1}&\mu_{-n+2}&\cdots&\mu_{0}\end{array}\right],\qquad n\in\mathbb{Z}_ {+}. 
\tag{2.3}\] Being purely algebraic constructs, (2.2) and (2.3) remain valid for complex-valued measures. In the case of skyburst polynomials the moments are \[\mu_{n}^{\omega}=\frac{(-1)^{n}}{\pi}\frac{\sin\pi\omega}{n+\omega},\qquad n \in\mathbb{Z},\quad\omega>0.\] Because of (2.3), \(\mathrm{S}_{n}^{\omega}\) exists if and only if \(\mathfrak{t}_{n}^{\omega}=\mathfrak{t}_{n}\neq 0\) and is bounded. **Lemma 1**: _Let \(\omega\in\mathbb{R}\setminus\mathbb{Z}\), Then it is true that_ \[\mathfrak{t}_{n}^{\omega}=\left(\frac{\sin\pi\omega}{\pi\omega}\right)^{n} \frac{\prod_{\ell=0}^{n-1}\ell!^{2}}{\prod_{k=1}^{n-1}(k^{2}-\omega^{2})^{n-k} },\qquad n\in\mathbb{Z}_{+},\quad\omega\in\mathbb{R}, \tag{2.4}\] _hence \(\mathrm{S}_{n}^{\omega}\) exists._ _Proof_ The moments of (2.1) are \[\mu_{n}^{\omega}=(-1)^{n}\frac{\sin\pi\omega}{\pi}\frac{1}{n+\omega},\qquad n\in \mathbb{Z},\] and substitution into (2.3) results in \[\mathfrak{t}_{n} =\left(\frac{\sin\pi\omega}{\pi}\right)^{n}\det\left[\begin{array} []{cccc}\frac{1}{\omega}&-\frac{1}{\omega+1}&\frac{1}{\omega+2}&\cdots\frac{(- 1)^{n-1}}{\omega+n-1}\\ -\frac{1}{\omega-1}&\frac{1}{\omega}&-\frac{1}{\omega+1}&\cdots\frac{(-1)^{n- 2}}{\omega+n-2}\\ \vdots&\vdots&\vdots&\vdots\\ \frac{(-1)^{n-1}}{\omega-n+1}&\frac{(-1)^{n-2}}{\omega-n+2}&\frac{(-1)^{n-3} }{\omega-n+3}&\cdots&\frac{1}{\omega}\end{array}\right]\] \[=\left(\frac{\sin\pi\omega}{\pi}\right)^{n}\det\left[\begin{array} []{cccc}\frac{1}{\omega}&\frac{1}{\omega+1}&\frac{1}{\omega+2}&\cdots\frac{1} {\omega+n-1}\\ \frac{1}{\omega-1}&\frac{1}{\omega}&\frac{1}{\omega+1}&\cdots\frac{1}{ \omega+n-2}\\ \vdots&\vdots&\vdots&\vdots\\ \frac{1}{\omega-n+1}&\frac{1}{\omega-n+2}&\frac{1}{\omega-n+3}&\cdots&\frac{ 1}{\omega}\end{array}\right].\] The way we have obtained the second determinant is by pulling a factor of \(-1\) from every odd (counting from zero) row and column of the first determinant. This gives a factor of \((-1)^{2n}=1\). We identify the last expression as a determinant of a _Cauchy matrix,_ where the \((k,\ell)\) element is \(1/(x_{k}+y_{\ell})\), \(k,\ell=0,\ldots,n-1\). The determinant of such a matrix is \[\frac{\prod_{k=1}^{n-1}\prod_{\ell=0}^{k-1}(x_{k}-x_{\ell})(y_{k}-y_{\ell})}{ \prod_{k=0}^{n-1}\prod_{\ell=0}^{n-1}(x_{k}+y_{\ell})}\] (Schechter 1959). The lemma follows by straightforward algebra once we let \(x_{k}=\omega+k\), \(y_{k}=-k\). \(\Box\) **Corollary 1**: _The polynomial \(\mathrm{S}_{n}^{\omega}\) exists and is of degree \(n\) for all \(\omega\not\in\mathbb{N}\)._ _Proof_ Follows from (2.4) because \[\mathfrak{t}_{n}^{m}=\frac{\prod_{\ell=0}^{n-1}\ell!^{2}}{(\pi m)^{n}}\prod_{ \begin{subarray}{c}k=1\\ k\neq m\end{subarray}}^{n-1}(k^{2}-m^{2})^{k-n}\lim_{\omega\to m}\frac{\sin^{ n}\pi\omega}{(m^{2}-\omega^{2})^{n-m}}=0,\qquad m\in\mathbb{N},\] otherwise \(\mathfrak{t}_{n}^{\omega}\neq 0\). Moreover, the coefficient of \(z^{n}\) in \(\mathrm{S}_{n}^{\omega}\) is \(\mathfrak{t}_{n-1}/\mathfrak{t}_{n}\), according to (2.2), and this is nonzero for \(\omega\not\in\mathbb{N}\) by a similar argument. \(\Box\) **Theorem 2**: _Let \(\omega>0\). It is true that_ \[\mathrm{S}_{n}^{\omega}(z)=z^{n}{}_{2}F_{1}\bigg{[}\begin{array}{cc}-n,- \omega;&-\frac{1}{z}\end{array}\bigg{]},\qquad n\in\mathbb{Z}_{+}. \tag{2.5}\] _Proof_ It is enough to prove that the monic polynomial given in (2.5) is orthogonal to \(z^{k}\), \(k=0,\ldots,n-1\), with respect to the underlying bilinear form \(\langle\,\cdot\,,\,\cdot\,\rangle_{\omega}\). 
It follows from (2.5) that \[\langle\mathrm{S}_{n}^{\omega},z^{k}\rangle_{\omega} = \frac{\mathrm{i}}{2\pi}\int_{-\pi}^{\pi}\sum_{\ell=0}^{n}(-1)^{ \ell}\frac{(-n)_{\ell}(-\omega)_{\ell}}{\ell!(-n-\omega)_{\ell}}\mathrm{e}^{ \mathrm{i}(n-\ell-k+\omega)\theta}\,\mathrm{d}\theta\] \[= \frac{1}{2\pi}\sum_{\ell=0}^{n}(-1)^{\ell}\frac{(-n)_{\ell}(- \omega)_{\ell}}{\ell!(-n-\omega)_{\ell}}\frac{\mathrm{e}^{\mathrm{i}(n-\ell-k +\omega)\pi}-\mathrm{e}^{-\mathrm{i}(n-\ell-k+\omega)\pi}}{n-\ell-k+\omega}\] \[= \frac{(-1)^{n-k}\mathrm{i}\sin\pi\omega}{\pi(k-n-\omega)}{}_{3}F _{2}\bigg{[}\genfrac{}{}{0.0pt}{}{-n,-\omega,k-n-\omega;}{-n-\omega,k-n- \omega+1;}1\bigg{]}=\frac{(-1)^{n-k}\mathrm{i}\sin\pi\omega}{\pi(k-n-\omega)}r _{n,k}^{\omega},\] where \[r_{n,k}^{\omega}={}_{3}F_{2}\bigg{[}\genfrac{}{}{0.0pt}{}{-n,-\omega,k-n- \omega;}{-n-\omega,k-n-\omega+1;}1\bigg{]}\,.\] It is thus sufficient to prove that \(r_{n,k}^{\omega}=0\) for \(k=0,\ldots,n-1\) and every \(n\in\mathbb{N}\). (We already know from Corollary 1 that \(r_{n,n}^{\omega}\neq 0\), because \(\mathrm{S}_{n}^{\omega}\) is of degree \(n\).) We do so by first proving a mixed recurrence which is of its own independent interest: the polynomials \(\mathrm{S}_{n}^{\omega}\), as defined by (2.5), satisfy \[\mathrm{S}_{n}^{\omega}(z)=z\mathrm{S}_{n-1}^{\omega}(z)+\frac{\omega^{2}}{( \omega+n-1)(\omega+n)}\mathrm{S}_{n-1}^{\omega-1}(z),\qquad n\in\mathbb{N}. \tag{2.6}\] At the first instance, it follows from (2.5) that \[\mathrm{S}_{n}^{\omega}(z)-z\mathrm{S}_{n-1}^{\omega}(z)=z^{n}\left[\sum_{ \ell=0}^{n}(-1)^{\ell}\frac{(-n)_{\ell}(-\omega)_{\ell}}{\ell!(-\omega-n)_{ \ell}}z^{-\ell}-\sum_{\ell=0}^{n-1}(-1)^{\ell}\frac{(-n+1)_{\ell}(-\omega)_{ \ell}}{\ell!(-\omega-n+1)_{\ell}}z^{-\ell}\right]\!.\] Since \[\frac{(-n)_{\ell}(-\omega)_{\ell}}{\ell!(-\omega-n)_{\ell}}-\frac{(-n+1)_{ \ell}(-\omega)_{\ell}}{\ell!(-\omega-n+1)_{\ell}}=-\frac{\omega^{2}}{(\omega+ n-1)(\omega+n)}\frac{(-n+1)_{\ell-1}(-\omega+1)_{\ell-1}}{(\ell-1)!(-\omega-n+1)_{ \ell-1}},\] we deduce that \[\mathrm{S}_{n}^{\omega}(z)-z\mathrm{S}_{n-1}^{\omega}(z) = -\frac{\omega^{2}}{(\omega+n-1)(\omega+n)}z^{n-1}\!\sum_{\ell=1} ^{n}(-1)^{\ell}\frac{(-n\!+\!1)_{\ell-1}(-\omega\!+\!1)_{\ell-1}}{(\ell\!-\!1)!(-\omega\!-\!n\!+\!2)_{\ell-1}}z^{-\ell+1}\] \[= \frac{\omega^{2}}{(\omega+n-1)(\omega+n)}z^{n-1}\sum_{\ell=0}^{n- 1}(-1)^{\ell}\frac{(-n+1)_{\ell}(-\omega+1)_{\ell}}{\ell!(-\omega-n+2)_{\ell} }z^{-\ell}\] \[= \frac{\omega^{2}}{(\omega+n-1)(\omega+n)}z^{n-1}{}_{2}F_{1} \bigg{[}\genfrac{}{}{0.0pt}{}{-n+1,-\omega+1;}{-\omega-n+1;}z^{-1}\bigg{]}\] \[= \frac{\omega^{2}}{(\omega+n-1)(\omega+n)}S_{n-1}^{\omega-1}(z)\] and (2.6) is true. Consequently, by induction, the formula being true for \(n=0\) and \(n=1\) by direct computation, \[\langle{\rm S}_{n}^{\omega},z^{k}\rangle_{\omega} = \frac{{\rm i}}{2\pi}\int_{-\pi}^{\pi}{\rm S}_{n}^{\omega}({\rm e}^ {{\rm i}\theta}){\rm e}^{{\rm i}(-k+\omega)\theta}\,{\rm d}\theta=\frac{{\rm i} }{2\pi}\int_{-\pi}^{\pi}{\rm S}_{n-1}^{\omega}({\rm e}^{{\rm i}\theta}){\rm e} ^{{\rm i}(-k+\omega)\theta}\,{\rm d}\theta\] \[\mbox{}+\frac{\omega^{2}}{(\omega+n-1)(\omega+n)}\frac{{\rm i}}{2 \pi}\int_{-\pi}^{\pi}{\rm S}_{n-1}^{\omega-1}({\rm e}^{{\rm i}\theta}){\rm e} ^{{\rm i}(-k+\omega-1)\theta}\,{\rm d}\theta\] \[= \langle{\rm S}_{n-1}^{\omega},z^{k}\rangle_{\omega}+\frac{\omega ^{2}}{(\omega+n-1)(\omega+n)}\langle{\rm S}_{n-1}^{\omega-1},z^{k}\rangle_{ \omega-1}=0\] for \(k=0,\ldots,n-2\). 
All we need to prove is that \(\langle{\rm S}_{n}^{\omega},z^{n-1}\rangle_{\omega}=0\), \(\langle{\rm S}_{n}^{\omega},z^{n}\rangle\neq 0\), and to this end it is sufficient to prove that \(r_{n,n-1}^{\omega}=0\) and \(r_{n,n}^{\omega}\neq 0\) respectively. But \[r_{n,n-1}^{\omega}={}_{3}F_{2}\!\left[\begin{array}{c}-n,-\omega,-\omega-1; \\ -n-\omega,-\omega;\end{array}1\right]={}_{2}F_{1}\!\left[\begin{array}{c}-n,- \omega-1;\\ -n-\omega;\end{array}1\right]=(-1)^{n}\frac{(-n+1)_{n}}{(-n-\omega)_{n}}=0,\] where we have used the standard Vandermonde formula to sum up \({}_{2}F_{1}\) series at \(z=1\). Since our stipulated form of \({\rm S}_{n}^{\omega}\) is monic, the expression (2.5) follows - as does (2.6), which might be of an independent interest. Finally, it follows that \(r_{n,n}^{\omega}\neq 0\) from (2.6) by easy induction and our proof is done. \(\Box\) **Corollary 2**: _For every \(m,n\in\mathbb{N}\)_ \[z^{-n}{\rm S}_{n}^{m}(z)=z^{-m}{\rm S}_{m}^{n}(z). \tag{2.7}\] The time has come to lift the assumption that \(\omega>0\): since \({\rm S}_{n}^{0}(z)=z^{n}\) is a simple and very well-known case, we need just to consider the case \(\omega<0\). **Lemma 3**: _Let \(\omega>0\), \(\omega\not\in\{1,2,\ldots,n\}\), then_ \[{\rm S}_{n}^{-\omega}(z)=(-1)^{n}\frac{(\omega)_{n}}{(1-\omega)_{n}}z^{n}{\rm S }_{n}^{\omega-1}(z^{-1}). \tag{2.8}\] _Proof_ Direct substitution in (2.5) (which has been obtained without assuming the sign of \(\omega\neq 0\)) yields \[z^{n}{\rm S}_{n}^{-\omega}(z^{-1})={}_{2}F_{1}\!\left[\begin{array}{c}-n, \omega;\\ -n+\omega;\end{array}-z\right]=\sum_{\ell=0}^{n}\binom{n}{\ell}\frac{(\omega )_{n-\ell}}{(-n+\omega)_{n-\ell}}z^{n-\ell}.\] But \[(\omega)_{n-\ell} = (-1)^{\ell}\frac{(\omega)_{n}}{(-n-\omega+1)_{\ell}},\] \[(-n+\omega)_{n-\ell} = (-1)^{n-\ell}\frac{(1-\omega)_{n}}{(1-\omega)_{\ell}},\] therefore, following simple algebra, \[z^{n}\mathrm{S}_{n}^{-\omega}(z^{-1})=(-1)^{n}\frac{(\omega)_{n}}{(1-\omega)_{n}} \mathrm{S}_{n}^{\omega-1}(z).\] The expression (2.8) follows by replacing \(z\) with \(z^{-1}\). \(\Box\) Note that \(\mathrm{S}_{n}^{m}\) blows up for \(m\in\{1,2,\ldots,n\}\), because the denominator of the hypergeometric function vanishes. This comes as a little surprise: the main message of (2.8) is that, flipping the sign of a non-integer \(\omega\) is equivalent to conformally reflecting a skyburst polynomial (with a unit shift in parameter) in respect to the unit circle. Such a reflection takes the origin to infinity and we have already seen, e.g. in (2.7), that \(\mathrm{S}_{n}^{\omega}\) vanishes at the origin for \(\omega\in\{0,1,\ldots,n-1\}\). ## 3 Recurrences and generating functions We have already obtained the mixed recurrence relation (2.6) in the course of proving Theorem 2. This formula is interesting in the following sense. Polynomials orthogonal on the unit circle with respect to a real-valued measure obey the _Szego recurrence_ \[p_{n+1}(z)=zp_{n}(z)-\bar{\alpha}_{n}p_{n}^{*}(z),\qquad n\in\mathbb{N}, \tag{3.1}\] for a sequence of _Verblunski coefficients_\(\alpha_{n}\) (Simon 2005, p. 2), where \(p_{n}^{*}(z)=z^{n}\overline{p_{n}(\bar{z}^{-1})}\). It is easy, though, to compute \[\mathrm{S}_{n}^{\omega\,*}(z)={}_{2}F_{1}\!\left[\begin{array}{c}-n,-\omega ;\\ -n-\omega;\end{array}-z\right]\] and verify that (3.1) does not hold for skyburst polynomials. This is not very surprising, since the underlying measure is complex valued. 
However, the surprising fact is that the above recurrence is replaced by (2.6): instead of a conjugate \(\mathrm{S}_{n}^{\omega\,*}\) we have \(\mathrm{S}_{n}^{\omega-1}\), with a shifted parameter. Moreover, in this section we prove several other recurrence relations. The following mixed recurrence can be proved directly. **Lemma 4**: _For every \(n\in\mathbb{N}\)_ \[\mathrm{S}_{n}^{\omega+1}(z)=\mathrm{S}_{n}^{\omega}(z)+\frac{nz^{2}}{(\omega +n)(\omega+n+1)}\mathrm{S}_{n-1}^{\omega}(z). \tag{3.2}\] _Proof_ We compute directly, using (2.5), \[\mathrm{S}_{n}^{\omega+1}(z)-\mathrm{S}_{n}^{\omega}(z) =\sum_{\ell=1}^{n}\binom{n}{\ell}\left[\frac{(-\omega-1)_{\ell}} {(-\omega-1-n)_{\ell}}-\frac{(-\omega)_{\ell}}{(-\omega-n)_{\ell}}\right]z^{n-\ell}\] \[=\sum_{\ell=1}^{n}\binom{n}{\ell}\frac{(-\omega)_{\ell-1}}{(- \omega-1-n)_{\ell+1}}\ell nz^{n-\ell}\] \[=n^{2}z\sum_{\ell=0}^{n-1}\binom{n-1}{\ell}\frac{(-\omega)_{\ell }}{(-\omega-n-1)_{\ell+1}}z^{n-1-\ell}\] \[=\frac{n^{2}z}{(\omega+n)(\omega+n+1)}\sum_{\ell=0}^{n-1}\binom{n-1}{\ell} \frac{(-\omega)_{\ell}}{(-\omega-n+1)_{\ell}}z^{n-1-\ell}\] \[=\frac{n^{2}z}{(\omega+n)(\omega+n+1)}{\rm S}_{n-1}^{\omega}(z).\] \(\Box\) As a gateway to further recurrence relations, we prove the existence of a surprisingly neat generating function. **Theorem 5**: _It is true that_ \[\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}{\rm S}_{n}^{\omega}(z)T^{n}=\frac {(1+T)^{\omega}}{(1-zT)^{\omega+1}},\qquad|zT|<1. \tag{3.3}\] _Proof_ We have \[G_{\omega}(z,T) =\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}{\rm S}_{n}^{\omega }(z)T^{n}=\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}T^{n}\sum_{\ell=0}^{n}( -1)^{\ell}\frac{n!(-\omega)_{\ell}z^{n-\ell}}{\ell!(n-\ell)!(-\omega-n)_{\ell}}\] \[=\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}}{(-\omega)_{\ell}}\ell! \sum_{n=\ell}^{\infty}\frac{(1+\omega)_{n}z^{n-\ell}T^{n}}{(n-\ell)!(-\omega- n)_{\ell}}\] \[=\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}(-\omega)_{\ell}T^{\ell} }{\ell!}\sum_{n=0}^{\infty}\frac{(1+\omega)_{n+\ell}(Tz)^{n}}{n!(-\omega-n- \ell)_{\ell}}.\] But \[\frac{(1+\omega)_{n+\ell}}{(-\omega-n-\ell)_{\ell}}=\frac{(1+\omega)_{n}(n+1+ \omega)_{\ell}}{(-1)^{\ell}(\omega+n+1)_{\ell}}=(-1)^{\ell}(\omega+1)_{n},\] therefore \[G(z,T) =\sum_{\ell=0}^{\infty}\frac{(-\omega)_{\ell}T^{\ell}}{\ell!} \sum_{n=0}^{\infty}\frac{(\omega+1)_{n}(Tz)^{n}}{n!}={}_{1}F_{0}\genfrac{[}{] }{0.0pt}{}{-\omega;}{-;T}_{1}F_{0}\genfrac{[}{]}{0.0pt}{}{\omega+1;}{-;Tz}\] \[=\frac{(1+T)^{\omega}}{(1-Tz)^{\omega+1}},\] summing up the binomial \({}_{1}F_{0}\) series explicitly (Rainville 1960, p. 74). \(\Box\) The generating function (3.3) is a pathway to a wide array of results. For example, expanding \(G_{\omega}(-1,T)\), we obtain at once \[{\rm S}_{n}^{\omega}(-1)=(-1)^{n}\frac{n!}{(1+\omega)_{n}}\neq 0,\qquad n\in \mathbb{Z}_{+},\quad\omega>0, \tag{3.4}\] an expression which will be useful in Section 4. With minor effort, (3.4) can be generalised. **Lemma 6**: _For every \(m\in\mathbb{Z}_{+}\) it is true that_ \[\frac{\mathrm{d}^{m}\mathrm{S}_{n}^{\omega}(-1)}{\mathrm{d}z^{m}}=(-1)^{n-m}n! \frac{(1+\omega)_{m}}{(1+\omega)_{n}}\binom{n}{m},\qquad n\geq m,\quad\omega>0. 
\tag{3.5}\] _Therefore_ \[\mathrm{S}_{n}^{\omega}(z)=\frac{(-1)^{n}}{(1+\omega)_{n}}{}_{2}F_{1}\bigg{[} \begin{array}{c}-n,1+\omega;\\ 1;\end{array}1+z\bigg{]}.\] _Proof_ Differentiating (3.3) and letting \(z=-1\), we have \[\sum_{n=m}^{\infty}\frac{(1+\omega)_{n}}{n!}\frac{\mathrm{d}^{m} \mathrm{S}_{n}^{\omega}(z)}{\mathrm{d}z^{m}}T^{n} =(1+T)^{\omega}\frac{\mathrm{d}^{m}}{\mathrm{d}z^{m}}\frac{1}{(1- Tz)^{\omega+1}}=\frac{(1+\omega)_{m}T^{m}}{(1+T)^{m+1}}\] \[=(1+\omega)_{m}T^{m}{}_{1}F_{0}\bigg{[}\begin{array}{c}m+1;\\ -;\end{array}-T\bigg{]}\] \[=(1+\omega)_{m}\sum_{n=m}^{\infty}\binom{n}{m}(-1)^{n-m}T^{n}\] and deduce (3.5). Therefore \[\mathrm{S}_{n}^{\omega}(z) =\sum_{m=0}^{n}\frac{1}{m!}\frac{\mathrm{d}^{m}p_{n}^{\omega}(-1) }{\mathrm{d}z^{m}}=\frac{(-1)^{n}n!}{(1+\omega)_{n}}\sum_{m=0}^{n}(-1)^{m} \binom{n}{m}\frac{(1+\omega)_{m}}{m!}(1+z)^{m}\] \[=\frac{(-1)^{n}n!}{(1+\omega)_{n}}{}_{2}F_{1}\bigg{[}\begin{array} []{c}-n,1+\omega;\\ 1;\end{array}1+z\bigg{]}\] and the proof is complete. \(\Box\) The generating function also lends itself toward the derivation of lifting and lowering recurrences for skyburst polynomials. **Theorem 7**: _The following_ lifting__ \[\frac{(2+\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega+1}(z)=(1+z)\sum_{\ell=0}^{n-1 }\frac{(1+\omega)_{\ell}}{\ell!}z^{n-\ell-1}\mathrm{S}_{\ell}^{\omega}(z)+ \frac{(1+\omega)_{n}}{n!}z\mathrm{S}_{n}^{\omega}(z) \tag{3.6}\] _and_ lowering__ \[\frac{(\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega-1}(z)=(1+z)\sum_{\ell=0}^{n-1} (-1)^{n-\ell}\frac{(1+\omega)_{\ell}}{\ell!}\mathrm{S}_{\ell}^{\omega}(z)+ \frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega}(z) \tag{3.7}\] _recurrences are valid for every \(n\in\mathbb{Z}_{+}\)._ _Proof_ We commence by proving (3.6). Since \[\frac{1+T}{1-Tz}=1+(1+z)\sum_{\ell=1}^{\infty}z^{\ell-1}T^{\ell},\] it follows from (3.3) that \[\sum_{n=0}^{\infty}\frac{(2+\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega +1}(z)T^{n}=\frac{(1+T)^{\omega+1}}{(1-Tz)^{\omega+2}}\] \[=\left[1+(1+z)\sum_{\ell=1}^{\infty}z^{\ell-1}T^{\ell}\right]\sum_ {n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega}(z)T^{n}\] \[=\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{ \omega}(z)T^{n}+(1+z)\sum_{\ell=1}^{\infty}z^{\ell-1}\sum_{n=0}^{\infty}\frac{ (1+\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega}(z)T^{n+\ell}\] \[=\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{ \omega}(z)T^{n}+(1+z)\sum_{\ell=1}^{\infty}z^{\ell-1}\sum_{n=\ell}^{\infty} \frac{(1+\omega)_{n-\ell}}{(n-\ell)!}\mathrm{S}_{n-\ell}^{\omega}(z)T^{n}\] \[=\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{ \omega}(z)T^{n}+(1+z)\sum_{n=1}^{\infty}\sum_{\ell=1}^{n}\frac{(1+\omega)_{n- \ell}}{(n-\ell)!}z^{\ell-1}\mathrm{S}_{n-\ell}^{\omega}(z)T^{n}\] \[=\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{ \omega}(z)T^{n}+(1+z)\sum_{n=1}^{\infty}\sum_{\ell=0}^{n-1}\frac{(1+\omega)_{ \ell}}{\ell!}z^{n-\ell-1}\mathrm{S}_{\ell}^{\omega}(z)T^{n}\] and the assertion (3.6) follows by comparing powers of \(T\). 
Similarly, \[\frac{1-Tz}{1+T}=1+(1+z)\sum_{\ell=1}^{\infty}(-1)^{\ell}T^{\ell},\] therefore \[\sum_{n=0}^{\infty}\frac{(\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega -1}(z)T^{n}=\frac{(1+T)^{\omega-1}}{(1-Tz)^{\omega}}=\frac{1-Tz}{1+T}\times \frac{(1+T)^{\omega}}{(1-Tz)^{\omega+1}}\] \[=\left[1+(1+z)\sum_{\ell=1}^{\infty}(-1)^{\ell}T^{\ell}\right] \sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega}(z)T^{n}\] \[=\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{ \omega}(z)T^{n}+(1+z)\sum_{\ell=1}^{\infty}(-1)^{\ell}\sum_{n=0}^{\infty} \frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{\omega}(z)T^{n+\ell}\] \[=\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\mathrm{S}_{n}^{ \omega}(z)T^{n}+(1+z)\sum_{n=1}^{\infty}\sum_{\ell=0}^{n-1}(-1)^{n-\ell}\frac{ (1+\omega)_{\ell}}{\ell!}\mathrm{S}_{\ell}^{\omega}(z)T^{n}\] and, comparing powers of \(T\), the proof of (3.7) follows. \(\Box\) The generating function (3.3) is a convenient pathway towards a differential recurrence for skyburst polynomials. **Lemma 8**: _The recurrence_ \[(\omega+n)\frac{\mathrm{dS}_{n}^{\omega}(z)}{\mathrm{d}z}=nz\,\frac{\mathrm{dS }_{n-1}^{\omega}(z)}{\mathrm{d}z}+n(1+\omega)\mathrm{S}_{n-1}^{\omega}(z) \tag{3.8}\] _holds for all \(n\in\mathbb{Z}_{+}\)._ _Proof_ We again commence from the generating function (3.3). Since \[\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\,\frac{{\rm dS}_{n}^{\omega}(z)}{{ \rm d}z}T^{n}=(\omega+1)T\frac{(1+T)^{\omega}}{(1-Tz)^{\omega+2}},\] we have \[(1-Tz)\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}\,\frac{{\rm dS} _{n}^{\omega}(z)}{{\rm d}z}T^{n} =(1+\omega)T\frac{(1+T)^{\omega}}{(1-Tz)^{\omega+1}}\] \[=(1+\omega)T\sum_{n=0}^{\infty}\frac{(1+\omega)_{n}}{n!}{\rm S}_ {n}^{\omega}(z)T^{n+1}.\] Therefore, since \({\rm S}_{0}^{\omega}\equiv 1\), \[\sum_{n=1}^{\infty}\frac{(1+\omega)_{n}}{n!}\,\frac{{\rm dS}_{n}^ {\omega}(z)}{{\rm d}z}T^{n}-z\sum_{n=1}^{\infty}\frac{(1+\omega)_{n-1}}{(n-1)!}\,\frac{{\rm dS}_{n-1}^{\omega}(z)}{{\rm d}z}T^{n}\] \[=(1+\omega)\sum_{n=1}^{\infty}\frac{(1+\omega)_{n-1}}{(n-1)!}{ \rm S}_{n-1}^{\omega}(z)T^{n}\] and we deduce the differential recurrence (3.8) comparing powers of \(T\). \(\Box\) Finally in this section, we demonstrate that the skyburst polynomials obey a second-order linear differential equations - something that should come as little surprise, because of their relation to hypergeometric functions. **Theorem 9**: _The function \({\rm S}_{n}^{\omega}\) obeys the differential equation_ \[-z(1+z)\,\frac{{\rm d}^{2}{\rm S}_{n}^{\omega}(z)}{{\rm d}z^{2}}+[1-(2+\omega- n)(z+1)]\frac{{\rm dS}_{n}^{\omega}(z)}{{\rm d}z}+(1+\omega)n{\rm S}_{n}^{ \omega}(z)=0, \tag{3.9}\] _with regular-singular points at \(-1\) and 0._ _Proof_ Our starting point is Lemma 6. A hypergeometric function \[y(z)={}_{2}F_{1}\!\left[\begin{array}{cc}a,b;&z\\ c;&z\end{array}\right]\] obeys the differential equation \[x(1-x)y^{\prime\prime}(x)+[c-(a+b+1)x]y^{\prime}(x)-aby(x)=0\] (Rainville 1960, p. 54). The equation (3.9) follows by letting \(a=-n\), \(b=1+\omega\), \(c=1\) and \(y=1+z\). \(\Box\) **Corollary 3**: _All the zeros of \({\rm S}_{n}^{\omega}\), except possibly at the origin, are simple._ _Proof_ Suppose that \(\mathrm{S}_{n}^{\omega}(\tilde{z})=\,\mathrm{dS}_{n}^{\omega}(\tilde{z})/\, \mathrm{d}z=0\) for some \(\tilde{z}\in\mathbb{C}\setminus\{-1,0\}\). Solving (3.9) with these initial conditions we obtain \(\mathrm{S}_{n}^{\omega}\equiv 0\), a contradiction. Moreover, \(\mathrm{S}_{n}^{\omega}(-1)\neq 0\) according to (3.4). 
\(\Box\) Note that it follows at once from (2.5) that \[\mathrm{S}_{n}^{\omega}(0)=\frac{n!(-\omega)_{n}}{(-n-\omega)_{n}}\neq 0\] unless \(\omega=m\in\{0,1,\ldots,n-1\}\), when (2.7) implies that \(\mathrm{S}_{n}^{m}\) has a zero of multiplicity \(n-m\) at the origin.

## 4 The pattern of the zeros

Orthogonal polynomials on the real line with respect to complex-valued highly oscillatory measures have been investigated in (Celsus et al., 2022). Perhaps their most fascinating feature is the behaviour of their zeros. The zeros no longer reside in the support of the measure. As an example (as a matter of fact, the only example investigated at depth), consider the inner product \[\langle\!\langle f,g\rangle\!\rangle_{\omega}=\int_{-1}^{1}f(x)g(x)\mathrm{e}^{\mathrm{i}\omega x}\,\mathrm{d}x,\qquad\omega\geq 0.\] As the parameter \(\omega\) grows, the zeros of the orthogonal polynomial \(p_{n}^{\omega}\) trace \(n\) trajectories in the complex plane: each such trajectory commences at a zero of a Legendre polynomial and, as \(\omega\to\infty\), tends to either \(+1\) or \(-1\). However - and this is the reason for their name in (Celsus et al., 2022), _kissing polynomials_ - these trajectories do not stay distinct: at certain points \(\omega^{*}\) they 'kiss': the trajectory of \(p_{n}^{\omega^{*}}\) briefly touches the trajectory of the zeros of \(p_{n-1}^{\omega^{*}}\). These are precisely the points where the Hankel determinant \(\mathfrak{t}_{n}^{\omega^{*}}\) vanishes and, at that instant, \(p_{n-1}^{\omega^{*}}\) and \(p_{n}^{\omega^{*}}\) coincide. (In other words, \(p_{n}^{\omega^{*}}\) is of degree \(n-1\).) We might expect similar behaviour from the OPUC \(\mathrm{S}_{n}^{\omega}\) defined by the bilinear form (2.1), yet this is not the case! We focus in this section on \(\omega>0\) because, in light of Lemma 3, the pattern for \(\omega<0\) is the same, subject to the conformal map \(z\to z^{-1}\) and a unit shift of \(\omega\). Fig. 4.1 displays the zero trajectories of \(\mathrm{S}_{n}^{\omega}\), \(2\leq n\leq 10\), in the complex plane as \(\omega\) varies between \(0\) and \(+\infty\). It provides an explanation for the name we have endowed them with, _skyburst polynomials._ As \(\omega\) increases from the origin, the zeros 'burst' from the origin (not surprising, since \(\mathrm{S}_{n}^{0}(z)=z^{n}\)) into the complex plane; after a while the \(n\) trajectories all loop back to the origin, the zeros become real and negative, and they slowly move towards \(-1\), a point they collectively reach as \(\omega\to+\infty\). However, they do not kiss: the trajectories remain separate from each other. This is evident from Fig. 4.2, a closeup of the zeros of \(\mathrm{S}_{4}^{\omega}\) and \(\mathrm{S}_{5}^{\omega}\). The trajectories cross each other, but these encounters occur at distinct values of \(\omega\): it is possible for a zero of \(\mathrm{S}_{4}^{\omega_{1}}\) to coincide with a zero of \(\mathrm{S}_{5}^{\omega_{2}}\) for \(\omega_{1}\neq\omega_{2}\). The reason is that, once \(\mathfrak{t}_{n}^{\omega}\) vanishes, so do other minors in the determinantal representation of \(\mathrm{S}_{n}^{\omega}\), \[\mathrm{S}_{n}^{\omega}(z)=\frac{1}{\mathfrak{t}_{n}^{\omega}}\det\left[\begin{array}{ccccc}\mu_{0}&\mu_{1}&\cdots&\mu_{n-1}&1\\ \mu_{-1}&\mu_{0}&\cdots&\mu_{n}&z\\ \vdots&\vdots&&\vdots&\vdots\\ \mu_{-n}&\mu_{-n+1}&\cdots&\mu_{-1}&z^{n}\end{array}\right],\] where the \(\mu_{n}\)s are the moments.
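These qualitative statements are easy to probe numerically from the explicit formula (2.5): the terminating hypergeometric sum gives the coefficients of \(\mathrm{S}_{n}^{\omega}\) directly, and a standard companion-matrix root finder then locates its zeros for any \(\omega\). The short Python sketch below (not part of the paper; NumPy only) checks, for instance, that for \(\omega>n\) all zeros are real and lie in \((-1,0)\), in agreement with the 'fizzle' regime.

```python
import math
import numpy as np

def poch(a, k):
    """Rising factorial (a)_k = a (a + 1) ... (a + k - 1)."""
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def skyburst_coeffs(n, omega):
    """Coefficients of the monic S_n^omega in descending powers of z, read off (2.5):
    the coefficient of z^(n - l) is (-1)^l (-n)_l (-omega)_l / (l! (-n - omega)_l)."""
    return np.array([(-1) ** l * poch(-n, l) * poch(-omega, l)
                     / (math.factorial(l) * poch(-n - omega, l))
                     for l in range(n + 1)])

n, omega = 9, 12.5                       # omega > n: the 'fizzle' regime
zeros = np.roots(skyburst_coeffs(n, omega))
assert np.allclose(zeros.imag, 0.0, atol=1e-7)           # all zeros real ...
assert np.all((zeros.real > -1.0) & (zeros.real < 0.0))  # ... and trapped in (-1, 0)
```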
More detailed examination of the zero trajectories of \(\mathrm{S}_{n}^{\omega}\) for \(n\geq 2\) reveals an intriguing pattern. We claim - and this will be proved rigorously in the sequel - that, as \(\omega>0\) grows, there are two regimes: **Burst:**: For \(m=1,\ldots,n-1\) and \(m-1\leq\omega\leq m\) the pattern is as follows: at \(\omega=m-1\)\(n-m\) trajectories 'burst' into the complex plane at angles which are multiples of \(2\pi/(n-m)\): if \(n-m\) is odd, one of them does so along the positive ray. As \(\omega\) grows, the trajectories sketch a loop, ultimately returning to the origin when \(\omega=m\). The trajectory along the positive ray (if it exists) is 'flattened': at certain point in \((m-1,m)\) it loops back to the origin, all along positive values. The remaining \(m\) zeros of \(\mathrm{S}_{n}^{\omega}\) live in \((-1,0)\). **Fizzle:**: The fireworks are over once \(\omega>n\). All the zeros are then 'trapped' in \((-1,0)\), ultimately tending to \(-1\) as \(\omega\to\infty\). We now confirm the above claims. Since \(\mathrm{S}_{1}^{\omega}(z)=z+\frac{\omega}{1+\omega}\), for \(n=1\) we have a single zero in \((-1,0)\). Let us now progress by induction, assuming that all the zeros of \(\mathrm{S}_{m}^{n}\) are in \((-1,0)\) for \(m\geq n+1\). Because of (2.7), it thus follows that \(\mathrm{S}_{n}^{m}(z)=z^{n-m}\mathrm{S}_{m}^{n}(z)\), \(m\in\{0,1,\ldots,n-1\}\), has zeros of multiplicity \(m\) zeros - actually, because of Corollary 3, \(m\) simple zeros - in \((-1,0)\), as well as a zero at the origin of multiplicity \(n-m\). Consider \(\omega=m+\varepsilon\) for \(0<\varepsilon\ll 1\). As \(\varepsilon\) grows away from zero, \(\mathrm{S}_{n}^{m+\varepsilon}(z)=\mathrm{S}_{n}^{m}(z)+\varepsilon\partial\mathrm{S }_{n}^{m}(z)/\partial\varepsilon+\mathcal{O}\big{(}\varepsilon^{2}\big{)}\). But it follows from (2.5) and (2.7) that \[\mathrm{S}_{n}^{m+\varepsilon}(z)=\frac{n!^{2}z^{n-m}}{(n-m)!(n+m)!}[1+ \mathcal{O}(z)]-\varepsilon(-1)^{n-m}\frac{m!^{2}(n-m-1)!}{(n+m)!}[1+ \mathcal{O}(z)]+\mathcal{O}\big{(}\varepsilon^{2}\big{)}\,.\] We are interested in the \(n-m\) zeros emanating from the origin for \(m+\varepsilon\): suppose that such a zero is \(r_{\varepsilon}\mathrm{e}^{\mathrm{i}\theta_{\varepsilon}}\), where \(0<r\ll 1\) - actually, \(r\approx\varepsilon^{1/(n-m)}\). Therefore \[\theta_{\varepsilon}\in\left\{\pi+\frac{2\pi k}{n-m}\,:\,k=0,\ldots,n-m-1\right\}\] - the zeros leave the origin in \(n-m\) trajectories. One (for \(k=0\)) leads into \((-1,0)\), the other emerge into \(\mathbb{C}\setminus(-1,0)\) in \(n-m-1\) equiangular rays. (If \(n-m\) is even one of these rays proceeds along the positive ray.) Let us consider what happens as \(\varepsilon\) increases in \((0,1)\). The zero in \((-1,0)\) cannot return to the origin or cross \(-1\), because it follows at once from (2.5) that \(\mathrm{S}_{n}^{\omega}(0)\neq 0\) for non-integer \(\omega\), while \(\mathrm{S}_{n}^{\omega}(-1)\neq 0\) because of (3.4). There is another theoretical possibility for zeros to leave \((-1,0)\): the trajectories of two (or more) zeros coalesce at a point and thence emerge into \(\mathbb{C}\setminus(-1,0)\). This is rules out by Corollary 3 because a point of zero coalescence is a zero of nontrivial multiplicity. What about the \(n-m-1\) zero trajectories that have emerged into \(\mathbb{C}\setminus(-1,0)\)? 
They evolve there as \(\omega\) increases until \(\omega=m+1\), when they must return to the origin because \(\mathrm{S}_{n}^{m+1}(z)=z^{n-m-1}\mathrm{S}_{m+1}^{n}(z)\), and each forms a loop because of the continuity of (simple!) zeros. No zero trajectory may approach the negative ray (in particular the interval \((-1,0)\)) because the trajectories have \(z\leftrightarrow\bar{z}\) symmetry and such an encounter would bring two trajectories together: again, this is ruled out by Corollary 3. When \(n-m\) is even, one of the trajectories at \(\omega=m\) emerges into the positive ray and (again, by simplicity of zeros) it stays there until \(\omega=m+1\). In other words, in each interval \(\omega\in(m,m+1)\) for \(m=0,1,\ldots,n-1\) there are \(n-m-1\) zero trajectories looping in \(\mathbb{C}\setminus(-1,0)\), while the remaining \(m+1\) zeros live in \((-1,0)\). Once \(\omega\) exceeds \(n\), all the zeros are in \((-1,0)\) and it is trivial to note that, by (2.5), \[\lim_{\omega\to\infty}\mathrm{S}_{n}^{\omega}(z)=(1+z)^{n}\] - ultimately, all the zeros congregate at \(-1\). This completely explains the pattern visible in Fig. 4.1.

## Acknowledgements

The work of the first author is supported by the project PID2021-124472NB-I00, funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", and by the project E48_23R from Diputación General de Aragón (Spain) and ERDF "Construyendo Europa desde Aragón".
2310.13178
Exact Inference for Common Odds Ratio in Meta-Analysis with Zero-Total-Event Studies
Stemming from the high-profile publication of Nissen and Wolski (2007) and subsequent discussions with divergent views on how to handle observed zero-total-event studies, defined to be studies which observe zero events in both the treatment and control arms, the research topic concerning the common odds ratio model with zero-total-event studies remains an unresolved problem in meta-analysis. In this article, we address this problem by proposing a novel repro samples method to handle zero-total-event studies and make inference for the common odds ratio parameter. The development explicitly accounts for the sampling scheme and does not rely on large-sample approximations. It is theoretically justified with guaranteed finite-sample performance. The empirical performance of the proposed method is demonstrated through simulation studies. The results show that the proposed confidence set achieves the desired empirical coverage rate and also that zero-total-event studies contain information and impact the inference for the common odds ratio. The proposed method is applied to combine information in the Nissen and Wolski study.
Xiaolin Chen, Jerry Q Cheng, Lu Tian, Minge Xie
2023-10-19T22:26:46Z
http://arxiv.org/abs/2310.13178v1
# Exact Inference for Common Odds Ratio in Meta-Analysis with Zero-Total-Event Studies ###### Abstract Stemming from the high profile publication of Nissen and Wolski (2007) and subsequent discussions with divergent views on how to handle observed zero-total-event studies, defined to be studies which observe zero events in both treatment and control arms, the research topic concerning the common odds ratio model with zero-total-event studies remains an unresolved problem in meta-analysis. In this article, we address this problem by proposing a novel repro samples method to handle zero-total-event studies and make inference for the common odds ratio parameter. The development explicitly accounts for the sampling scheme and does not rely on large sample approximations. It is theoretically justified with a guaranteed finite sample performance. The empirical performance of the proposed method is demonstrated through simulation studies. The results show that the proposed confidence set achieves the desired empirical coverage rate and also that zero-total-event studies contain information and impact the inference for the common odds ratio. The proposed method is also applied to combine information in the Nissen and Wolski study. **Keywords:** Exact confidence interval; Meta-analysis; Odds ratio; Repro samples; Zero-total-event studies ## 1 Introduction Meta-analysis methodology developed for synthesizing information across multiple independent (but comparative) sources has a long history and remains a popular research topic in statistics (Breslow, 1981; Normand, 1999; Sutton and Higgins, 2008; Xie et al., 2011; Cooper et al., 2019). It is particularly useful for settings where a single study is inadequate for drawing a reliable conclusion and conclusions can often be strengthened by aggregating information from all studies of the same or a similar kind. Meta-analysis approaches have become a widely used tool in many fields, such as biomedical research, pathology, library and information science, education and so on. One of the research topics in meta-analysis that remain open is how to handle an observed zero-total-event study, defined to be a study which observes zero events in both treatment and control arms (cf., Finkelstein and Levin, 2012; Liu et al., 2014; Yang et al., 2016). This problem has long been debated since the high profile publication by Nissen and Wolski (2007), as there are divergent but inconclusive views on how to handle zero-total-event studies (Finkelstein and Levin, 2012; Xie et al., 2018). In this article, we revisit this problem and propose a novel exact meta-analysis procedure to handle zero-total-event studies. Our research is motivated by the aforementioned study of Nissen and Wolski (2007) on the drug safety evaluation of the diabetic drug Avandia. In Nissen and Wolski (2007), the authors collected data from 48 clinical studies and conducted a meta-analysis to assess whether Avandia significantly increases the risk of myocardial infarction and death from cardiovascular diseases. Most of these studies reported zero or a very small number of events in one or both of the treatment and control groups. Nissen and Wolski (2007) used Peto's method to combine information across all studies, which effectively discarded more than half of the 48 studies in the analysis of the endpoint of cardiovascular death (25 out of the total 48 studies are zero-total-event studies). This practice was challenged by Diamond et al.
(2007), which initiated a heated debate in the community with diverging views on how to handle observed zero-total-event studies in general. The key difficulties are that 0/0 has no mathematical definition and also that most of the existing meta-analysis methods rely on normality or large sample justifications and therefore are not suited for analysis of zero-total-event studies. Indeed, as stated in Xie et al. (2018), with the probabilities of both treatment and control events \((\pi_{0i},\pi_{1i})\) not equal to 0 (even though very small), the probability of observing a zero-total-event study tends to 0 as the numbers of patients in the two arms \(n_{i}\rightarrow\infty\) and \(m_{i}\rightarrow\infty\). Thus, when a zero-total-event study is observed, it is an indication that the sample sizes are not large enough for this particular underlying setting. Until today, the statistical inference problem at the center of this debate is still open and unanswered (Finkelstein and Levin, 2012; Xie et al., 2018). Consider a typical setting of \(K\) independent clinical trials (control vs treatment): \(X_{i}\sim\text{Binomial}(n_{i},\,\pi_{0i})\) and \(Y_{i}\sim\text{Binomial}(m_{i},\pi_{1i})\), \(i=1,\ldots,K\). We can often express the sample data in \(K\) \(2\times 2\) tables (see Table 1), where \(X_{i}\) and \(Y_{i}\) are the numbers of events in the control and treatment arms of the \(i^{th}\) trial. Often \((\pi_{0i},\pi_{1i})\) is reparameterized to \((\theta_{i},\eta_{i})\), with the log odds ratio \(\theta_{i}=\log\left(\frac{\pi_{1i}}{1-\pi_{1i}}/\frac{\pi_{0i}}{1-\pi_{0i}}\right)\) and \(\eta_{i}=\log\left\{\left(\frac{\pi_{1i}}{1-\pi_{1i}}\right)\left(\frac{\pi_{0i}}{1-\pi_{0i}}\right)\right\}\). A classical common odds ratio model assumes \(\theta_{1}\equiv\ldots\equiv\theta_{K}=\theta\), but the rates \((\pi_{0i},\pi_{1i})\) are allowed to differ from one study to another; cf., Breslow (1981); Cox and Snell (1989); Nissen and Wolski (2007); Finkelstein and Levin (2012); Tian et al. (2009), among others. In _rare event studies_, both \(\pi_{0i}>0\) and \(\pi_{1i}>0\), but they are very small. In this case, the observed data, say \(\{x_{i}^{obs},y_{i}^{obs}\}\), can often be 0 or very small numbers (\(n_{i}\) and \(m_{i}\) can be large, typically in the thousands). The studies with observed data \(x_{i}^{obs}=y_{i}^{obs}=0\) are referred to as _zero-total-event studies_ in the literature (cf., Finkelstein and Levin, 2012; Liu et al., 2014). In this article, we focus on the inference problem of \(\theta\), or more specifically, constructing a level-\(\alpha\) confidence interval for \(\theta\) with guaranteed finite-sample performance in meta-analysis while incorporating potentially many zero-total-event studies. The analysis of rare event data, in particular incorporating zero-total-event studies in a meta-analysis, raises specific statistical challenges and has been intensely studied (Sweeting et al., 2004; Bradburn et al., 2007; Finkelstein and Levin, 2012; Tian et al., 2009; Cai et al., 2010; Bhaumik et al., 2012; Liu et al., 2014; Yang et al., 2016). Most commonly used meta-analysis methods rely on the asymptotic distribution of the combined estimator to make inference. For instance, the widely used inverse-variance weighted method combines point estimators from individual studies, assuming that the distributions of all the estimators can be well approximated by normal distributions.
\begin{table} \begin{tabular}{r|c c|c} & Yes & No & \\ \hline Control & \(X_{i}\) & \(n_{i}-X_{i}\) & \(n_{i}\) \\ Treatment & \(Y_{i}\) & \(m_{i}-Y_{i}\) & \(m_{i}\) \\ \hline Total & \(X_{i}+Y_{i}\) & \((n_{i}+m_{i})-(X_{i}+Y_{i})\) & \(n_{i}+m_{i}\) \\ \end{tabular} \end{table} Table 1: \(2\times 2\) Clinical Study with Control and Treatment The classical Mantel-Haenszel method (Mantel and Haenszel, 1959) and the Peto method (Yusuf et al., 1985) also rely on the normal approximation to the distribution of the combined estimator. However, the normal approximations are ill-suited for rare events data and results for rare events data in practice often yield an unacceptably low coverage probability (Bradburn et al., 2007; Tian et al., 2009). In addition, the commonly practiced "continuity correction" (i.e. adding 0.5 or 0.1 to zero cells) has been shown with compelling evidence to have an undesirable impact on inference outcomes (Sweeting et al., 2004; Bradburn et al., 2007). Conditional likelihood inference methods have also been proposed for meta-analysis of \(2\times 2\) tables (e.g., Cox and Snell, 1989). In particular, one can make inference relying on a conditional likelihood function and the finite sample Fisher exact test, for which computing algorithms and small sample approximations are developed (Mehta et al., 1985; Davison, 1988). Under the conditional inference framework, the conditional likelihood function of a zero-total-event study is constant, and thus the study does not contribute to the inference. However, based on the likelihood principle (Berger and Wolpert, 1988), Xie et al. (2018) showed that the conditional likelihood, although maintaining test size, loses power (compared to the full likelihood method) and that the Fisher exact test is not particularly suited for analysis of zero-total-event clinical trials, a conclusion also reached independently in Finkelstein and Levin (2012). Bayesian methods have also been employed to analyze zero-total-event studies, in which zero-total-event studies typically contribute to the meta-analysis inference. Since the use of priors imposes an additional model assumption and rare events data are very sensitive to the prior choices, it is argued in the field that a Bayesian approach "may raise more questions than they settle" (cf. Finkelstein and Levin (2012)). In recent years, several finite sample methods have been proposed for rare events data but for different inference problems. For instance, Tian et al. (2009) proposes an exact method for meta-analysis of the risk difference \(\pi_{1k}-\pi_{0k}\). Although Tian et al. (2009) does not use large sample approximations, it is for the risk difference and cannot handle the odds ratio parameter. Yang et al. (2016) reviews exact meta-analysis methods with a focus on rare events and shows that the method by Tian et al. (2009) is a special case of Xie et al. (2011). Cai et al. (2010) suggests using a Poisson model to analyze the rare event \(2\times 2\) tables. The approach avoids the difficult question of \(0/0\), but by changing the distribution assumption it also changes the original inference target in the two binomial \(2\times 2\) tables. Despite all the efforts, it remains an open and unanswered inference problem in statistics on how to handle zero-total-event studies in the analysis of the common odds ratio (Finkelstein and Levin, 2012; Xie et al., 2018). The debate on zero-total-event studies is centered on two questions: (a) Does a zero-total-event study possess any information concerning the common odds ratio parameter?
(b) If it does, how can we effectively incorporate zero-total-event studies in meta-analysis? In Xie et al. (2018), the authors showed that zero-total-event studies indeed possess information about the common odds ratio parameter in meta-analysis. In the current article, we provide a solution to the second question on how to effectively include zero-total-event studies to help make inference on the common \(\theta\) in meta-analysis. Our solution is based on a newly developed inferential framework called the _repro samples method_ (Xie and Wang, 2022). The repro samples method uses a simple yet fundamental idea: Study the performance of artificial samples that are generated by mimicking the sampling mechanism of the observed data; the artificial samples are then used to help quantify the uncertainty in the estimation of models and parameters. The repro samples development is deeply rooted in, and has grown from, prior developments of artificial-sample-based inference procedures across Bayesian, frequentist and fiducial paradigms (i.e., approximate Bayesian computing, Bootstrap, generalized fiducial inference and inferential models; see further discussions in Xie and Wang, 2022). It does not need to rely on likelihood functions or large sample theories, and it is especially effective for difficult inference problems in which regularity conditions do not hold and thus regular inference approaches do not apply. Xie and Wang (2022) and Wang et al. (2022) used the repro samples framework to address two open inference questions in statistics concerning (a) Gaussian mixture and (b) high dimensional regression models, where the authors successfully provided finite-sample confidence sets for discrete unknown parameters (i.e., the unknown number of components in the mixture model and the unknown sparse model in the high dimensional model) along with joint confidence sets for the unknown discrete and also the remaining model parameters. In our current paper, the problem does not involve any discrete parameters; however, we can still use some of the key techniques in the repro samples framework to develop a novel methodology with finite sample supporting theories to address the highly non-trivial inference problem concerning zero-total-event studies. The rest of this article is organized as follows. Section 2 introduces the repro samples method and our proposed inference procedure. Section 3 provides extensive simulation studies to examine the performance of the proposed method and compare it with the popular Mantel-Haenszel and Peto methods. A new analysis of the Avandia data in Nissen and Wolski (2007) using the proposed repro samples method is provided in Section 4. A brief summary and discussion is given in Section 5. ## 2 Repro Samples Method for Meta-analysis of \(2\times 2\) Tables Since the repro samples method is relatively new, we first provide in Section 2.1 a brief description of the method, based on which we provide our new development tailored to zero-total-event studies in Sections 2.2 and 2.3. ### Notations, terminologies and brief review of repro samples method Suppose the sample data \(Y\in\mathcal{Y}\) are generated from an _algorithmic model_: \[Y=G(\theta,U), \tag{1}\] where \(G(\cdot,\cdot)\) is a known mapping from \(\Theta\times\mathcal{U}\mapsto\mathcal{Y}\), \(\theta\in\Theta\) is the model parameter and \(U=(U_{1},\ldots,U_{r})^{\top}\in\mathcal{U}\subset R^{r}\), \(r>0\), is a random vector whose distribution is known or can be simulated.
Thus, given \(\theta\in\Theta\), we know how to simulate data \(Y\) from (1). In fact, this is the only assumption needed in the repro samples development. The model \(G(\cdot,\cdot)\) can be very complicated, in either an explicit or implicit form, including complex examples such as differential equations or generative neural networks. As long as we can generate \(Y\) for a given \(\theta\), we can apply the method. Denote the observed data by \(y^{obs}=G(\theta^{(o)},u^{rel})\), where \(\theta^{(o)}\in\Theta\) is the true value and \(u^{rel}\) the corresponding (unknown) realization of \(U\). Let \(T(\cdot,\cdot)\) be a mapping function from \(\mathcal{U}\times\Theta\rightarrow\mathcal{T}\subseteq R^{q}\), for some \(q\leq n\). Also, for each given \(\theta\), let \(B_{\alpha}(\theta)\) be a Borel set such that \[\mathbb{P}\left\{T(U,\theta)\in B_{\alpha}(\theta)\right\}\geq\alpha,\quad 0<\alpha<1. \tag{2}\] The function \(T\) is referred to as a _nuclear mapping_ function. A repro samples method constructs a subset of \(\Theta\): \[\Gamma_{\alpha}(y^{obs})=\left\{\theta:\exists\,u^{*}\in\mathcal{U}\text{ such that }y^{obs}=G(\theta,u^{*}),\,T(u^{*},\theta)\in B_{\alpha}(\theta)\right\}\subset\Theta. \tag{3}\] In other words, for a potential value \(\theta\), if there exists a \(u^{*}\) such that the artificial sample \(y^{*}=G(\theta,u^{*})\) matches \(y^{obs}\) (i.e., \(y^{*}=y^{obs}\)) and \(T(u^{*},\theta)\in B_{\alpha}(\theta)\), then we keep this \(\theta\) in the set. Since \(y^{obs}=G(\theta^{(o)},u^{rel})\), if \(T(u^{rel},\theta^{(o)})\in B_{\alpha}(\theta^{(o)})\), then \(\theta^{(o)}\in\Gamma_{\alpha}(y^{obs})\). Similarly, under the model \(Y=G(\theta^{(o)},U)\), if \(T(U,\theta^{(o)})\in B_{\alpha}(\theta^{(o)})\), then \(\theta^{(o)}\in\Gamma_{\alpha}(Y)\). Thus, by construction, \(\mathbb{P}\big{\{}\theta^{(o)}\in\Gamma_{\alpha}(Y)\big{\}}\geq\mathbb{P}\big{\{}T(U,\theta^{(o)})\in B_{\alpha}(\theta^{(o)})\big{\}}\geq\alpha.\) This proves that \(\Gamma_{\alpha}(y^{obs})\) is a level-\(\alpha\) confidence set for \(\theta^{(o)}\). This development is likelihood-free and does not need to rely on any large sample theories. The repro samples development utilizes the ideas of _inversion_ and _matching of artificial and observed samples_. Let's illustrate the development using a very simple toy example of \(Y\sim N(\theta,1)\). In the form of (1), \(Y=\theta+U\), where \(U\sim N(0,1)\). Suppose the true underlying parameter value is \(\theta^{(o)}=1.35\) and the realization is \(u^{rel}=1.06\), giving us a single observed data point \(y^{obs}=2.41\). We only know \(y^{obs}=2.41\) and that \(u^{rel}\) is a realization from \(N(0,1)\), but we do not know its value \(1.06\). We would like to make an inference for \(\theta^{(o)}\). Let \(T(U,\theta)=U\); then the level-95% Borel set in (2) is the interval \((-1.96,1.96)\). By (3), we keep and only keep those potential \(\theta\) values that can reproduce \(y^{obs}=2.41\) by setting (matching) \(\theta+u^{*}=2.41\) with a (potential) realized error \(u^{*}\in(-1.96,1.96)\). This method of getting the set of \(\theta\)'s is essentially an inversion procedure, and it leads us to a level-95% confidence set \((0.45,4.37)\), which is exactly the best possible level-95% confidence interval obtained by the classical frequentist method when observing the single data point \(y^{obs}=2.41\). The repro samples method does not need to involve the likelihood function and has a finite sample performance guarantee.
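As a side note, the inversion in this toy example is easy to reproduce numerically; the following is a minimal sketch (in Python, our own illustration rather than code from the paper; the grid range and resolution are arbitrary choices) that recovers the level-95% set \((0.45,4.37)\) by matching \(\theta+u^{*}\) to \(y^{obs}=2.41\).

```python
import numpy as np

# Toy example: Y = theta + U with U ~ N(0,1), observed y_obs = 2.41.
# Nuclear mapping T(U, theta) = U, level-95% Borel set B = (-1.96, 1.96).
y_obs = 2.41
z = 1.96

# Inversion: keep every theta for which the u* that reproduces y_obs
# (namely u* = y_obs - theta) falls inside the Borel set.
theta_grid = np.linspace(-5.0, 10.0, 150001)
u_star = y_obs - theta_grid
kept = theta_grid[np.abs(u_star) < z]

print(kept.min(), kept.max())   # approximately 0.45 and 4.37
```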
Xie and Wang (2022) also showed that the repro samples method is more general and flexible and subsumes the Neyman-Pearson framework as a special case. By using the repro samples development in our current paper on meta-analysis of \(2\times 2\) tables, we ask, for a potential value of the common log odds ratio parameter \(\theta\) and a given confidence level \(\alpha\), whether the \(\theta\) value can potentially be used to generate an artificial data set that matches the observed studies. If it does, we keep the \(\theta\) value in our level-\(\alpha\) confidence set. One complication is that there are also nuisance parameters \(\boldsymbol{\eta}=(\eta_{1},\ldots,\eta_{K})^{T}\). We provide our detailed development in Section 2.2. ### Repro samples method and finite-sample confidence set for the common odds ratio in \(2\times 2\) tables For the common odds ratio model in the \(2\times 2\) tables, we have \(\pi_{0,i}=e^{(\theta+\eta_{i})/2}\) and \(\pi_{1,i}=e^{(\theta-\eta_{i})/2}\), for \(i=1,\ldots,K\). We write \(\boldsymbol{X}=(X_{1},\cdots,X_{K})^{T}\) and \(\boldsymbol{Y}=(Y_{1},\cdots,Y_{K})^{T}\). In the form of (1), the pair of binomial models \(X_{i}\sim\text{Binomial}(n_{i},\pi_{0i})\) and \(Y_{i}\sim\text{Binomial}(m_{i},\pi_{1i})\), \(i=1,\ldots,K\), can be re-expressed as \[X_{i}=\sum_{j=1}^{n_{i}}I\{U_{ij}\leq e^{(\theta+\eta_{i})/2}\}\text{ and }Y_{i}=\sum_{j=1}^{m_{i}}I\{V_{ij}\leq e^{(\theta-\eta_{i})/2}\},\quad\text{ for }i=1,\ldots,K, \tag{4}\] where \(U_{ij}\) and \(V_{ij}\) are iid \(U(0,1)\) distributed random variables, for \(j=1,\ldots,n_{i}\) or \(m_{i}\), \(i=1,\ldots,K.\) We observe \(\boldsymbol{x}^{obs}=(x_{1}^{obs},\ldots,x_{K}^{obs})^{T}\) and \(\boldsymbol{y}^{obs}=(y_{1}^{obs},\ldots,y_{K}^{obs})^{T}\), with \(x_{i}^{obs}=\sum_{j=1}^{n_{i}}I\{u_{ij}^{rel}\leq e^{(\theta^{(o)}+\eta_{i}^{(o)})/2}\}\) and \(y_{i}^{obs}=\sum_{j=1}^{m_{i}}I\{v_{ij}^{rel}\leq e^{(\theta^{(o)}-\eta_{i}^{(o)})/2}\}\), where \(\theta^{(o)}\) and \(\boldsymbol{\eta}^{(o)}=(\eta_{1}^{(o)},\ldots,\eta_{K}^{(o)})^{T}\) are the true parameter values and \(\boldsymbol{u}_{i}^{rel}=(u_{i1}^{rel},\ldots,u_{in_{i}}^{rel})^{T}\), \(\boldsymbol{v}_{i}^{rel}=(v_{i1}^{rel},\ldots,v_{im_{i}}^{rel})^{T}\) are the corresponding realized random vectors that generated \(\boldsymbol{x}^{obs}\) and \(\boldsymbol{y}^{obs}\), respectively. The number of tables \(K\) and each table's \((n_{i},m_{i})\) are given (they do not need to go to infinity). Among the \(K\) tables, we allow many zero-total-event studies with \(x_{i}^{obs}=y_{i}^{obs}=0\), but assume that at least one \(x_{i}^{obs}\neq 0\) and at least one \(y_{i}^{obs}\neq 0\). Our goal is to use a repro samples method to construct a performance guaranteed level-\(\alpha\) confidence interval for the common log odds ratio parameter \(\theta^{(o)}\) while taking care of the remaining \(K\) nuisance model parameters \(\eta_{i}\), \(i=1,\ldots,K\). The Mantel-Haenszel statistic is a commonly used estimator of the common log odds ratio, \[W_{MH}(\mathbf{X},\mathbf{Y})=\log\Bigg{(}\sum_{k=1}^{K}R_{k}/\sum_{k=1}^{K}S_{k}\Bigg{)}\] where \(R_{k}=X_{k}(m_{k}-Y_{k})/(m_{k}+n_{k})\) and \(S_{k}=Y_{k}(n_{k}-X_{k})/(m_{k}+n_{k})\). To make inference, the Mantel-Haenszel method relies on large sample theorems by which \[W(\mathbf{X},\mathbf{Y};\theta)=W_{MH}(\mathbf{X},\mathbf{Y})-\theta \tag{5}\] is asymptotically normally distributed as both \(n_{i}\rightarrow\infty\) and \(m_{i}\rightarrow\infty\), for all \(i=1,\ldots,K\) (Hauck, 1979; Breslow, 1981).
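As a small illustration of \(W_{MH}\) in (5) (our own sketch in Python, not code from the paper; the toy numbers are invented), the following computes the Mantel-Haenszel log odds ratio from the \(K\) tables; note that a zero-total-event study has \(R_{k}=S_{k}=0\) and therefore contributes nothing to either sum.

```python
import numpy as np

def mh_log_odds_ratio(x, y, n, m):
    """Mantel-Haenszel log odds ratio W_MH for K 2x2 tables.

    x, y : event counts in the control and treatment arms (length-K)
    n, m : sample sizes of the control and treatment arms
    """
    x, y, n, m = map(np.asarray, (x, y, n, m))
    R = x * (m - y) / (m + n)          # R_k in the text
    S = y * (n - x) / (m + n)          # S_k in the text
    return np.log(R.sum() / S.sum())   # undefined if either sum is zero

# The third study is a zero-total-event study: it adds 0 to both sums
# and so leaves W_MH unchanged.
print(mh_log_odds_ratio([3, 2, 0], [2, 1, 0], [100, 300, 600], [100, 300, 300]))
```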
In rare event studies, especially those containing zero-total-event studies, the large sample theorems do not apply, so the use of the Mantel-Haenszel method is not theoretically justified for zero-total-event studies. However, due to its simplicity and good empirical performance especially in large sample situations, we use \(W(\mathbf{X},\mathbf{Y};\theta)\) in (5) to help develop the nuclear mapping function in our repro samples method to obtain a performance guaranteed finite sample confidence interval for \(\theta\). For the sample data generated with parameter values \((\theta,\mathbf{\eta}^{T})\), \(X_{i}=\sum_{j=1}^{n_{i}}I\{U_{ij}\leq e^{(\eta_{i}+\theta)/2}\}\) and \(Y_{i}=\sum_{j=1}^{m_{i}}I\{V_{ij}\leq e^{(\eta_{i}-\theta)/2}\}\), the distribution of \(W(\mathbf{X},\mathbf{Y};\theta)\) depends on the \(K\) nuisance parameters \(\mathbf{\eta}=(\eta_{1},\ldots,\eta_{K})^{T}\). We use a profile approach to control the impact of the nuisance parameters \(\mathbf{\eta}\). Specifically, let \(\widetilde{X}_{i}=\sum_{j=1}^{n_{i}}I\{U^{\prime}_{ij}\leq e^{(\widetilde{\eta}_{i}+\theta)/2}\}\) and \(\widetilde{Y}_{i}=\sum_{j=1}^{m_{i}}I\{V^{\prime}_{ij}\leq e^{(\widetilde{\eta}_{i}-\theta)/2}\}\), where \(U^{\prime}_{ij}\) and \(V^{\prime}_{ij}\) are iid \(U(0,1)\) distributed random variables. We define, for \(t\geq 0\), \[\gamma_{(\theta,\widetilde{\eta})}\{t\}=\mathbf{P}\left\{\big{|}W(\widetilde{\mathbf{X}},\widetilde{\mathbf{Y}};\theta)\big{|}<t\right\}. \tag{6}\] In the special case with \(\widetilde{\mathbf{\eta}}=\mathbf{\eta}\), we have \(\gamma_{(\theta,\eta)}\{|W(\mathbf{X},\mathbf{Y};\theta)|\}\sim U(0,1)\). In particular, we can show that \(1-\gamma_{(\theta,\widetilde{\eta})}\left\{|W(\mathbf{x},\mathbf{y};\theta)|\right\}=\mathbf{P}\big{\{}\big{|}W(\widetilde{\mathbf{X}},\widetilde{\mathbf{Y}};\theta)\big{|}\geq\big{|}W(\mathbf{x},\mathbf{y};\theta)\big{|}\big{\}}\) is the \(p\)-value to reject the null hypothesis \(H_{0}:\) a sample dataset \((\mathbf{x},\mathbf{y})\) is generated from \((\theta,\tilde{\mathbf{\eta}}^{T})\), when in fact the sample dataset \((\mathbf{x},\mathbf{y})\) is generated from \((\theta,\mathbf{\eta}^{T})\). Following the profile method proposed in Xie and Wang (2022), we define our nuclear mapping function as \[T(\mathbf{X},\mathbf{Y};\theta)=\min_{\tilde{\mathbf{\eta}}\,\in\,\mathbf{R}^{K}}\gamma_{(\theta,\tilde{\mathbf{\eta}}^{T})}\left\{|W(\mathbf{X},\mathbf{Y};\theta)|\right\} \tag{7}\] It is clear that \(T(\mathbf{X},\mathbf{Y};\theta)\leq\gamma_{(\theta,\eta^{T})}\{|W(\mathbf{X},\mathbf{Y};\theta)|\}\), i.e., \(T(\mathbf{X},\mathbf{Y};\theta)\) is dominated by \(\gamma_{(\theta,\eta^{T})}\{|W(\mathbf{X},\mathbf{Y};\theta)|\}\). Since \(X_{i}=\sum_{j=1}^{n_{i}}I\{U_{ij}\leq e^{(\eta_{i}+\theta)/2}\}\) and \(Y_{i}=\sum_{j=1}^{m_{i}}I\{V_{ij}\leq e^{(\eta_{i}-\theta)/2}\}\), the mapping \(T(\mathbf{X},\mathbf{Y};\theta)\) is a function of \(\mathbf{U}=\{U_{ij},1\leq j\leq n_{i},1\leq i\leq K\}\), \(\mathbf{V}=\{V_{ij},1\leq j\leq m_{i},1\leq i\leq K\}\) and \((\theta,\mathbf{\eta}^{T}).\) Thus, for a given \(\theta\), the distribution of \(T(\mathbf{X},\mathbf{Y};\theta)\) still depends on the nuisance parameter \(\mathbf{\eta}\). However, we always have \[\mathbf{P}\left\{T(\mathbf{X},\mathbf{Y};\theta)\leq\alpha\right\}\geq\mathbf{P}\left[\gamma_{(\theta,\eta^{T})}\{|W(\mathbf{X},\mathbf{Y};\theta)|\}\leq\alpha\right]=\alpha. \tag{8}\] Thus, a Borel set corresponding to (2) is \(B_{\alpha}=(0,\alpha]\), which is free of both \(\theta\) and \(\mathbf{\eta}\). Following (3), the level-\(\alpha\) repro samples confidence set for \(\theta\) is: \[\Gamma_{\alpha}(\mathbf{x}_{obs},\mathbf{y}_{obs}) =\left\{\theta:\exists\left(\mathbf{u}^{*},\mathbf{v}^{*}\right)\text{ and }\mathbf{\eta}\text{ such that }(\mathbf{x}^{obs},\mathbf{y}^{obs})=(\mathbf{x}^{*},\mathbf{y}^{*}),\right.\] \[\left.T(\mathbf{x}^{*},\mathbf{y}^{*};\theta)\leq\alpha\right\}\] \[=\left\{\theta:\exists\left(\mathbf{u}^{*},\mathbf{v}^{*}\right)\text{ and }\mathbf{\eta}\text{ such that }(\mathbf{x}^{obs},\mathbf{y}^{obs})=(\mathbf{x}^{*},\mathbf{y}^{*}),\right.\] \[\left.T(\mathbf{x}^{obs},\mathbf{y}^{obs};\theta)\leq\alpha\right\}\] \[=\left\{\theta:T(\mathbf{x}_{obs},\mathbf{y}_{obs};\theta)\leq\alpha\right\}, \tag{9}\] where \(\mathbf{x}^{*}=(x_{1}^{*},\ldots,x_{K}^{*})^{T}\) and \(\mathbf{y}^{*}=(y_{1}^{*},\ldots,y_{K}^{*})^{T}\) with \(x_{i}^{*}=\sum_{j=1}^{n_{i}}I\{u_{ij}^{*}\leq e^{(\theta+\eta_{i})/2}\}\) and \(y_{i}^{*}=\sum_{j=1}^{m_{i}}I\{v_{ij}^{*}\leq e^{(\theta-\eta_{i})/2}\}\), for \(1\leq i\leq K\). The first equation of (9) follows from the repro samples approach. The last equation holds since, for a given \(\theta\), there always exist \((\mathbf{u}^{*},\mathbf{v}^{*})\) and \(\mathbf{\eta}\) such that \((\mathbf{x}^{obs},\mathbf{y}^{obs})=(\mathbf{x}^{*},\mathbf{y}^{*})\). By equation (8), we have the following theorem that \(\Gamma_{\alpha}(\mathbf{x}_{obs},\mathbf{y}_{obs})\) in (9) is a level-\(\alpha\) confidence set for the common log odds ratio \(\theta^{(o)}\). **Theorem 1**.: _Suppose, under the above setup, that the random samples are generated using the parameter values \((\theta^{(o)},\boldsymbol{\eta}^{(o)T})\), i.e., \(X_{i}=\sum_{j=1}^{n_{i}}I\{U_{ij}\leq e^{(\eta_{i}^{(o)}+\theta^{(o)})/2}\}\) and \(Y_{i}=\sum_{j=1}^{m_{i}}I\{V_{ij}\leq e^{(\eta_{i}^{(o)}-\theta^{(o)})/2}\}\). Then we have_ \[\mathbf{P}\left\{\theta^{(o)}\in\Gamma_{\alpha}(\boldsymbol{X},\boldsymbol{Y})\right\}\geq\alpha.\] ### Monte-Carlo implementation and computing algorithm To construct the level-\(\alpha\) confidence set in (9), we need to calculate \(T(\boldsymbol{x}_{obs},\boldsymbol{y}_{obs};\theta)=\min_{\widetilde{\eta}}\gamma_{(\theta,\widetilde{\eta}^{T})}\{|W(\boldsymbol{x}_{obs},\boldsymbol{y}_{obs};\theta)|\}\), for a potential \(\theta\) value. This can be done by using a Monte-Carlo method to approximate \(\gamma_{(\theta,\widetilde{\boldsymbol{\eta}}^{T})}\{|W(\boldsymbol{x}_{obs},\boldsymbol{y}_{obs};\theta)|\}\). Specifically, for any set of fixed \((\theta,\widetilde{\boldsymbol{\eta}}^{T})\), we can approximate the function \(\gamma_{(\theta,\widetilde{\boldsymbol{\eta}}^{T})}\{t\}\) by \[\gamma_{(\theta,\widetilde{\boldsymbol{\eta}}^{T})}\{t\}\approx\frac{1}{M}\sum_{s=1}^{M}I\left\{\left|W(\widetilde{\boldsymbol{x}}^{(s)},\widetilde{\boldsymbol{y}}^{(s)};\theta)\right|<t\right\}, \tag{10}\] where \(\widetilde{\boldsymbol{x}}^{(s)}=(\widetilde{x}_{1}^{(s)},\ldots,\widetilde{x}_{K}^{(s)})^{T}\), \(\widetilde{\boldsymbol{y}}^{(s)}=(\widetilde{y}_{1}^{(s)},\ldots,\widetilde{y}_{K}^{(s)})^{T}\), \(\widetilde{x}_{i}^{(s)}=\sum_{j=1}^{n_{i}}I\{U_{ij}^{(s)}\leq e^{(\widetilde{\eta}_{i}+\theta)/2}\}\), \(\widetilde{y}_{i}^{(s)}=\sum_{j=1}^{m_{i}}I\{V_{ij}^{(s)}\leq e^{(\widetilde{\eta}_{i}-\theta)/2}\}\) and \((U_{ij}^{(s)},V_{ij}^{(s)})\) are simulated iid \(U(0,1)\) random numbers, for \(s=1,\ldots,M\).
Thus, we can approximate \(\gamma_{(\theta,\widetilde{\boldsymbol{\eta}}^{T})}\{|W(\boldsymbol{x}_{obs},\boldsymbol{y}_{obs};\theta)|\}\), which is only a function of \((\theta,\widetilde{\boldsymbol{\eta}}^{T})\). We then call an optimization program to find its minimum value over \(\tilde{\boldsymbol{\eta}}\), and this leads to \(T(\boldsymbol{x}_{obs},\boldsymbol{y}_{obs};\theta)\), which is a function of \(\theta\) when given \((\boldsymbol{x}_{obs},\boldsymbol{y}_{obs})\). We provide below a computing algorithm: Step 1: Compute \(W_{MH}(\boldsymbol{x}_{obs},\boldsymbol{y}_{obs})\) and select grids for \(\theta\) on its range, say \(\theta_{1},\cdots,\theta_{Q}\). Step 2: Set \(\widetilde{\Theta}=\emptyset\). For \(m=1,2,\cdots,Q\), repeat the following computation: Step 2a: Calculate \[T(\mathbf{x}_{obs},\mathbf{y}_{obs};\theta_{m})=\min_{\widetilde{\eta}}\frac{1}{M}\sum_{s=1}^{M}I\left\{\left|W(\widetilde{\mathbf{x}}^{(s)},\widetilde{\mathbf{y}}^{(s)};\theta_{m})\right|<|W_{MH}(\mathbf{x}_{obs},\mathbf{y}_{obs})-\theta_{m}|\right\},\] where \(\widetilde{\mathbf{x}}^{(s)}=(\widetilde{x}_{1}^{(s)},\ldots,\widetilde{x}_{K}^{(s)})^{T}\), \(\widetilde{\mathbf{y}}^{(s)}=(\widetilde{y}_{1}^{(s)},\ldots,\widetilde{y}_{K}^{(s)})^{T}\), \(\widetilde{x}_{i}^{(s)}=\sum_{j=1}^{n_{i}}I\{u_{ij}^{(s)}\leq e^{(\widetilde{\eta}_{i}+\theta_{m})/2}\}\), \(\widetilde{y}_{i}^{(s)}=\sum_{j=1}^{m_{i}}I\{v_{ij}^{(s)}\leq e^{(\widetilde{\eta}_{i}-\theta_{m})/2}\}\) and \((u_{ij}^{(s)},v_{ij}^{(s)})\) are simulated iid \(U(0,1)\) random numbers, for \(s=1,\ldots,M\). Step 2b: For given \(0<\alpha<1\), if \(T(\mathbf{x}_{obs},\mathbf{y}_{obs};\theta_{m})\leq\alpha\), update \(\widetilde{\Theta}=\widetilde{\Theta}\cup\{\theta_{m}\}\). Step 3: Compute \(\min\{\widetilde{\Theta}\}\) and \(\max\{\widetilde{\Theta}\}\). The \(100\alpha\%\) confidence interval for \(\theta\) is \([\min\{\widetilde{\Theta}\},\max\{\widetilde{\Theta}\}]\). ## 3 Simulation Studies In this section, we examine the empirical performance of our repro samples method on making inference for the common log odds ratio \(\theta\), and also make comparisons with the popular Mantel-Haenszel and Peto methods. In particular, we compare the empirical coverage probabilities and average lengths of the confidence intervals based on 500 replications with \(M=1000\). To generate simulated data, we design a context similar to the structure of the Avandia dataset, following Tian et al. (2009) and Liu et al. (2014). Concretely, \(K=48\) independent \(2\times 2\) tables are generated using the same sample sizes as the Avandia dataset. The incidence rate \(\pi_{0i}^{(o)}\) in the \(i\)th trial is generated from a uniform distribution \(U(0,0.08)\). Then the incidence rate \(\pi_{1i}^{(o)}\) is determined by the relationship \(\text{logit}(\pi_{1i}^{(o)})=\theta^{(o)}+\text{logit}(\pi_{0i}^{(o)})\), where several true common log odds ratio values \(\theta^{(o)}\) under various scenarios are examined. Finally, the \(i\)th table is simulated from the binomial distributions with the generated \((\pi_{0i}^{(o)},\pi_{1i}^{(o)})\). In the implementation of our repro samples algorithm, we confine our potential \(\theta\) values to within the \(99.95\%\) confidence interval for \(\theta^{(o)}\) obtained using the Mantel-Haenszel approach. For each \(\theta\), it is noted that the nuclear mapping involves the minimization over \(\widetilde{\boldsymbol{\eta}}=(\widetilde{\eta}_{1},\cdots,\widetilde{\eta}_{K})^{T}\) with \(K=48\). We apply the R function 'optim' in the package 'stats' to find the minimum value.
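Before turning to the choice of starting values for 'optim', the following is a minimal sketch of Steps 1-3 in Python (this is not the implementation used in the paper, which is in R; scipy.optimize.minimize with Nelder-Mead stands in for 'optim', common random numbers are reused across candidate \(\widetilde{\boldsymbol{\eta}}\) values, and the toy data, grid limits and all function names are our own illustrative choices).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def w_mh(x, y, n, m):
    """Mantel-Haenszel log odds ratio W_MH; returns inf if a sum is zero."""
    R = x * (m - y) / (m + n)
    S = y * (n - x) / (m + n)
    if R.sum() <= 0 or S.sum() <= 0:
        return np.inf
    return np.log(R.sum() / S.sum())

def T_profile(theta, x_obs, y_obs, n, m, M=200):
    """Nuclear mapping T(x_obs, y_obs; theta): profile the Monte-Carlo
    estimate of gamma (equation (10)) over the nuisance vector eta-tilde."""
    K = len(n)
    t_obs = abs(w_mh(x_obs, y_obs, n, m) - theta)
    # common random numbers: one U(0,1) table per Monte-Carlo copy
    U = rng.random((M, K, n.max()))
    V = rng.random((M, K, m.max()))

    def gamma(eta):
        p0 = np.exp((eta + theta) / 2.0)   # thresholds for X, as in model (4)
        p1 = np.exp((eta - theta) / 2.0)   # thresholds for Y
        count = 0
        for s in range(M):
            xs = np.array([(U[s, i, :n[i]] <= p0[i]).sum() for i in range(K)])
            ys = np.array([(V[s, i, :m[i]] <= p1[i]).sum() for i in range(K)])
            if abs(w_mh(xs, ys, n, m) - theta) < t_obs:
                count += 1
        return count / M

    eta0 = np.full(K, -6.0)                # crude starting value for eta-tilde
    # Nelder-Mead on a Monte-Carlo proportion is crude, but serves as a sketch.
    res = minimize(gamma, eta0, method="Nelder-Mead",
                   options={"maxiter": 200, "fatol": 1e-3})
    return res.fun

# Toy data: two informative studies and one zero-total-event study.
x_obs = np.array([3, 2, 0]); n = np.array([100, 300, 600])
y_obs = np.array([2, 1, 0]); m = np.array([100, 300, 300])
grid = np.linspace(-3.0, 3.0, 31)          # Step 1: grid of candidate theta
kept = [th for th in grid if T_profile(th, x_obs, y_obs, n, m) <= 0.95]
print(min(kept), max(kept))                # Step 3: level-95% interval endpoints
```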
In the implementation of minimization via 'optim', an initial value of \(\widetilde{\boldsymbol{\eta}}\) needs to be specified. Recall that \(\eta_{i}=\log\left(\pi_{1i}/(1-\pi_{1i})\right)+\log\left(\pi_{0i}/(1-\pi_{0i})\right)\) for \(i=1,\cdots,K\). Then, if the \(i\)th trial has nonzero events in both groups, the initial value of \(\eta_{i}\) is given by \(\hat{\eta}_{i}=\log\left(\frac{\hat{\pi}_{1i}}{1-\hat{\pi}_{1i}}\right)+\log\left(\frac{\hat{\pi}_{0i}}{1-\hat{\pi}_{0i}}\right)\), where \(\hat{\pi}_{0i}=x_{i}/n_{i}\) and \(\hat{\pi}_{1i}=y_{i}/m_{i}\). However, this will not work for trials with zero events in one arm. In view of the similarity among all the trials, we use \(\min\{\hat{\eta}_{k}:k\text{th trial has nonzero events in both groups},1\leq k\leq K\}\) as the initial value of \(\eta_{i}\) for trials with zero events in one or both groups. Tables 2 to 4 list the empirical results based on 500 data replications when the common odds ratio \(\theta\) takes different values. Based on these tables, we can see that the proposed repro samples method produces valid confidence intervals for the prespecified confidence level of \(95\%\) for all different \(\theta\) values. The empirical coverages of the Mantel-Haenszel method are mostly on target, although a few of them show slight undercoverage. The Peto method only works for moderate \(\theta\)'s, and breaks down for large and small \(\theta\)'s. In addition, we can see that the interval lengths of the repro samples method are similar to, but slightly longer than, those obtained using the Mantel-Haenszel method. To ensure the coverage rates across all cases, the repro samples approach is slightly conservative, as expected from equation (8). Finally, we conduct a numerical study to demonstrate that our proposed repro samples method can effectively extract information hidden in the zero-total-event studies for the common odds ratio parameter. Suppose we have two datasets, both of which include two non-zero-total-event studies and three zero-total-event studies: (a) (3/100, 2/100), (2/300, 1/300), (0/600, 0/300), (0/600, 0/300), (0/300, 0/300); and (b) (2/100, 2/100), (1/50, 1/50), (0/100, 0/300), (0/100, 0/300), (0/100, 0/300). For each of the two datasets, we use our algorithm to obtain two level-95% confidence intervals for the common log odds ratio \(\theta^{(o)}\), one using all five studies and the other using only the two non-zero-total-event studies (excluding the three zero-total-event studies). Figure 1 depicts the comparisons of these two sets of intervals. Based on the figure, we can see that the confidence intervals obtained by excluding the three zero-total-event studies are significantly wider than the intervals obtained by including them. This set of results further affirms the conclusion that zero-total-event studies have information and impact the inference of the common odds ratio, as discussed in Xie et al. (2018). Overall, our repro samples method provides a solution to effectively include zero-total-event studies in the analysis of the common odds ratio parameter in meta-analysis.
\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & & & & & & & True odds ratio & & & & \\ \cline{3-11} & & 1.0 & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 & 1.6 & 1.7 & 1.8 & 1.9 \\ \hline MH & CP & 0.946 & 0.944 & 0.940 & 0.936 & 0.952 & 0.960 & 0.962 & 0.956 & 0.954 & 0.964 \\ & Length & 0.772 & 0.753 & 0.743 & 0.730 & 0.721 & 0.720 & 0.705 & 0.700 & 0.691 & 0.687 \\ Peto & CP & 0.946 & 0.944 & 0.940 & 0.938 & 0.956 & 0.968 & 0.966 & 0.960 & 0.958 & 0.966 \\ & Length & 0.769 & 0.747 & 0.729 & 0.710 & 0.696 & 0.684 & 0.666 & 0.655 & 0.638 & 0.630 \\ Repro & CP & 0.974 & 0.966 & 0.974 & 0.962 & 0.966 & 0.970 & 0.976 & 0.970 & 0.964 & 0.980 \\ & Length & 0.891 & 0.870 & 0.858 & 0.834 & 0.830 & 0.829 & 0.818 & 0.804 & 0.801 & 0.792 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparisons of MH, Peto and repro samples by mimicking the structure of Avandia dataset with true common odds ratio \(\theta^{(o)}\) being 1.0 to 1.9. \begin{table} \begin{tabular}{r r r r r r r r r r r r} \hline \hline & & \multicolumn{10}{c}{True odds ratio} \\ \cline{3-10} & & 1/1.8 & 1/1.6 & 1/1.4 & 1/1.2 & 1 & 1.2 & 1.4 & 1.6 & 1.8 \\ \hline MH & CP & 0.972 & 0.952 & 0.952 & 0.950 & 0.946 & 0.940 & 0.952 & 0.962 & 0.954 \\ & Length & 0.897 & 0.864 & 0.832 & 0.804 & 0.772 & 0.743 & 0.721 & 0.705 & 0.691 \\ Peto & CP & 0.972 & 0.950 & 0.956 & 0.954 & 0.946 & 0.940 & 0.956 & 0.966 & 0.958 \\ & Length & 0.885 & 0.861 & 0.832 & 0.805 & 0.769 & 0.729 & 0.696 & 0.666 & 0.638 \\ Repro & CP & 0.978 & 0.958 & 0.966 & 0.972 & 0.974 & 0.974 & 0.966 & 0.976 & 0.964 \\ & Length & 1.016 & 0.987 & 0.954 & 0.923 & 0.891 & 0.858 & 0.830 & 0.818 & 0.801 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparisons of MH, Peto, and repro samples by mimicking the structure of Avandia dataset with true common odds ratio \(\theta^{(o)}\) being 1/1.8 to 1.8. \begin{table} \begin{tabular}{r r r r r r r r r r} \hline \hline & & \multicolumn{10}{c}{True odds ratio} \\ \cline{3-10} & & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline MH & CP & 0.958 & 0.956 & 0.952 & 0.952 & 0.946 & 0.934 & 0.944 & 0.950 \\ & Length & 0.681 & 0.643 & 0.637 & 0.623 & 0.616 & 0.609 & 0.605 & 0.599 \\ Peto & CP & 0.970 & 0.840 & 0.466 & 0.066 & 0 & 0 & 0 & 0 \\ & Length & 0.618 & 0.531 & 0.478 & 0.457 & - & - & - & - \\ Repro & CP & 0.974 & 0.976 & 0.976 & 0.974 & 0.956 & 0.960 & 0.978 & 0.966 \\ & Length & 0.792 & 0.745 & 0.736 & 0.724 & 0.715 & 0.705 & 0.701 & 0.695 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparisons of MH, Peto and repro samples by mimicking the structure of Avandia dataset with true common odds ratio \(\theta^{(o)}\) being 2 to 9. Figure 1: Illustration of the impact of zero-total-event studies on level-95% confidence intervals of common log odds ratio: analysis using two studies (removing zero-total-event studies) versus analysis using all five studies. The two datasets used are: (a) \((3/100,2/100)\), \((2/300,1/300)\), \((0/600,0/300)\), \((0/600,0/300)\), \((0/300,0/300)\); and (b) \((2/100,2/100)\), \((1/50,1/50)\), \((0/100,0/300),(0/100,0/300)\), \((0/100,0/300)\). Real Data Analysis Avandia dataset (Nissen and Wolski, 2007) includes data from \(K=48\) independent clinical trials to examine its effect on cardiovascular morbidity and mortality. In fact, Avandia is the trade name of drug rosiglitzone, which is widely used for treatment of type 2 diabetes mellitus. Among the 48 trials, there are 46 small trials with sample size at most 1172 in one arm and 2 large trials with sample sizes at least 1456 in one group. 
The two large trials are called Diabetes Reduction Assessment with Ramipril and Rosiglitazone Medication (DREAM) and A Diabetes Outcome Prevention Trial (ADOPT), respectively. In this dataset, the events of myocardial infarction and cardiovascular death have very low incidence rates. Thus, many trials do not contain any events of interest or contain only very few, especially for death from cardiovascular causes. Specifically, there exist many trials with zero events in one of the two arms, as well as zero-total-event trials. Among the 48 trials, 10 report no events for myocardial infarction and 25 report no events for cardiovascular death in both the treatment and control groups. The entire dataset can be found in Table I of the supplementary material of Tian et al. (2009). It is an extremely non-trivial and challenging task to effectively incorporate these studies in a meta-analysis (Finkelstein and Levin, 2012; Xie et al., 2018). In Xie et al. (2018), the authors made a definite conclusion that zero-total-event trials have information about the common odds ratio. Here, we apply our newly developed finite sample method along with the widely used Mantel-Haenszel and Peto methods to construct confidence intervals for the common odds ratio. The 95% confidence intervals for common odds ratios of myocardial infarction and cardiovascular death obtained by these three approaches, denoted as MH, Peto and Repro-1, respectively, are listed in Table 5. For the endpoint of cardiovascular death, the three methods output similar results. All three confidence intervals include the value 1. Thus, all of them suggest that the drug rosiglitazone has no statistically significant effect on mortality from cardiovascular death. Our repro samples method, however, obtains a smaller lower end of the confidence interval and shows stronger evidence that the drug rosiglitazone has no statistically significant effect on mortality from cardiovascular death. As for myocardial infarction, the results are quite different. The confidence intervals of the conventional Mantel-Haenszel and Peto methods exclude the value 1, while that using the repro samples method includes it. According to the Mantel-Haenszel and Peto methods, the drug rosiglitazone has a statistically significant effect. However, using the repro samples method, we could not conclude that the drug rosiglitazone has a statistically significant effect on myocardial infarction. Finally, we examine the impact of zero-total-event studies on the confidence intervals of the common log odds ratio in the Avandia dataset. Specifically, we re-run our repro samples algorithm by deleting the zero-total-event studies, and compare the confidence intervals obtained without including zero-total-event studies, denoted by Repro-2 in Table 5, with those previously obtained including these zero-total-event studies. \begin{table} \begin{tabular}{r c c} \hline \hline & MI & CVD \\ \hline MH & (1.029,1.978) & (0.984,2.930) \\ Peto & (1.031,1.979) & (0.980,2.744) \\ Repro-1 & (0.982,2.118) & (0.962,3.283) \\ Repro-2 & (0.962,2.165) & (0.846,3.802) \\ \hline \hline \end{tabular} * MI is for myocardial infarction; CVD is for cardiovascular death. Repro-1 uses data from all the 48 trials; Repro-2 excludes zero-total-event trials. \end{table} Table 5: Analysis of Avandia dataset: 95% confidence intervals of common odds ratio For the event of myocardial infarction, there are 10 zero-total-event studies. For the event of cardiovascular death, there are 25 zero-total-event studies.
From Table 5, we can see that intervals with and without including the zero-total-event studies are quite different. The intervals with zero-total-event studies are narrower than those without including zero-total-event studies. This shows that utilizing zero-total-event studies in meta-analysis is important and beneficial for the inference of the common log odds ratio in general. It reaffirms our conclusion that zero-total-event studies have information and impact the inference of the common odds ratio. ## 5 Discussion Questions on whether a zero-total-event study contains any information for the common odds ratio in meta-analysis of \(2\times 2\) tables and how to incorporate such studies when making inference for the common odds ratio have long been debated and remain open in statistics (cf., Finkelstein and Levin, 2012; Xie et al., 2018). The difficulty is due to the lack of a mathematical definition for 0/0 and also because most meta-analysis approaches rely on normality and large sample theories, neither of which applies to zero-total-event studies. In this article, by using the recently developed repro samples inferential framework, we are able to develop a finite-sample approach to make inference for the common odds ratio. The developed inference procedure has guaranteed theoretical performance and is validated in numerical studies. It provides an affirmative answer to the set of open research questions. The repro samples framework is developed based on the ideas of inversion, matching of artificial and observed samples, and simplifying uncertainty quantification through a Borel set concerning \(U\). It does not need any regularity conditions, nor does it rely on any large sample theories. It can provide finite sample inference with few assumptions, and is an ideal tool to address some difficult and complicated inference problems. In this article, we have used it to develop a novel approach to answer the unresolved questions concerning the use of zero-total-event studies in meta-analysis. The repro samples method can also be used to develop new finite-sample procedures in other meta-analysis settings; for instance, developing a new finite-sample approach to perform meta-analysis and combine information in a random-effects model with only a few studies, a setting studied in Michael et al. (2019). Furthermore, the repro samples method is also very effective for other irregular inference problems that involve discrete or non-numerical parameters. For instance, Xie and Wang (2022) and Wang et al. (2022) provided solutions for two highly nontrivial problems in statistics: a) how to quantify the uncertainty in the estimation of the unknown number of components and make inference for the associated parameters in a Gaussian mixture; b) how to quantify the uncertainty in model estimation and construct confidence sets for the unknown true model, the regression coefficients, or both the true model and the coefficients jointly in high dimensional regression models. We anticipate these developments will stimulate further developments to address more complicated and non-trivial inference problems in statistics and data science where a solution is currently unavailable or cannot be easily obtained. ## 6 Acknowledgment Xie's research is supported in part by NSF grants DMS2015373, DMS2027855, DMS2311064 and DMS-2319260. Chen's research is supported partly by Humanity and Social Science Research Foundation of Ministry of Education (MOE) of China (21YJA910002).
2308.11525
Linear stability of Poiseuille flow over a steady spanwise Stokes layer
The temporal linear stability of plane Poiseuille flow modified by spanwise forcing applied at the walls is considered. The forcing consists of a stationary streamwise distribution of spanwise velocity that generates a steady transversal Stokes layer, known to reduce skin-friction drag in a turbulent flow with little energetic cost. A large numerical study is carried out, where the effects of both the physical and the discretization parameters are thoroughly explored, for three representative subcritical values of the Reynolds number Re. Results show that the spanwise Stokes layer significantly affects the linear stability of the system. For example, at Re=2000 the wall forcing is found to more than double the negative real part of the least-stable eigenvalue, and to decrease by nearly a factor of four the maximum transient growth of perturbation energy. These observations are Re-dependent and further improve at higher $Re$. Comments on the physical implications of the obtained results are provided, suggesting that spanwise forcing might be effective to obtain at the same time a delayed transition to turbulence and a reduced turbulent friction.
Daniele Massaro, Fulvio Martinelli, Peter J. Schmid, Maurizio Quadrio
2023-08-22T15:54:44Z
http://arxiv.org/abs/2308.11525v1
# Linear stability of Poiseuille flow over a steady spanwise Stokes layer ###### Abstract The temporal linear stability of plane Poiseuille flow modified by spanwise forcing applied at the walls is considered. The forcing consists of a stationary streamwise distribution of spanwise velocity that generates a steady transversal Stokes layer, known to reduce skin-friction drag in a turbulent flow with little energetic cost. A large numerical study is carried out, where the effects of both the physical and the discretization parameters are thoroughly explored, for three representative subcritical values of the Reynolds number \(Re\). Results show that the spanwise Stokes layer significantly affects the linear stability of the system. For example, at \(Re=2000\) the wall forcing is found to more than double the negative real part of the least-stable eigenvalue, and to decrease by nearly a factor of four the maximum transient growth of perturbation energy. These observations are \(Re\)-dependent and further improve at higher \(Re\). Comments on the physical implications of the obtained results are provided, suggesting that spanwise forcing might be effective to obtain at the same time a delayed transition to turbulence and a reduced turbulent friction. ## I Introduction Decreasing the aerodynamic drag is a formidable scientific and technological challenge in configurations dominated by a relative motion between a solid body and a surrounding fluid. In particular, the skin-friction drag -- in the laminar or in the turbulent regime -- often represents a major portion of the total drag in the air transport sector, and can be of paramount importance in naval and submarine transport. Skin friction can be reduced either by keeping the flow laminar as long as possible, thus exploiting the intrinsically lower friction levels typical of the laminar regime, or by accepting the transition to turbulence, and reducing the level of turbulent friction below its natural level. A viable flow control approach that achieves both objectives would be very desirable, as it would first take advantage of laminarity as long as possible, and then continue to reduce turbulent friction. Among the several active techniques for the reduction of turbulent drag, we are interested in spanwise-forcing techniques, and the present work in particular considers streamwise-traveling waves of spanwise wall-velocity as introduced by Quadrio _et al._ (2009) [1]. Comprehensive reviews on this technique for turbulent drag reduction are available [2; 3; 4]. The streamwise-traveling waves, which include as a special case the spanwise-oscillating wall [5] but achieve far higher energetic efficiency, abate the levels of turbulent skin-friction drag with interesting energetic effectiveness, with one energy unit spent on the control saving up to 30 units of pumping energy. Furthermore, recent evidence has shown that spanwise forcing can bring indirect benefits in terms of the reduction of pressure drag [6]; in addition, it may be highly beneficial by interacting with the shock wave over an aerofoil in transonic flow [7] and reducing significantly the aerodynamic drag of the entire airplane with negligible energy expenditure. Within this context, the present work is motivated by the following simple question: Can spanwise forcing favorably affect transition to turbulence? 
Since the forcing is known to weaken near-wall streaks in a turbulent flow [8], a similar effect on laminar streaks might alter their growth, thus causing a delay of, or perhaps preventing altogether, transition to turbulence. It must be kept in mind that, at the moment, satisfactory actuators for implementing traveling waves in a real-world application are still lacking, even though some interesting developments exist, including mechanical movement of the wall [9; 10], electroactive polymers [11; 12] and the use of Kagome lattices [13]. However, the prospect of instrumenting e.g. an airplane wing with one actuator that, in the wing fore part, would delay transition while, in the aft, would decrease turbulent skin-friction drag is certainly appealing, and motivates further research efforts into this direction. This work is not the first to investigate the stability properties of a wall-bounded flow modified by spanwise forcing, and the available body of literature provides important guidance. Most of the current knowledge concerns spatially uniform wall oscillations. Jovanovic (2008) [14] demonstrated the capability of properly designed wall oscillations to reduce receptivity of the linearized Navier-Stokes equations to small stochastic disturbances in laminar Poiseuille flow.
2306.07833
Goppa code and quantum stabilizer codes from plane curves given by separated polynomials
In this paper, we examine algebraic geometric (AG) codes associated with curves generated by separated polynomials, and we create AG codes and quantum stabilizer codes from these curves by varying their parameters. Our research involves a thorough examination of the curves' algebraic features as well as the creation of Goppa codes over them. Extending these findings, we create quantum stabilizer codes, revealing that quantum codes built from Hermitian self-orthogonal AG codes have acceptable parameters, improving the reliability and performance of communication networks.
Vahid Nourozi, Farzaneh Ghanbari
2023-06-13T15:11:03Z
http://arxiv.org/abs/2306.07833v2
# Goppa code and quantum stabilizer codes from plane curves given by separated polynomials ###### Abstract In this paper, we investigate Algebraic-Geometric codes associated with the curves given by separated polynomials. Also, we construct \(AG\) codes and quantum stabilizer codes from these curves and modify their parameters. Keywords: Goppa code, Finite fields, algebraic geometry codes, quantum stabilizer codes, plane curves given by separated polynomials. ## 1 Introduction After Goppa's construction [6], ideas from algebraic geometry proved influential in coding theory. He came up with the brilliant concept of connecting a code \(C\) to a (projective, geometrically irreducible, nonsingular, algebraic) curve \(\mathcal{X}\) defined over \(\mathbb{F}_{q}\), a finite field with \(q\) elements. This code involves two divisors \(D\) and \(G\), with one of them, \(D\), being the sum of \(n\) distinct \(\mathbb{F}_{q}\)-rational points of \(\mathcal{X}\). It turns out that the minimum distance \(d\) of \(C\) satisfies \[d\geq n-\deg(G).\] This is one of the key aspects of Goppa's construction. In general, there is no lower bound known on the minimum distance of an arbitrary code. This bound is relevant only if \(n\) is large enough. Since \(n\) is upper bounded by the Hasse-Weil upper bound \[1+q+2g\sqrt{q},\] where \(g\) is the genus of the underlying curve, it is of significant interest to study curves having numerous rational points; see [24] and [5]. \(AG\) codes from the Hermitian curve have been extended in numerous publications; see [4; 9; 10; 11; 22; 23; 25]. Also, a family of Hermitian self-orthogonal classical codes arising from algebraic geometry codes was developed in [12; 13; 14]. Section 2 introduces basic notions and preliminary results on \(AG\) codes and the plane curves given by separated polynomials. Sections 3 and 4 contain the Goppa code and the quantum Goppa code on the curve \(\mathcal{X}\), respectively. ## 2 Preliminary ### Curves given by separated polynomials In this paper \(\mathcal{X}\) is a plane curve defined over the algebraic closure \(K\) of a prime finite field \(\mathbb{F}_{p}\) by an equation \[A(Y)=B(X),\] satisfying the following conditions: 1. \(\deg(\mathcal{X})\geq 4\); 2. \(A(Y)=a_{n}Y^{p^{n}}+a_{n-1}Y^{p^{n-1}}+\cdots+a_{0}Y,\quad a_{j}\in K,\quad a_{0},a_{n}\neq 0;\) 3. \(B(X)=b_{m}X^{m}+b_{m-1}X^{m-1}+\cdots+b_{1}X+b_{0},\quad b_{j}\in K,\quad b_{m}\neq 0\); 4. \(m\neq 0\pmod{p}\); 5. \(n\geq 1,\quad m\geq 2\). Note that \((2)\) occurs if and only if \(A(Y+a)=A(Y)+A(a)\) for every \(a\in K\), that is, the polynomial \(A(Y)\) is additive. The basic properties of \(\mathcal{X}\) are collected in the following lemmas; see [(7), Section 12.1]. **Lemma 2.1**.: _The curve \(\mathcal{X}\) is an irreducible plane curve with at most one singular point._ 1. _If_ \(\mid m-p^{n}\mid=1\)_, then_ \(\mathcal{X}\) _is non-singular._ 2. \(\mathcal{X}\) _has genus_ \(g=\frac{(p^{n}-1)(m-1)}{2}\)_._ We suppose that \(q=p^{n}\) and that \(\mathcal{X}\) is a curve over \(\mathbb{F}_{q}\) with the above conditions, defined by the equation \[y^{q}+y=x^{m},\] where \(q-m=1\). ### Algebraic Geometry Codes In this paper we let \(\mathbb{F}_{q}(\mathcal{X})\) (resp. \(\operatorname{Div}_{q}(\mathcal{X})\)) denote the field of \(\mathbb{F}_{q}\)-rational functions (resp. the \(\mathbb{F}_{q}\) divisors) of \(\mathcal{X}\). If \(f\in\mathbb{F}_{q}(\mathcal{X})\setminus\{0\}\), \(\operatorname{div}(f)\) denotes the divisor associated with \(f\).
For \(A\in\operatorname{Div}_{q}(\mathcal{X})\), \(\mathcal{L}(A)\) denotes the Riemann-Roch \(\mathbb{F}_{q}\)-vector space associated with \(A\), i.e., \[\mathcal{L}(A)=\{f\in\mathbb{F}_{q}(\mathcal{X})\setminus\{0\}:A+\operatorname{div}(f)\succeq 0\}\cup\{0\},\] and its dimension over \(\mathbb{F}_{q}\) is denoted by \(\ell(A)\). Let \(P_{1},\cdots,P_{n}\) be pairwise distinct rational points of \(\mathcal{X}\) of degree \(1\) and set \(D=P_{1}+\cdots+P_{n}\). Choose a divisor \(G\) on \(\mathcal{X}\) such that \(\operatorname{supp}(G)\cap\operatorname{supp}(D)=\emptyset\). **Definition 2.2**.: The algebraic geometry code (or \(AG\) code) \(C_{\mathcal{L}}(D,G)\) associated with the divisors \(D\) and \(G\) is defined as \[C_{\mathcal{L}}(D,G):=\{(x(P_{1}),\cdots,x(P_{n}))\mid x\in\mathcal{L}(G)\}\subseteq\mathbb{F}_{q}^{n}.\] The minimum distance \(d\) satisfies \(d\geq d^{\star}=n-\deg(G)\), where \(d^{\star}\) is called the Goppa designed minimum distance of \(C_{\mathcal{L}}(D,G)\). If \(\deg(G)>2g-2\), then by the Riemann-Roch Theorem \(k=\deg(G)-g+1\); see [(8), Th. 2.65]. The dual code \(C^{\perp}(D,G)\) is an \(AG\) code with dimension \(k^{\perp}=n-k\) and minimum distance \(d^{\perp}\geq\deg G-2g+2\). Let \(H(P)\) be the Weierstrass semigroup associated with \(P\), that is \[H(P):=\{n\in\mathbb{N}_{0}\mid\exists f\in\mathbb{F}_{q}(\mathcal{X}),\operatorname{div}_{\infty}(f)=nP\}=\{\rho_{0}=0<\rho_{1}<\rho_{2}<\cdots\}.\] Recall that the Hermitian inner product for two vectors \(a=(a_{1},\cdots,a_{n})\), \(b=(b_{1},\cdots,b_{n})\) in \(\mathbb{F}_{q}^{n}\) is defined by \(<a,b>_{H}:=\sum_{i=1}^{n}a_{i}b_{i}^{q}\). For a linear code \(C\) over \(\mathbb{F}_{q}^{n}\), the Hermitian dual of \(C\) is determined by \[C^{\perp H}:=\{v\in\mathbb{F}_{q}^{n}:<v,c>_{H}=0\quad\forall c\in C\}.\] Thus, \(C\) is Hermitian self-orthogonal if \(C\subseteq C^{\perp H}\). ## 3 Goppa Code Over Curve \(\mathcal{X}\) Let \(r\in\mathbb{N}\). With the notation of Section 2, we consider the sets \[\mathcal{G}:=\mathcal{X}(\mathbb{F}_{q}),\qquad\mathcal{D}:=\mathcal{X}(\mathbb{F}_{q^{2}})\setminus\mathcal{G}.\] Note that \(\mathcal{G}\) is the intersection of \(\mathcal{X}\) with the plane \(t=0\). Fix the \(\mathbb{F}_{q^{2}}\) divisors \[G:=\sum_{P\in\mathcal{G}}rP\quad\text{ and }\quad D:=\sum_{P\in\mathcal{D}}P,\] where \(\deg(G)=r(q^{2}-q+1)\) and \(\deg(D)=q^{3}\). Let \(C\) be the algebraic geometry code \(C_{\mathcal{L}}(D,G)\) over \(\mathbb{F}_{q^{2}}\), with length \(n=q^{3}\), minimum distance \(d\) and dimension \(k\). The designed minimum distance of \(C\) is \[d^{*}=n-\deg(G)=q^{3}-r(q^{2}-q+1).\] From H. Stichtenoth [21] we have the following remark. _Remark 3.1_.: Let \(\mathcal{X}\) be the curve considered in this paper, and let \(D\) and \(G\) be divisors as above. Then \[C^{\perp}(D,G)=C(D,D-G+K),\] where \(K=div(\eta)\in Div_{q}(\mathcal{X})\) is a canonical divisor defined by a differential \(\eta\) such that \(\nu_{P_{i}}(\eta)=-1\) and \(\text{res}_{P_{i}}(\eta)=1\) for each \(i=1,2,\cdots,n\). **Lemma 3.2**.: _A basis of \(\mathcal{L}(G)\), for \(r\geq 0\), is given by_ \[\{x^{i}y^{j}\ |\ iq+j(q-1)\leq r,i\geq 0,0\leq j\leq q-1\}.\] Proof.: We know that \((x)_{\infty}=qP_{\infty}\) and \((y)_{\infty}=(q-1)P_{\infty}\); therefore, the above set is contained in \(\mathcal{L}(G)\). In addition, by the restriction on \(j\), it is linearly independent over \(\mathbb{F}_{q^{2}}\). Now the Weierstrass semigroup \(H(P_{\infty})\) is generated by \(q\) and \(q-1\) at \(P_{\infty}\).
Suppose that \(\mathcal{L}(G)=\mathcal{L}(\rho_{\ell}P_{\infty})\) where \(\rho_{\ell}\leq r\leq\rho_{\ell+1}\) and \(H(P_{\infty})=\{\rho_{0}=0<\rho_{1}<\cdots\}\). Then \[\dim_{\mathbb{F}_{q}}(\mathcal{L}(G))=\sharp\{(i,j)\,:\,iq+j(q-1)\leq r,\ i\geq 0,\ 0\leq j\leq q-1\}. \tag{3.1}\]

In the following lemma, consider the code \(C_{r}:=C_{\mathcal{L}}(D,G)\) and \(k_{r}:=\dim_{\mathbb{F}_{q^{2}}}(C_{r})\). We also write \((x)\) for the divisor \(\operatorname{div}(x)\).

**Lemma 3.3**.: _We have_ \[C_{r}^{\perp}=C_{q^{3}+q^{2}-3q-r}.\] _Hence \(C_{r}\) is self-orthogonal if \(2r\leq q^{3}+q^{2}-3q\)._

Proof.: We have \(C_{r}^{\perp}=C(D,D-G+W)\), where \(W\) is a canonical divisor as in Remark 3.1. In this case we compute \(\eta\) explicitly. Let \(t:=x^{q^{2}}-x=\prod_{a\in\mathbb{F}_{q^{2}}}(x-a)\) and \(\eta:=dt/t\). Then \[(x-a)=\sum_{b^{q}+b=a^{q-1}}P_{a,b}-qP_{\infty},\] and thus \[(t)=D-q^{3}P_{\infty}.\] In addition, \((dt)=(dx)=(2g-2)P_{\infty}=(q^{2}-3q)P_{\infty}\). It follows that \[\nu_{P}(\eta)=-1\quad\text{ and }\quad\operatorname{res}_{P}\eta=1\quad\text{ for }\quad P\in\operatorname{Supp}(D).\] Since \(D-G+(\eta)=D-G-D+q^{3}P_{\infty}+(q^{2}-3q)P_{\infty}=(q^{3}+q^{2}-3q-r)P_{\infty}\), the statement follows.

We set \(T(r):=\sharp\{(i,j)\,:\,iq+j(q-1)\leq r,\ i\geq 0,\ 0\leq j\leq q-1\}\).

**Proposition 3.4**.:
1. _If_ \(r<0\)_, then_ \(k_{r}=0\)_;_
2. _If_ \(0\leq r\leq q^{2}-3q\)_, then_ \(k_{r}=T(r)\)_;_
3. _If_ \(q^{2}-3q<r<q^{3}\)_, then_ \(k_{r}=r(q^{2}-q+1)-\frac{(q-1)(q-2)}{2}+1\)_;_
4. _If_ \(q^{3}\leq r\leq q^{3}+q^{2}-3q\)_, then_ \(k_{r}=q^{3}-T(q^{3}+q^{2}-3q-r)\)_;_
5. _If_ \(r>q^{3}+q^{2}-3q\)_, then_ \(k_{r}=q^{3}\)_._

Proof.: 1. If \(r<0\), it is trivial that \(k_{r}=0\). 2. If \(0\leq r\leq q^{2}-3q\), the claim follows from Lemma 3.2. 3. By the Riemann-Roch Theorem, \(k_{r}=\deg(G)+1-g\), since \(n>\deg(G)>2g-2\). 4. Let \(r^{\prime}:=q^{3}+q^{2}-3q-r\), which satisfies \(0\leq r^{\prime}\leq q^{2}-3q\). Then from Lemma 3.3, \(k_{r}=q^{3}-\dim_{\mathbb{F}_{q^{2}}}(C_{r^{\prime}})\) and the statement follows. 5. If \(r>q^{3}+q^{2}-3q\), then \(C_{r}^{\perp}=\{0\}\) and so \(k_{r}=\dim_{\mathbb{F}_{q^{2}}}(C_{r})=n\).

**Proposition 3.5**.: \(C\) _is monomially equivalent to the one-point code \(C(D,r(q^{2}-q+1)P_{\infty})\)._

Proof.: If \(G^{\prime}=r(q^{2}-q+1)P_{\infty}\), then \(G=G^{\prime}+(t^{r})\). The claim then follows as in the proof of [15, Proposition 3.2].

**Theorem 3.6**.: _For \(r\leq q^{2}+q-3\), \(C_{r}\) is Hermitian self-orthogonal._

Proof.: If \(r\leq q^{2}+q-3\), then we have \(rq\leq q^{3}+q^{2}-3q-2-r\). Hence, the result follows from Lemma 3.3.

## 4 Quantum Stabilizer Code Over Curve \(\mathcal{X}\)

In this section we use the Hermitian self-orthogonality of the classical \(AG\) codes \(C_{r}\) obtained in the previous section to construct quantum stabilizer codes, and we then analyze these codes. We need the following lemma on quantum codes obtained from Hermitian self-orthogonal classical codes.

**Lemma 4.1**.: _[1] There is a \(q\)-ary \([[n,n-2k,d^{\perp}]]\) quantum code whenever there exists a \(q\)-ary classical Hermitian self-orthogonal \([n,k]\) linear code with dual distance \(d^{\perp}\)._

We can now derive our main result by combining Lemma 4.1 with the classical Hermitian self-orthogonal codes constructed above. We then provide several examples showing that the quantum codes obtained from our theorem are indeed good.
**Theorem 4.2**.: _Let \(q=p^{s}\) be a prime power with \(s\geq 1\). Then, for the curve \(\mathcal{X}\), there is a \(q\)-ary \([[q^{3},q^{3}+q^{2}-3q-2r,r+2q-q^{2}]]_{q}\) quantum code for every integer \(r\) with \(q^{2}-2\leq r\leq q^{2}+q-3\)._

Proof.: The proof follows directly from Theorem 3.6 and Lemma 4.1.

**Example 4.3**.: For \(q=3\) and \(7\leq r\leq 9\), Theorem 4.2 produces \(3\)-ary \([[27,27-2r,r-3]]_{3}\) quantum codes. In particular, we obtain the quantum codes \([[27,13,4]]_{3}\), \([[27,11,5]]_{3}\) and \([[27,9,6]]_{3}\), which have good parameters. For comparison, a quantum code \([[27,13,6]]_{3}\) is listed in Table [2]; hence, for the same length and dimension, our quantum code has a smaller minimum distance.

**Example 4.4**.: For \(q=5\) and \(18\leq r\leq 22\), Theorem 4.2 produces \(5\)-ary \([[125,135-2r,r-15]]_{5}\) quantum codes. In particular, we obtain the quantum codes \([[125,99,3]]_{5}\), \([[125,97,4]]_{5}\), \([[125,95,5]]_{5}\), \([[125,93,6]]_{5}\) and \([[125,91,7]]_{5}\), which have good parameters. For comparison, the quantum codes \([[125,99,10]]_{5}\) and \([[125,93,12]]_{5}\) are listed in Table [2]; as a result, for the same length and dimension, our quantum codes have a smaller minimum distance.

## Conclusion

In this work, we investigated Goppa codes on plane curves given by separated polynomials. We introduced and analyzed quantum stabilizer codes over these curves and showed that the resulting quantum codes have good parameters.

_Acknowledgements._ This paper was written while Vahid Nourozi was visiting Unicamp (Universidade Estadual de Campinas) supported by TWAS/CNPq (Brazil) with fellowship number \(314966/2018-8\).
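The parameters appearing in Theorem 4.2 and Example 4.3 are easy to check numerically. The following short Python sketch (added here, not part of the original paper) simply evaluates the stated formulas \(n=q^{3}\), \(k=q^{3}+q^{2}-3q-2r\) and \(d=r+2q-q^{2}\) for \(q=3\) and reproduces the three codes listed in Example 4.3.

```python
def quantum_parameters(q, r):
    """[[n, k, d]]_q parameters as stated in Theorem 4.2 (formulas taken verbatim)."""
    n = q ** 3
    k = q ** 3 + q ** 2 - 3 * q - 2 * r
    d = r + 2 * q - q ** 2
    return n, k, d

q = 3
for r in range(q ** 2 - 2, q ** 2 + q - 2):   # r = 7, 8, 9 for q = 3
    n, k, d = quantum_parameters(q, r)
    print(f"r = {r}: [[{n}, {k}, {d}]]_{q}")
# expected: [[27, 13, 4]]_3, [[27, 11, 5]]_3, [[27, 9, 6]]_3, as in Example 4.3
```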
2303.08306
Graph embeddings with no Hamiltonian extensions
We show that extending an embedding of a graph $\Gamma$ in a surface to an embedding of a Hamiltonian supergraph can be blocked by certain planar subgraphs but, for some subdivisions of $\Gamma$, Hamiltonian extensions must exist.
Paul C. Kainen, Shannon Overbay
2023-03-15T01:37:20Z
http://arxiv.org/abs/2303.08306v1
# Graph embeddings with no Hamiltonian extensions + ###### Abstract _We show that extending an embedding of a graph \(\Gamma\) in a surface to an embedding of a Hamiltonian supergraph can be blocked by certain planar subgraphs but, for some subdivisions of \(\Gamma\), Hamiltonian extensions must exist._ **Key Phrases**: _extending embeddings, Hamiltonian cycle in embedded graph._ ## 1 Introduction The objects studied in this paper are 2-cell embeddings of graphs in (closed) surfaces. We ask: _When can such an embedding be extended to an embedding of a Hamiltonian graph, containing the original graph as a subgraph?_ The embedding is into the same surface so that the supergraph is obtained as a subdivision of some of the regions of the original embedding but the edges of the original graph are not subdivided. See Fig. 1 below. This problem is a variant of the differently specified question asked in [8], "_When is a graph, embeddable on a surface S, a subgraph of a Hamiltonian graph which is also embeddable on S?_ McKenzie and Overbay showed [8] that the bipartite complete graphs, with genus \(\gamma\leq 1\) which are _not_ Hamiltonian, are subgraphs of genus-\(\gamma\) graphs that _are_ Hamiltonian. The formulation here emphasizes the embedding itself, rather than the possibility of being embedded. The idea of extending graph invariants to graph embeddings goes back (at least) to [3, 4, 5, 6, 9]. Merely being non-Hamiltonian isn't enough to prevent a Hamiltonian extension. For instance, the Petersen graph has an embedding in the torus, and one can add three edges to the embedding to make the enlarged graph Hamiltonian where each added edge occurs within a region of the original embedding. _Which embeddings ensure that no such Hamiltonian extension can be found?_ We obtain a large family of non-Hamiltonian-extendable embeddings using an idea of Klee (see Malkevitch [7]) and conjecture that there are no other such non-Hamiltonian-extendable embeddings. However, if the edges of the original graph can be subdivided before trying to extend it, then we show that every graph embedding has such a _topological_ Hamiltonian extension. The paper proceeds as follows: Section 2 has definitions; in Section 3 we build non-Hamiltonian-extendable graph embeddings. Section 4 proves that weakening the condition of extendability to allow subdivision of edges of the original graph makes it possible to always find a Hamiltonian extension. ## 2 Definitions A **2-cell embedding**\(i\) of a finite graph \(\Gamma\) in a surface \(S\) is a continuous embedding \(i:\Gamma\to S\) such that \(S\setminus i(\Gamma)\) is a disjoint union of open 2-disks, the **regions** (of \(i\)). If \(G\) is some graph which contains \(\Gamma\) as a subgraph and \(j:G\to S\) is a 2-cell embedding, then we say that **j extends i** if \(j|_{\Gamma}=i\). We call a 2-cell embedding \(i\) of \(\Gamma\) in \(S\)**Hamiltonian extendable** if \(i\) can be extended to an embedding of a Hamiltonian supergraph \(G\) in \(S\). Otherwise, \(i\) is **non-Hamiltonian-extendable**. **Figure 1**: _An extension of an embedding_ A path or cycle is **oriented** if its edges are assigned a consistent direction. If \(P\) is an oriented path, let \(P^{o}\) denote \(P\)_minus its terminal point_. A subdivision of an edge is a path whose endpoints agree with the endpoints of the edge. A **subdivsion** of a graph is a graph obtained by subdividing some or all of the edges. Two graphs are homeomorphic iff they have isomorphic subdivisions. 
An embedding \(i:\Gamma\to S\) will be said to have a **topological extension** if there exists a subdivision \(\Gamma^{\prime}\) of \(\Gamma\) and an extension \(j:G\to S\) of \(i^{\prime}\), where \(i^{\prime}\) is the embedding \(\Gamma^{\prime}\to S\) induced by \(i\). A 2-cell embedding \(i:\Gamma\to S\) is of **Klee type** if the number \(r\) of regions exceeds the number \(p\) of vertices; \(i\) is of **local** Klee type if there exists a cycle \(C\) contained in \(\Gamma\) such that (i) \(C\) separates \(\Gamma\), (ii) \(i(C)\) separates \(S\) (into _inside_ and _outside_), and (iii) if \(r_{C}\) is the number of regions of \(i(\Gamma)\) inside \(C\) and \(p_{C}\) is the number of vertices of \(\Gamma\) inside or on \(C\), then \(r_{C}\geq p_{c}\). See Fig. 2. Labeling inside/outside is arbitrary and both parts of \(S\setminus C\) could be nonplanar. ## 3 Graph embeddings of Klee type Extending a 2-cell embedding of Klee or local Klee type to include points in the interiors of too many regions must produce a non-Hamiltonian-extendable graph. We conjecture that these obstacles are the only way to produce such non-Hamiltonian-extendable graphs. **Theorem 1**.: _(a) Let \(i:\Gamma\to S\) be an embedding of Klee type with \(r>p\). Then, for any extension \(j:G\to S\), \(G\) is not Hamiltonian provided \(G\) contains vertices \(w_{1},\ldots,w_{s}\) inside distinct regions of \(i\), \(R_{1},\ldots,R_{s}\), for \(r\geq s\geq p+1\). (b) Let \(i:\Gamma\to S\) be an embedding of local Klee type with \(r_{C}\geq p_{C}\). Then, for any extension \(j:G\to S\), \(G\) is not Hamiltonian provided \(G\) contains vertices \(w_{1},\ldots,w_{s}\) inside distinct regions of \(i\), \(R_{1},\ldots,R_{s}\), inside \(i(C)\) for \(r_{C}\geq s\geq p_{C}\)._ Proof.: We argue by contradiction. Suppose there is an extension \(j:G\to S\) of \(i\) and let \(Z\) be any oriented cycle contained in \(G\) which includes all \(s\) points. By construction, between any two consecutive (with respect to \(Z\)) points, say, \(w_{k},w_{k+1}\) (\(k=1,\ldots,s\), addition mod \(s\)), there is a unique vertex \(v_{k}\) in the boundary of the region \(R_{k}\) of \(i\) containing \(w_{k}\) such that \(v_{k}\) is in \(Z\) and the subpath \(P_{k}\) of \(Z\) from \(w_{k}\) to \(v_{k}\) contains no other point in \(W:=\{w_{1},\ldots,w_{s}\}\) and no other point in the boundary of \(R_{k}\). In case (a), \(Z\) contains at least \(s\) points in \(V\Gamma\), which contradicts the assumption \(s\geq p+1\). In case (b), if \(r_{C}>p_{C}\), then as in (a), no such cycle \(Z\) can exist, while if \(r_{C}=p_{C}\), the only possibility is that \(Z\) includes all vertices on \(C\) (and some inside it), so \(Z\) can't include the vertices of \(\Gamma\) outside \(C\). Using the genus formula [10, 2] for cubes, \(\gamma(Q_{d})=1+(d-4)2^{d-3}\), easy calculation shows that for the \(d\)-cube, the number of regions in the genus embedding is \(r:=r(d):=d\,2^{d-2}>2^{d}=p\) for \(d\geq 5\). Indeed, by Euler's formula, \[2^{d}-d2^{d-1}+r(d)=2-2(d-4)2^{d-3}-2.\] Solving for \(r(d)\) gives the result. So cubes of dimension \(\geq 5\) are of Klee type. 
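As a quick numerical illustration (a sketch added here, not part of the paper), the region count \(r(d)=d\,2^{d-2}\) and the comparison with \(p=2^{d}\) can be checked directly from Euler's formula and the cited genus formula for \(Q_{d}\):

```python
def cube_region_count(d):
    """Regions of the genus embedding of the d-cube Q_d, from p - e + r = 2 - 2*genus."""
    p = 2 ** d                            # vertices of Q_d
    e = d * 2 ** (d - 1)                  # edges of Q_d
    genus = 1 + (d - 4) * 2 ** (d - 3)    # genus formula for Q_d cited in the text
    return 2 - 2 * genus - p + e

for d in range(4, 10):
    r = cube_region_count(d)
    assert r == d * 2 ** (d - 2)          # closed form derived in the text
    print(d, r, 2 ** d, r > 2 ** d)       # Klee type (r > p) holds for d >= 5
```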
Using the construction in case (a) above, one obtains for the 5-cube, by adding one new vertex \(w\) in the middle of \(s\) of the square faces, \(33\leq s\leq 40\), and using any of the 11 ways to connect each \(w_{k}\) to \(\geq 2\) of the 4 vertices on the boundary of the face which contains it, the number of distinct 2-connected non-Hamiltonian graphs with embedding in \(S_{5}\) extending that of the 5-cube is \[N=\sum_{k=33}^{40}\binom{40}{k}11^{k}\approx 1.45\times 10^{43}.\] The **stellation** of a triangular region puts one new vertex into the interior and joins it to all three corners. Iterating this operation on the resulting three triangles gives a local Klee type graph embedding with \(C=K_{3}\), where \(r_{C}=9\) and \(p_{C}=7\). Hence, stellating all 9 of the regions produces a non-Hamiltonian-embeddable graph, no matter where it occurs in some potentially large graph embedding. Here the inside region is what was inside the triangle. See Fig. 2. ## 4 Topological extensions The planar case of Theorem 2 below is (implicitly) in [11, p. 32]. **Theorem 2**.: _Any embedding \(\Gamma\subset S\) has a Hamiltonian topological extension._ Proof.: Let \(i:\Gamma\to S\) be an embedding. Consider the \(p=|V\Gamma|\) points \(i(v)\in S\) for \(v\in V\Gamma\). As \(S\) is a closed surface, it cannot be disconnected by the removal of any path (or any other contractible subset). Hence, for any enumeration of the points \(i(v)\), say \(i(v_{1}),\ldots,i(v_{p})\), there is a topological path \(P_{1}\) in \(S\) from \(v_{1}\) to \(v_{2}\), then a path \(P_{2}\) in \(S\setminus P_{1}^{o}\) from \(v_{2}\) to \(v_{3}\), and so on, until one chooses a path \(P_{p}\) in \(S\setminus\bigcup_{k=1}^{p-1}P_{k}^{o}\) from \(i(v_{p})\) to \(i(v_{1})\). The union of the paths \(P_{1},\ldots,P_{p}\) Figure 2: Local Klee type embedding on right; triangle C on left is a non-self-intersecting closed curve \(\mathcal{C}\) with \(i(V\Gamma)\subset\mathcal{C}\subset S\). Since \(S\) is triangulable, there is an arbitrarily small perturbation \(\mathcal{C}^{\prime}\) of \(C\setminus i(V\Gamma)\) so that \[\mathcal{C}^{\prime}\cap i(\Gamma)=i(V\Gamma)\cup Y,\] where \(Y\) is a finite set of points at which \(\mathcal{C}^{\prime}\) crosses interiors of edges of \(\Gamma\). Take the points in \(Y\) as subdivision vertices for the edges of \(\Gamma\), and let \(\Gamma^{\prime}\) be the resulting subdivision of \(\Gamma\). Define a graph \(G\) as the union of \(\Gamma^{\prime}\) and the new edges which result by subdividing \(\mathcal{C}^{\prime}\) using both the vertices of \(\Gamma^{\prime}\) and the subdivision points. The resulting copy of \(G\) in \(S\) extends the embedding of \(\Gamma^{\prime}\) and \(G\) has the subdivided \(\mathcal{C}^{\prime}\) as Hamiltonian cycle. \(\blacksquare\) We ask: What is the least number of subdivision points needed? An alternate means to find a Hamiltonian embedding extending a subdivision of some given embedding might be achievable using the "mesh surface" methods in Akleman et al. [1].
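Referring back to the count for the 5-cube above, the figure \(N\approx 1.45\times 10^{43}\) can be reproduced exactly with a few lines of Python (a sketch, assuming only the combinatorial description given in the text: choose \(k\) of the 40 square regions of the genus embedding to receive one new interior vertex, with \(33\leq k\leq 40\), and one of the 11 admissible attachments of that vertex per region).

```python
from math import comb

# 11 = C(4,2) + C(4,3) + C(4,4): the ways to join the new vertex to at least
# two of the four vertices on the boundary of its square region.
N = sum(comb(40, k) * 11 ** k for k in range(33, 41))
print(f"{N:.2e}")   # prints 1.45e+43, matching the approximate value quoted above
```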
2302.10660
A Quantum Algorithmic Approach to Multiconfigurational Valence Bond Theory: Insights from Interpretable Circuit Design
Efficient ways to prepare fermionic ground states on quantum computers are in high demand and different techniques have been developed over the last years. Despite having a vast set of methods, it is still unclear which method performs well for which system. In this work, we combine interpretable circuit designs with an effective basis approach in order to optimize a multiconfigurational valence bond wavefunction. Based on selected model systems, we show how this leads to explainable performance. We demonstrate that the developed methodology outperforms related methods in terms of the size of the effective basis as well as individual quantum resources for the involved circuits.
Jakob S. Kottmann, Francesco Scala
2023-02-21T13:23:51Z
http://arxiv.org/abs/2302.10660v2
# Compact Effective Basis Generation: Insights from Interpretable Circuit Design ###### Abstract Efficient ways to prepare fermionic ground states on quantum computers are in high demand and different techniques ranging from variational to divide-and-conquer were developed over the last years. Despite having a vast set of methods it is still not clear which method performs well for which system. In this work, we combine interpretable circuit designs with a divide-and-conquer approach and show how this leads to explainable performance. We demonstrate that the developed methodology outperforms other divide-and-conquer methods in terms of size of the effective basis as well as individual quantum resources for the involved circuits. Over the last few years, multiple ground-state methods for many-body systems were designed as hybrid approaches for quantum and classical computers leveraging effective bases. In such scenarios, a matrix representation of the original Hamiltonian H is computed in a basis of qubit states \(\ket{\phi_{k}}\) generated by the unitaries \(U_{k}\). As this basis is usually not orthogonal, the generalized eigenvalue equation \[\mathbf{Hc}=\lambda\mathbf{Sc},\;\;H_{ij}=\bra{\psi_{i}}H\ket{\psi_{j}},\;\;S_{ ij}=\bra{\phi_{i}}\phi_{j}\,, \tag{1}\] is solved with Hamiltonian and overlap matrix elements measured on the qubit device. This strategy is often introduced as an alternative to variational methods [1; 2], in particular the variational quantum eigensolver [3; 4], currently representing the largest class of algorithmic procedures on hybrid hardware. Techniques (such as in Refs. [5; 6; 7; 8; 9] developed in the context of variational methods can however be employed within the effective diagonalization of Eq. (1) as well. The generation of the effective basis can be broadly divided into two classes of methods, both starting with a _suitable_ initial state \(\ket{\psi_{0}}\) - a state with at least non-vanishing, but ideally high, overlap with the ground state of the Hamiltonian of interest, which is then used to construct the basis over a set of unitary operations \(\ket{\phi_{k}}=U_{k}\ket{\psi_{0}}\). The first class comes in the form of a _non-orthogonal variational quantum eigensolver_ (NOVQE) [10] with pre-trained many-body basis states constructed from variational quantum circuits \[\ket{\psi_{k}}=U_{k}(\boldsymbol{\theta}_{k}^{(*)})\ket{\psi_{0}} \tag{2}\] \[\boldsymbol{\theta}_{k}^{(*)}=\operatorname{argmin}_{\boldsymbol{\theta}_{k}} \bra{H}_{U_{k}(\boldsymbol{\theta}_{k})}. \tag{3}\] The other class [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21], uses non-variational unitaries, usually derived from the Krylov subspace \[\mathcal{K}=\left\{U_{k}\ket{\psi_{0}}\propto H^{k}\ket{\psi_{0}}\right\}_{ k=1}^{N}, \tag{4}\] where the details lie in the methodologies to approximate \(\mathcal{K}\) (see appendix C for more). An integral part of science is the formulation of interpretable concepts capable to capture the essential aspects of complex processes. Recent examples of such endeavors are for example graph based representations of quantum states, either through interpretable quantum circuit design [22] or in the context of quantum optical setups [23; 24; 25; 26], and concepts in quantum machine learning [27; 28; 29; 30]. 
Such techniques are not only useful for more effective computational protocols, but their true strength lies in their interpretability allowing for the extraction of principles and insights from small numerical computations that can be leveraged to tackle larger computational tasks more effectively. In this work, we use the interpretable circuit design of Ref. [22] in order to determine compact effective bases suitable to capture the essential physics of a fermionic ground state. We show how this can be leveraged to gain insight from numerical results leading to explainable concepts of the missing effects for a full description of the ground state of interest. This, for example, gives an intuitive explanation of why energy based pre-optimization in the style of Eq. 3 often fails which is illustrated with a detailed example. We will begin by providing more details on the method in section I followed by an overview over the used circuit construction methods II. In section III we illustrate the performance of the developed techniques on explicit use cases that were used in previous works. Here we provide a detailed analysis using a prominent benchmark system and illustrate with extended numerical simulations that our method results in concrete bases with respect to basis size and cost of the individual circuits. ## I Method We start by selecting a suitable many-body basis in the form of parametrized quantum circuits \[\ket{\psi\left(\mathbf{\theta}_{k}\right)}=U_{k}\left(\mathbf{\theta}_{k}\right)\ket{0}, \tag{5}\] in order to represent the total wavefunction \[\ket{\Psi\left(\mathbf{c},\mathbf{\theta}\right)}=\sum_{k=1}^{N}c_{k}U_{k}\left( \mathbf{\theta}_{k}\right)\ket{0}. \tag{6}\] Depending on the ground state problem of interest, the choice of the circuits \(U_{k}\) will have practical implications on the runtime of the involved optimizations and the quality of the final wavefunction. In Sec. II we will discuss a specific choice of basis suitable for electronic structure, which we apply in this work. Once the basis elements are chosen, a concerted optimization of all parameters in the total wavefunction (6) is performed \[\left\{\mathbf{c}^{(*)},\mathbf{\theta}^{(*)}\right\}=\min_{\mathbf{c},\mathbf{\theta}^{(M )}}\frac{\bra{\Psi\left(\mathbf{c},\mathbf{\theta}\right)}{H}\Psi\left(\mathbf{c},\mathbf{\theta}\right)}{\bra{\Psi\left(\mathbf{c},\mathbf{\theta}\right)}{\Psi\left( \mathbf{c},\mathbf{\theta}\right)}}, \tag{7}\] with \(\mathbf{\theta}^{(M)}=\bigcup_{k=1}^{M}\{\mathbf{\theta}_{k}\}\) denoting the set of parameters subjected to the optimization procedure. Depending on the wavefunction and parameters in the optimization we use the notation \(G(N,M)\) with \(N\) denoting the number of circuits included in Eq. (6) and \(M\leq N\) the number of fully optimized parameters \(\mathbf{\theta}^{(M)}\). After the concerted optimization the generalized eigenvalue equation Eq. (1) is invoked as a convergence test. If the so-determined coefficients \(\mathbf{c}\) differ from the optimized coefficients \(\mathbf{c}^{(*)}\), the concerted optimization is restarted with the coefficients determined through (1) as starting values. This scheme proved to be useful in similar optimization frameworks in electronic structure. [33] Prior to the optimization in Eq. (7), the circuit parameters can be initialized in the spirit of NOVQE [10] through individual energy optimization as in Eq. (3). At this point, it is crucial to avoid linear dependencies (i.e. 
restricting the overlaps \(S_{ij}\) from becoming too close to one). [34] This can either be done by including penalty terms into the optimization or through the design of the individual circuits - in this work we resort to the latter (see Sec. II) and provide an argument (Sec. III) why this can be advantageous. The pre-optimized circuits are then subjected to the generalized eigenvalue equation (1) resulting in initial values for the coefficients \(\mathbf{c}^{(0)}\) and initial energies which we will denote as \(G(N,0)\). ## II Circuits In this work we will employ the circuit design principles of Ref. [22] in the context of fermionic Hamiltonians (encoded via Jordan-Wigner) in the usual form \[H_{\mathrm{f}}=\sum_{kl}h_{k}^{\dagger}a_{k}^{\dagger}a_{l}+\sum_{klmn}g_{kl}^ {mn}a_{k}^{\dagger}a_{l}^{\dagger}a_{m}a_{n}, \tag{8}\] Figure 1: Basis construction: Illustration for the construction of many-body basis states through graph-based heuristics following [22]. On the left: the graphs with one of the corresponding SPA [22; 31] circuits of the quadratic H\({}_{4}\)/STO-6G(4,8) constructed via Heuristic 1 in [22]. On the lower center: Energetic errors with different levels of optimization. \(G(N,M\leq N)\) denotes the optimization (7) of parameters from \(M\) circuits in a total wavefunction assembled from \(N\) circuits as in Eq. (6). Note that the \(G(3,3)\) error is essentially zero [32] and therefore not visible. On the right: illustration of the \(G(N,M)\) wavefunctions obtained through concerted optimizations of the coefficients and angles from \(M\) circuits with the corresponding graphs highlighted in green. From top to bottom the illustrations correspond to \(G(3,0),G(3,1),G(3,2),G(3,3)\). with fermionic annihilation (\(a_{k}\)) and creation operators (\(a_{k}^{\dagger}\)) for electrons in the fermionic basis state (also called spin-orbital) \(\chi_{k}(\mathbf{x})=\phi(\mathbf{x})\otimes\left|\sigma_{k}\right\rangle\) describing an electron with spin \(\sigma_{k}\in\{\uparrow,\downarrow\}\) in the spatial orbital \(\phi\). In the molecular case, the tensors \(h\) and \(g\) are computed as integrals over the states \(\chi_{k}\) (for details see Ref. [8] or the appendix A of Ref. [22]). In this section we will resort to a short illustrative summary of the applied circuit designs. Assume that we have a fermionic Hamiltonian as in Eq. (8) with \(s\) spatial orbitals, and a collection of graphs \(G(V,E)\), with vertices \(V\) corresponding to sets of uniquely assigned orbitals and a set of non-overlapping edges. We can then construct a quantum circuit from that graph as \[U_{\mathrm{G}}=\bigotimes_{e\in E}U_{e}(\theta_{e}) \tag{9}\] where the individual circuits \(U_{e}\) prepare a two-electron wavefunction on the qubits corresponding to the edge. The total wavefunction is then a \(2|E|\)-electron wavefunction and corresponds to a separable pair approximation [31] (see Heuristic 1 in [22] for more details). The circuits \(U_{e}\) for the simplest non-trivial case: one orbital assigned to each vertex (corresponding to two spin-orbitals or qubits) are illustrated in Fig. 1 and consist of two parts \[U_{e}=U\left(\theta\right)U_{\mathrm{R}}\left(\varphi\right). 
\tag{10}\] The first term \(U(\theta)\) is built from a parametrized \(R_{y}\) rotation and three controlled-NOT operations preparing the 4-qubit wavefunction \[U\left(\theta\right)\left|0\right\rangle= \cos\left(\frac{\theta}{2}\right)\left|1100\right\rangle+\sin \left(\frac{\theta}{2}\right)\left|0011\right\rangle\] \[\equiv \cos\left(\frac{\theta}{2}\right)\left|\vbox{\hbox{\includegraphics[ ]{fig/fop The wavefunction prepared by the circuit derived from the third graph state optimized on G(3,3) level - _i.e._ in a concerted optimization including all three graphs - takes the form (13) Here the up (and down) arrows represent a spin-up (spin-down) electron occupying one of the four spatial orbitals located on the hydrogen atoms. The state in Eq.(13) is assembled from four configurations that cluster the four electrons as close as possible. The wavefunction clearly is energetically not favorable explaining the failure of energy based pre-optimization in the \(G(3,M<3)\) wavefunctions. The reason why the state in Eq.(13) needs to take this specific form becomes clear when we take a look at the \(G(2,2)\) wavefunction \[\ket{G(2,2)}= \bar{c}_{1}\ket{\vbox{\hbox{\includegraphics[scale=0.4]{fig-1.eps}}}}+ \bar{c}_{2}\ket{\vbox{\hbox{\includegraphics[scale=0.4]{fig-1.eps}}}}\] \[= a\left(\ket{\vbox{\hbox{\includegraphics[scale=0.4]{fig-1.eps}}}} +\ket{\vbox{\hbox{\includegraphics[scale=0.4]{fig-1.eps}}}}-\ket{\vbox{ \hbox{\includegraphics[scale=0.4]{fig-1.eps}}}}-\ket{\vbox{\hbox{\includegraphics [scale=0.4]{fig-1.eps}}}}-\ket{\vbox{\hbox{\includegraphics[scale=0.4]{fig-1.eps}}}} \right)\] \[+b\ket{\Psi_{\rm R}}\] \[= 2a\ket{\vbox{\hbox{\includegraphics[scale=0.4]{fig-1.eps}}}}+b \ket{\Psi_{\rm R}} \tag{14}\] with the amplitudes \(\bar{c}_{2}=-\bar{c}_{1}\) and \(a\approx 0.07<b\) and \(\Psi_{\rm R}\) contains all other electronic configurations. We see from Eq. (14) that the (optimized) wavefunction of the third graph (13) is already included. Adding the third graph to the total wavefunction in \(G(3,3)\) does therefore not introduce new configurations into the total wavefunction but it allows a relative reduction of the amplitude \(a\) while preserving the internal structure of the residual (and energetically more important) wavefunction \(\ket{\Psi_{\rm R}}\) - the structure of the \(G(2,2)\) wavefunction alone would not allow this. On the other hand, energy based pre-optimization of the third graph does not result in the energetically unfavorable form of Eq. (13) leading to \(G(3,2)\) having no visible improvement over \(G(2,2)\) as witnessed in Fig. 1. The analysis of the wavefunctions in Eq. (13) and (14) also shows why orthogonality constraints between the individual graphs in the optimization can become problematic. When compared to two isolated H\({}_{2}\) molecules the optimal wavefunction of the third graph would correspond to a product of two ionic non-bonding states, as it can be written as (15) while the corresponding wavefunctions of the other two graphs have more similarity with a product of bonding H\({}_{2}\) wavefunctions. The intuitive picture of the \(G(2,2)\) wavefunction in Eq. (14) that represents the H\({}_{4}\) wavefunction as a superposition of both degenerate realizations of two individual H\({}_{2}\) molecules is therefore still a reasonable model for the true ground state of the system. 
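To make the two-electron primitive \(U(\theta)\) described above concrete, the following plain-NumPy statevector sketch prepares \(\cos(\theta/2)|1100\rangle+\sin(\theta/2)|0011\rangle\). This is an illustration added here, not the authors' code; since the exact gate ordering of Fig. 1 is not spelled out in the text, the sequence below (one \(R_{y}\), three CNOTs, plus two X gates fixing the reference configuration) is just one possible realization.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n=4):
    """Apply a one-qubit gate on qubit q (qubit 0 = leftmost bit of the basis label)."""
    full = np.array([[1.0]])
    for k in range(n):
        full = np.kron(full, gate if k == q else I2)
    return full @ state

def apply_cnot(state, control, target, n=4):
    """Permute basis states: flip the target bit wherever the control bit is 1."""
    new = state.copy()
    for idx in range(2 ** n):
        if (idx >> (n - 1 - control)) & 1:
            new[idx ^ (1 << (n - 1 - target))] = state[idx]
    return new

theta = 0.73                                   # arbitrary test angle
psi = np.zeros(16); psi[0] = 1.0               # |0000>
psi = apply_1q(psi, ry(theta), q=0)            # cos|0000> + sin|1000>
psi = apply_cnot(psi, control=0, target=1)     # cos|0000> + sin|1100>
psi = apply_cnot(psi, control=0, target=2)     # cos|0000> + sin|1110>
psi = apply_cnot(psi, control=0, target=3)     # cos|0000> + sin|1111>
psi = apply_1q(psi, X, q=0)                    # flip qubits 0 and 1 so that the
psi = apply_1q(psi, X, q=1)                    # reference |1100> carries cos(theta/2)
assert np.isclose(psi[0b1100], np.cos(theta / 2))
assert np.isclose(psi[0b0011], np.sin(theta / 2))
print(np.round(psi, 3))
```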
A suitable interpretation for the third graph is the addition of weak correlation between the two isolated H\({}_{2}\) molecules achieved by destructive interference of energetically unfavorable configurations. This allows an interesting connection to Ref. [23] (in particular Fig. 4) where similar effects were identified in the context of quantum optical setups and similar arguments as in this case will hold for potential future approaches based on individual optimization. A further illustration of the weak type of correlation contributed by the third graph, is to take a wavefunction generated by the first two graphs, but with more flexibility in the individual circuits. In this case we added more orbital rotations to the circuit (denoted by \(U_{\rm R}\) in the corresponding methods), so that all non-connected vertices of the graphs were connected through an orbital rotation. With this, the \(G(2,2)+U_{\rm R}\) wavefunction is sufficient to represent the true ground state with arbitrary precision [32]. ### Linear H\({}_{4}\) and H\({}_{6}\): comparison to quantum Krylov In the previous Section, we have seen how the individual parts of the wavefunction can be interpreted. We have identified weak correlations (as in the third graph of the rectangular H\({}_{4}\)) that can not be generated through energy based pre-optimization as the energetic effect is due to destructive interference and only present in the total wavefunction. In the case of the rectangular H\({}_{4}\) those weak correlations could \begin{table} \begin{tabular}{l c c} \hline \hline Method & H\({}_{4}\) & H\({}_{6}\) \\ \hline MRSQK (m=1) & 2656 & 19944 \\ MRSQK (m=8) & 21248 & 159552 \\ \hline UpCCGSD & 188 & 687 \\ 2-UpCCGSD & 432 & 1387 \\ \hline \(G(N,M)\) & 70 & 150 \\ \(G(N,M)+U_{\rm R}\) & 150 & 425 \\ \hline \hline \end{tabular} \end{table} Table 1: CNOT counts of the deepest circuit in MRSQK [45], NO-VQE with \(k\)-UpCCGSD (compiled with optimizations introduced in [31]) and the \(G(M,N)\) developed in this work. Note further reduction could be achieved through more efficient compiling of the \(U_{\rm R}\) rotations [46; 47] be compensated by equipping the circuits representing the individual graphs with more freedom in the form of orbital rotations. We expect a similar behavior for other systems and tested it on the linear H\({}_{4}\) and H\({}_{6}\) models where the results displayed in Fig. 2 show the same trends as observed before. While the non augmented wavefunctions show good convergence within the first 3, respectively 6, graphs, the overall error is still around 5, respectively 10, millihartree for the fully optimized wavefunctions. On the other hand, the augmented \(G(N,M=N)+U_{\mathrm{R}}\) wavefunction already achieves chemical accuracy at a smaller size \(N\) of the effective many-body basis. The linear H\({}_{4}\) and H\({}_{6}\) hydrogen chains are prominent benchmark systems that have been applied in the context of Refs. [22] as well as in Ref. [11] that introduced the Multi-Reference Selected Quantum Krylov (MRSQK) method - a real-time evolution approach towards approximating the Krylov subspace in Eq. (4). In Fig. 3 we compare the \(G(N,M)+U_{\mathrm{R}}\) energies with MRSQK with respect to the size of the many-body basis \(N\) that is for our method the number of graph-based circuits and for MRSQK the number of multi-reference starting points of the real-time evolution. 
We see that \(G(N,N)\) always outperforms MRSQK in all Figure 3: Comparison of the present approach with the multireference selected quantum krylov approach (MRSQK) [11] with respect to the number \(N\) of used effective many-body basis states. Energy errors w.r.t. the exact ground state are given for (a) H\({}_{4}\)/STO-6G(4,8) and (b) H\({}_{6}\)/STO-6G(6,12). MRSQK\((N,m)\) and NT-SRQK\((N)\) results are with, and without, Trotter approximation using \(m\) Trotter steps for the real-time evolution. Graph based construction is done according to Fig. 1 with static energies denoting the ground state of the effective Hamiltonian in the pre-optimized basis. \(G(N,M)\) denotes energies of wavefunctions according to Eq. (7) using \(N\) graphs to construct \(N\) circuits \(U_{k}\) with \(M\) of them being fully optimized in a concerted optimization. Note that in the H\({}_{4}\) case using 1 or 8 Trotter steps leads to the same error. Figure 2: Effects of optimization level and circuit expressivity on linear hydrogen chains (equidistant bond lengths of 1.5Å). Basic circuits are constructed analogously to Fig. 1. For \(G(N,)+UR\) datapoints the circuits were augmented with additional orbital rotations. (a): H\({}_{4}\)/STO-6G(4,8) (b): H\({}_{6}\)/STO-6G(6,12), (c): graphs used for H\({}_{6}\)/STO-6G(6,12). flavors while \(G(N>2,2)\) energies can not improve upon the non-Trotterized quantum Krylov variant and \(G(N>4,0)\) can not improve upon the Trotterized variant with 8 Trotter steps. Based on the observations on the rectangular H\({}_{4}\) example, this is not further surprising as we would expect the higher order graphs only to bring significant improvements when they are included into the concerted optimization. Note that apart from the basis size \(N\) the \(G(N,M)\) method requires significantly shallower circuits compared to the Trotterized real-time evolutions necessary to generate the MRSQK basis (see Tab. 1). In comparison to NO-VQE [10] the circuit sizes are still significantly reduced and the method in this work is not relying on repeated randomized initialization. The total number of BFGS iterations is moderate (varying between 15 and 30 iterations) and we expect further reduction through improved implementations. ### Linear BeH\({}_{2}\): transferring concepts from H\({}_{4}\) to a more complex system In Ref. [22] the graph based construction was applied for a single circuit to prepare the wavefunction directly, while in this work we resorted to a divided approach where each graph corresponds to an individual circuit. In the previous section we have seen, that the divided approach here can achieve comparable accuracy in the wavefunctions. So far, we resorted to simplified hydrogenic systems with a single spherical s-type orbital on each atom. With BeH\({}_{2}\)/STO-6G(4,8) we add a model system with more complicated orbital structure (having s- and p-type orbitals on the central Be atom). This model system is the same as in [22] and has the same dimensions as H\({}_{4}\)/STO-6G(4,8). Through the graph description we can treat the BeH\({}_{2}\) now in the same way as the linear H\({}_{4}\). The two main graphs are illustrated in Fig. 4, where one of them is interpreted as molecular and the other as atomic (see also Eq. (25) in [22]). In Fig. 
4 we clearly see how the potential energy surface is divided into three domains: the first being the bonded domain (around bond distance \(R=1.5\)A), the second the dissociated domain (\(R>3.0\)A) both dominated by a single graph, while the third domain (around \(R=2.6\)A) requires both graphs for an accurate description. ## IV Conclusion & Outlook Since the advent of variational quantum eigensolver [3; 4] efficient ways to prepare fermionic ground states were investigated through different routes [5; 6]. Despite having a vast set of methods it is still not clear which method performs well for which systems. In this work we provided a first step into that direction by combining interpretable circuit designs with a divide-and-conquer approach where the wavefunction is assembled as a linear combination of smaller parts. We could show, how this method outperforms other divide-and-conquer methods in terms of size of the effective basis as well as individual quantum resources for the involved circuits. Most importantly the developed method allows us to interpret the results and to learn from the discovered effects. At this point, the computational bottleneck comes with the concerted optimization necessary for the determination of the effective basis. As a quantum algorithm, this procedure requires many evaluation of primitive expectation values (in the sense of [38]). We see however promising ways forward in that respect, as illustrated in the following. Through the combination with the circuit designs of [22] the effective basis is described by individual circuits that are equivalent to separable pair approximations [31]. As a consequence, the individual wavefunctions are classically simulable, so that energy based pre-optimization can be performed purely classical with linear memory requirement. The \(G(N,0)\) method therefore defines a de-quantized and de-randomized flavor of a NO-VQE. Based on the reported numerical evidence, we expect this to work well for qualitative descriptions of the wavefunctions. For a quantitative treatment, energy based pre-optimization is however not expected to be practicable. Other types of pre-optimizations, as in Ref. [48], could however be imagined for the future and might lead to more powerful purely classical methods capable of generating compact quantum circuits for accurate state preparation. The interpretable circuit design offers here a chance to effectively predict optimal circuit parameters based on detailed analysis of model systems. ## V Acknowledgment This project was initiated through the Mentorship Program of the Quantum Open Source Foundation (QOSF) Cohort 5. [49] We are grateful to the QOSF for providing this platform. We thank Philipp Schleich for giving valuable feedback to the initial manuscript. Furthermore we thank Nick Stair and Francesco Evangelista for providing user-friendly public access to the MRSQK [11] method through qforte [44].
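As a closing illustration of the classical post-processing step in Eq. (1) (a generic sketch added here, not the authors' implementation), the ground-state estimate in a non-orthogonal effective basis is obtained from the generalized eigenvalue problem \(\mathbf{Hc}=\lambda\mathbf{Sc}\), which standard libraries solve directly. The matrices below are placeholder values standing in for measured expectation values and overlaps.

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder 3x3 matrices standing in for measured H_ij = <phi_i|H|phi_j>
# and overlaps S_ij = <phi_i|phi_j> of three non-orthogonal basis states.
H = np.array([[-1.10, -0.45, -0.20],
              [-0.45, -0.95, -0.15],
              [-0.20, -0.15, -0.30]])
S = np.array([[1.00, 0.30, 0.10],
              [0.30, 1.00, 0.05],
              [0.10, 0.05, 1.00]])

# scipy.linalg.eigh solves the symmetric-definite problem H c = lambda S c.
energies, coeffs = eigh(H, S)
print("lowest eigenvalue (ground-state estimate):", energies[0])
print("corresponding coefficients c:", coeffs[:, 0])
```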
2304.14788
Pseudocommutativity and Lax Idempotency for Relative Pseudomonads
We extend the classical work of Kock on strong and commutative monads, as well as the work of Hyland and Power for 2-monads, in order to define strong and pseudocommutative relative pseudomonads. In order to achieve this, we work in the more general setting of 2-multicategories rather than monoidal 2-categories. We prove analogous implications to the classical work: that a strong relative pseudomonad is a pseudo-multifunctor, and that a pseudocommutative relative pseudomonad is a multicategorical pseudomonad. Furthermore, we extend the work of L\'opez Franco with a proof that a lax-idempotent strong relative pseudomonad is pseudocommutative. We apply the results of this paper to the example of the presheaf relative pseudomonad.
Andrew Slattery
2023-04-28T11:55:37Z
http://arxiv.org/abs/2304.14788v4
# Pseudocommutativity and Lax idempotency ###### Abstract. We extend the classical work of Kock [16] on strong and commutative monads, as well as the work of Hyland and Power [1] for 2-monads, in order to define strong and pseudocommutative relative pseudomands. In order to achieve this, we work in the more general setting of 2-multicategories rather than monoidal 2-categories. We prove analogous implications to the classical work: that a strong relative pseudomonad is a pseudo-multifunctor, and that a pseudocommutative relative pseudomonad is a multicategorical pseudomonad. Furthermore, we extend the work of Lopez Franco [1] with a proof that a lax-idempotent strong relative pseudomonad is pseudocommutative. We apply the results of this paper to the example of the presheaf relative pseudomonad. 2020 Mathematics Subject Classification: Primary 18N15; Secondary 18D65, 18A05, 18M65 ## 1. Introduction **Context and motivation.** The classical theory of monads provides a framework with which to study algebraic structures on objects of a category. A landmark in this field is Kock's theory of commutative monads [16], developed in the setting of symmetric monoidal categories. The basic notion in this theory is that of a _strong monad_, which comprises a monad on a symmetric monoidal category equipped with a natural transformation with components \[t_{X,Y}:X\otimes TY\to T(X\otimes Y),\] called the _strength_. The underlying endofunctor of a strong monad is a lax monoidal functor, and the monad unit is a monoidal natural transformation. Furthermore, Kock showed that the monad is _commutative_ (a property of a given strength) if and only if the monad is a monoidal monad, which is to say that the monad multiplication is monoidal. Some nice properties follow when this happens. For example, if a symmetric monoidal category \(\mathbb{C}\) has a closed structure and \(T\) is a commutative monad on \(\mathbb{C}\), then the closed structure gives rise to one on Eilenberg-Moore category of \(T\)-algebras. Two-dimensional monad theory [1] has traditionally studied the strict notion of a 2-monad, along with their algebras and lax, pseudo-, and strict algebra morphisms. Kelly [15] and Hyland & Power [1] extended Kock's theory to 2-monads, defining _pseudocommutative 2-monads_. Some aspects of the theory become more subtle; for example, one must distinguish between braiding and symmetry, and between closed structures and pseudo-closed structures. In this setting, an important result is Lopez Franco's theorem [11] that a lax-idempotent pseudocompnad is pseudocommutative (extending work of Power, Cattani and Winskel in [12]). For some applications, it is useful to consider the notions of a pseudomonad [1, 2, 13], in which the axioms for a 2-monad hold only up to coherent isomorphisms, and of a relative pseudomonad [15], in which one further abandons the requirement of having an underlying endofunctor. The latter can be seen as a 2-categorical counterpart of the notion of a relative monad [1, 2]. The aim of this paper is to provide an analogue of the theory of Hyland & Power and of Lopez Franco for relative pseudomonads. We are motivated to do so by the presheaf construction; here, pseudocommutativity and lax idempotency are particularly intuitive and correspond to important properties of the presheaf construction. As a byproduct of the work in this paper, we obtain a theory of commutativity for relative monads. 
We also expect a close relationship between the work in this paper and that on strength for pseudomonads in [16], which have been developed independently. **Main contributions.** We are naturally led to work in a multicategorical setting, as was already partially done by Hyland & Power in [10]. This step is unavoidable if we wish to avoid dealing with associator and unitor coherences while still having our work apply directly to the 2-categories Cat and CAT of small and locally-small categories. Multicategories subsume monoidal categories, with monoidal categories corresponding to the subclass of'representable multicategories', as laid out by Hermida in [1]. Thus we work in general with \(n\)-ary maps \(f:X_{1},...,X_{n}\to Y\), and our definitions reflect this. We define the notion of strong relative pseudomonad (Definition 3.3), and prove that for a strong relative pseudomonad, the underlying pseudofunctor becomes a multi-pseudofunctor (Proposition 3.9) and the unit becomes multicategorical (part of Theorem 4.7). We also define the notion of pseudocommutative relative pseudomonad, which in our setting is particularly appealing; it amounts to asking for an isomorphism \[(f^{t})^{s}\cong(f^{s})^{t}.\] We then prove that every pseudocommutative relative pseudomonad is a multicategorical relative pseudomonad (Theorem 4.7). We define the notion of lax idempotency for strong relative pseudomonads, extending the earlier definition in [15], and prove that every lax-idempotent strong relative pseudomonad is pseudocommutative (Theorem 5.4). We apply these definitions and results to the example of presheaves (Theorem 6.2). **Roadblocks and technical challenges.** As with any venture at generalisation, we lose some implications and equivalences. For example, while Kock [14] proves an equivalence between strong monads and monads which are lax monoidal as functors, as well as one between commutative and monoidal monads, in our setting we have only been able to prove implications in the forward direction (and suspect there is a genuine obstacle to the reverse). Another assumption we must drop if we are to apply our results to the presheaf construction is that of closure; while Cat is closed, CAT is not (again due to size issues--the functor category \([X,Y]\) need not be locally small even when both \(X\) and \(Y\) are). This means in particular that the proof that lax-idempotent pseudomonads are pseudocommutative given by Lopez Franco in [11] cannot be readily transported to our setting, as it makes heavy use of closure. Other trade-offs come from working in the setting of multicategories. The classical strength employs a binary map \(X\otimes TY\to T(X\otimes Y)\); we will need an \(n\)-ary formulation in order to extend to the notion of strength to the multicategorical setting. In general, we are able to obviate associativity and unitor coherences, at the expense of having to work in an unbiased way on general n-ary morphisms, instead of being able to consider only binary and nullary morphisms. **Organisation of the paper.** Section 2 reviews the definition of a relative pseudomonad and some immediate results, and introduces the example of the presheaf relative pseudomonad. New work begins in section 3, in which we introduce the setting of 2-multicategories, define a notion of relative pseudomonad suitable for this setting (strong relative pseudomonad) and prove that every strong relative pseudomonad is a pseudo-multifunctor. 
In section 4 we focus on the class of strong relative pseudomonads which are pseudocommutative, and prove that every pseudocommutative relative pseudomonad is a multicategorical relative pseudomonad. Section 5 discusses a particularly nice class of strong relative pseudomonads, the lax-idempotent strong relative pseudomonads, and proves that every lax-idempotent strong relative pseudomonad is pseudocommutative. We close in Section 6 by applying our results to the case of Psh, the presheaf relative pseudomonad. ## 2. Background We recall the definition of a relative pseudomonad from [10]; for our purposes it will suffice to consider relative pseudomonads along a fixed 2-functor \(J:\mathbb{D}\to\mathbb{C}\) between 2-categories \(\mathbb{C}\) and \(\mathbb{D}\) (as opposed to a pseudofunctor between bicategories). **Definition 2.1**.: (Relative pseudomonad) Let \(\mathbb{C},\mathbb{D}\) be 2-categories and let \(J:\mathbb{D}\to\mathbb{C}\) be a 2-functor. A _relative pseudomonad_\((T,i^{*};\eta,\mu,\theta)\) along \(J\) comprises * for \(X\in\operatorname{ob}\mathbb{D}\) an object \(TX\in\operatorname{ob}\mathbb{C}\) and map \(i_{X}:JX\to TX\) (called a _unit map_), and * for \(X,Y\in\operatorname{ob}\mathbb{D}\) a functor \[\mathbb{C}(JX,TY)\xrightarrow{(-)^{*}}\mathbb{C}(TX,TY)\] (called an _extension functor_). The units and extensions furthermore come equipped with three families of 2-cells * \(\eta_{f}:f\to f^{*}i_{X}\) for \(f:JX\to TY\), * \(\mu_{f,g}:(f^{*}g)^{*}\to f^{*}g^{*}\) for \(g:JX\to TY\), \(f:JY\to TZ\), and * \(\theta_{X}:(i_{X})^{*}\to 1_{TX}\) for \(X\in\operatorname{ob}\mathbb{D}\), satisfying the following two coherence conditions: 1. for every \(f:JX\to TY\), \(g:JW\to TX\) and \(h:JV\to TW\) the diagram \[\begin{CD}((f^{*}g)^{*}h)^{*}@>{\mu_{f^{*}g,h}}>{}>(f^{*}g)^{*}h^{*}\\ @V{(\mu_{f,g}h)^{*}}V{}V@V{}V{\mu_{f,g}h^{*}}V\\ (f^{*}g^{*}h)^{*}@>{}>{\mu_{f,g^{*}h}}>f^{*}(g^{*}h)^{*}@>{}>{f^{*}\mu_{g,h}}>f^{*}g^{*}h^{*} \end{CD}\] commutes (the _associativity axiom_), and 2. for every \(f:JX\to TY\) the diagram commutes (the _unit axiom_). We usually omit subscripts from the unit maps \(i:JX\to TX\); we will also refer to a given relative pseudomonad \((T,i,{}^{*};\eta,\mu,\theta)\) simply as \((T,i,{}^{*})\) or \(T\), with the rest of the structure inferred. Given a relative pseudomonad \(T\) along \(J:\mathbb{D}\to\mathbb{C}\), the function \(\operatorname{ob}\mathbb{D}\to\operatorname{ob}C:X\mapsto TX\) can be given the structure of a pseudofunctor, with functors between hom-categories given by \[\mathbb{D}(X,Y)\to\mathbb{C}(TX,TY):f\mapsto(i_{Y}\circ Jf)^{*}.\] **Remark 2.2**.: A relative pseudomonad along the identity \(1:\mathbb{C}\to\mathbb{C}\) induces and is induced by an ordinary pseudomonad with the same action on objects (see [11] Remark 4.5). We can infer more equalities between a relative pseudomonad's structural 2-cells. The following lemma is from [11]; the proof is analogous to the proof that three of the original five axioms for a monoidal category are redundant [12], which also has a version for (ordinary) pseudomonads [13]. **Lemma 2.3**.: _Let \(T\) be a relative pseudomonad along \(J:\mathbb{D}\to\mathbb{C}\). Then in addition to the two equalities of 2-cells given by definition, the following three diagrams also commute:_ 1. _for every_ \(f:JX\to TY\) _and_ \(g:JW\to TX\)_, the diagram commutes._ 2. _for every_ \(f:JX\to TY\)_, the diagram commutes, and_ 3. 
_for every object_ \(X\in\operatorname{ob}\mathbb{D}\)_, the diagram commutes._

**Example 2.4**.: The example of a relative pseudomonad which will be the focus of this paper is that of the presheaf construction \[X\mapsto\operatorname{Psh}X:=[X^{op},\operatorname{Set}].\] Write \(\operatorname{Cat}\) for the \(2\)-category of small categories, functors and natural transformations, and write \(\operatorname{CAT}\) for the \(2\)-category of locally-small categories. Since the category of presheaves on a small category is in general only locally small, it is natural to ask whether \(\operatorname{Psh}\) can be given the structure of a relative pseudomonad along the inclusion \(2\)-functor \(J:\operatorname{Cat}\to\operatorname{CAT}\). This is shown in [11] via the construction of a relative pseudoadjunction; the structure of a relative pseudomonad is given to \(\operatorname{Psh}\) as follows:

* for an object \(X\in\operatorname{Cat}\) we have \(\operatorname{Psh}X\in\operatorname{CAT}\) and unit map \(y_{X}:X\to\operatorname{Psh}X\) given by the Yoneda embedding,
* for \(X,Y\in\operatorname{Cat}\) and a functor \(f:X\to\operatorname{Psh}Y\), the extension \(f^{*}:\operatorname{Psh}X\to\operatorname{Psh}Y\) is given by the left Kan extension of \(f\) along the Yoneda embedding \(y_{X}:X\to\operatorname{Psh}X\), which also defines the \(2\)-cells \(\eta_{f}:f\to f^{*}y_{X}\) (note that since the Yoneda embedding is fully faithful the maps \(\eta_{f}\) are invertible, as required),
* for \(f:JX\to TY\) and \(g:JW\to TX\), the \(2\)-cell \(\mu_{f,g}:(f^{*}g)^{*}\to f^{*}g^{*}\) is uniquely determined by the universal property of the left Kan extension.

## 3. Strong relative pseudomonads

We seek to consider the notion of a relative pseudomonad along \(J:\mathbb{D}\to\mathbb{C}\) when \(\mathbb{C}\) and
\(\mathbb{D}\) are 2-multicategories. We will define a'strong relative pseudomonad' from scratch to take this role, and note that a every strong relative pseudomonad induces a canonical relative pseudomonad structure. In order to do this, let us recall the definition of a 2-multicategory [1] (taking \(V=\operatorname{Cat}\) to specialise the \(V\)-enriched theory). **Definition 3.1**.: (2-multicategory) A 2-multicategory \(\mathbb{C}\) is a multicategory enriched in \(\operatorname{Cat}\). Unwrapping this statement a little, a 2-multicategory \(\mathbb{C}\) is given by 1. a collection of objects \(X\in\operatorname{ob}\mathbb{C}\), together with 2. a category of multimorphisms \(\mathbb{C}(X_{1},...,X_{n};Y)\) for all \(n\geq 0\) and objects \(X_{1},...,X_{n},Y\) which we call a _hom-category_; an object of the hom-category \(\mathbb{C}(X_{1},...,X_{n};Y)\) is denoted by \(f:X_{1},...,X_{n}\to Y\), 3. an identity multimorphism functor \(\mathbf{1}_{X}:\mathbb{1}\to\mathbb{C}(X;X):\ast\mapsto 1_{X}\) for all \(X\in\operatorname{ob}\mathbb{C}\), and 4. composition functors \[\mathbb{C}(X_{1},...,X_{n};Y)\times\mathbb{C}(W_{1,1},...,W_{1,m_ {1}};X_{1})\times...\times\mathbb{C}(W_{n,1},...,W_{n,m_{n}};X_{n})\\ \to\mathbb{C}(W_{1,1},...,W_{n,m_{n}};Y)\\ (f,g_{1},...,g_{n})\mapsto f\circ(g_{1},...,g_{n})\] for all arities \(n,m_{1},...,m_{n}\) and objects \(Y,X_{1},...,X_{n},W_{1,1},...,W_{n,m_{n}}\) in \(\mathbb{C}\). where the identity and composition functors satisfy the usual associativity and identity axioms for an enrichment. As a point of notation, given \(f:X_{1},...,X_{n}\to Y\) and \(g:W_{1},...,W_{m}\to X_{j}\) we will abbreviate composites of the form \(f\circ(1,...,1,g,1,...,1)\) to \(f\circ_{j}g\). **Remark 3.2**.: We can relate 2-multicategories to more familiar structures. * Every 2-multicategory \(\mathbb{C}\) restricts to a 2-category by considering only the unary hom-categories \(\mathbb{C}(X;Y)\). * Monoidal 2-categories (defined in for example [1]) have underlying 2-multicategories, where hom-categories \(\mathbb{C}(X_{1},...,X_{n};Y)\) are given by \(\mathbb{C}(X_{1}\otimes...\otimes X_{n},Y)\) (choosing the leftmost bracketing of the tensor product); this is shown in [1] Proposition 7.1 (2). For example, both \(\operatorname{Cat}\) and \(\operatorname{CAT}\) can be given 2-multicategorical structures. We seek to generalise Kock's notion of a strong monad [10] (and Uustalu's definition of a strong relative monad [10]) on a monoidal category. A strong monad structure on a monoidal category is given by a map \[t_{X,Y}:X\otimes TY\to T(X\otimes Y)\] satisfying some axioms [10]. To define a suitable notion of strong relative pseudomonad in the 2-multicategorical setting, we extend a relative pseudomonad's unary functors \(\mathbb{C}(JX,TY)\xrightarrow{(-)^{*}}\mathbb{C}(TX,TY)\) to general \(n\)-ary hom-categories \[\mathbb{C}(B_{1},...,JX,...,B_{n};TY)\xrightarrow{(-)^{t_{i}}}\mathbb{C}(B_{1 },...,TX,...,B_{n};TY),\] which we call _strengthenings_. To use this to construct the map \(t\) in the one-dimensional monoidal \(J=1\) case, we begin with the unit \[i:X\otimes Y\to T(X\otimes Y).\] Passing to the underlying multicategory, this corresponds to a map \[i:X,Y\to T(X\otimes Y).\] We can strengthen this map in the second argument to obtain \[i^{t}:X,TY\to T(X\otimes Y).\] Now passing back to the original monoidal category we have found a strength map \(X\otimes TY\to T(X\otimes Y)\), and one can check that this satisfies the strength axioms. 
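The derivation above has a familiar elementary shadow for ordinary (1-categorical) monads presented as Kleisli triples. Purely as an added illustration (plain Python with the list monad on sets, with function names of our own choosing, not the 2-categorical structure studied in this paper), strengthening the unit \(i:X,Y\to T(X\times Y)\) in its second argument recovers the usual strength \(X\times TY\to T(X\times Y)\):

```python
# A minimal 1-categorical sketch (sets and functions) of the derivation above,
# using the list monad T X = List(X) presented as a Kleisli triple:
#   unit : X -> T X        and        ext(f) : T X -> T Y   for   f : X -> T Y.

def unit(x):
    return [x]

def ext(f):
    """Kleisli extension: apply f elementwise and flatten."""
    return lambda txs: [y for x in txs for y in f(x)]

# The binary unit  i : X, Y -> T(X x Y)  is  (x, y) |-> unit((x, y)).
# Strengthening it in its second argument means extending  y |-> unit((x, y))
# along T, which yields the usual strength  t : X, T Y -> T(X x Y).
def strength(x, tys):
    return ext(lambda y: unit((x, y)))(tys)

print(strength("a", [1, 2, 3]))   # [('a', 1), ('a', 2), ('a', 3)]
```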
This derivation justifies the use of the terminology'strength' to refer to the functors \[\mathbb{C}(B_{1},...,JX,...,B_{n};TY)\xrightarrow{(-)^{t_{i}}}\mathbb{C}(B_{1 },...,TX,...,B_{n};TY).\] **Definition 3.3**.: (Strong relative pseudomonad) Let \(\mathbb{C}\) and \(\mathbb{D}\) be 2-multicategories and let \(J:\mathbb{D}\to\mathbb{C}\) be a (unary) 2-functor between them. A _strong relative pseudomonad_\((T,i,^{t};\tilde{t},\tilde{t},\theta)\) along \(J\) comprises: * for every object \(X\) in \(\mathbb{D}\) an object \(TX\) in \(\mathbb{C}\) and unit map \(i_{X}:JX\to TX\), * for every \(n\), index \(1\leq i\leq n\), objects \(B_{1},...,B_{i-1},B_{i+1},...,B_{n}\) in \(\mathbb{C}\) and objects \(X,Y\) in \(\mathbb{D}\) a functor \[\mathbb{C}(B_{1},...,B_{i-1},JX,B_{i+1},...,B_{n};TY)\xrightarrow{(-)^{t_{i} }}\mathbb{C}(B_{1},...,B_{i-1},TX,B_{i+1},...,B_{n};TY)\] called the _strength_ (in the \(i\)th argument) and which is pseudonatural in all arguments, along with three natural families of invertible 2-cells: * \(\tilde{t}_{f}:f\to f^{t_{j}}\circ_{j}i\), * \(\tilde{t}_{f,g}:(f^{t_{j}}\circ_{j}g)^{t_{j+k-1}}\to f^{t_{j}}\circ_{j}g^{t_{k}}\), and * \(\theta_{X}:(i_{X})^{t_{1}}\to 1_{TX}\) for \(f:B_{1},...,JX,...,B_{n}\to TY\) and \(g:C_{1},...,JW,...,C_{m}\to TX\), satisfying the coherence conditions (1) and (2) shown below. As a notational shorthand, when a map \(f:B_{1},...,JX,...,B_{n}\to TY\) has only one argument in the domain of the form \(JX\) for some \(X\in\operatorname{ob}\mathbb{D}\), we will denote its strengthening simply as \(f^{t}\), rather than \(f^{t_{i}}\). We will furthermore write \(f^{t}\circ_{t}g\) to denote the composition of \(f^{t}\) with \(g\) in this strengthened argument. In this notation the families of invertible 2-cells above are: \[\tilde{t}_{f} :f\to f^{t}\circ_{t}i\] \[\hat{t}_{f,g} :(f^{t}\circ_{t}g)^{t}\to f^{t}\circ_{t}g^{t}\] \[\theta :i^{t}\to 1\] (We also omit subscripts from unit maps and from \(\theta\) when unambiguous.) With this notation in hand, the two coherence conditions for these 2-cells are: 1. for every \(f:B_{1},...JX...B_{n}\to TY\), \(g:C_{1},...,JW,...C_{m}\to TX\) and \(h:D_{1},...,JV,...,D_{l}\to TW\) the diagram \[\begin{CD}((f^{t}\circ_{t}g)^{t}\circ_{t}h)^{t}@>{\hat{t}_{f^{t}\circ_{tg,h}} }>{}>(f^{t}\circ_{t}g)^{t}\circ_{t}h^{t}\\ @V{(\hat{t}_{f,g}\circ_{t}h)^{t}}V{}V@V{}V{\hat{t}_{f,g}\circ_{t}h^{t}}V\\ (f^{t}\circ_{t}g^{t}\circ_{t}h)^{t}@>{\hat{t}_{f,g^{t}\circ_{t}h}}>{}>f^{t} \circ_{t}(g^{t}\circ_{t}h)^{t}@>{}>{f^{t}\circ_{t}f_{g,h}}(f^{t}\circ_{t}g^{t}) \circ_{t}h^{t}\end{CD}\] commutes, and 2. for every \(f:B_{1},...,JX,...,B_{n}\to TY\) the diagram commutes. **Remark 3.4**.: The stipulation that the maps \[\mathbb{C}(B_{1},...,JX,...,B_{n};TY)\xrightarrow{(-)^{t_{j}}}\mathbb{C}(B_{1},...,TX,...,B_{n};TY)\] be pseudonatural in all arguments asks in particular for invertible 2-cells of the form * \((f\circ_{k}g)^{t}\cong f^{t}\circ_{k}g\) for \(g:C_{1},...,C_{m}\to B_{k}\) (where \(k\neq j\)). Wherever such pseudonaturality isomorphisms arise in diagrams we will leave them anonymous, as they can be inferred from the source and target. **Remark 3.5**.: The data for a strong relative pseudomonad resembles that for a (unary) relative pseudomonad very closely. 
Indeed, restricting \(\mathbb{C}\) and \(\mathbb{D}\) to their 2-categories of unary maps, \((T,i,^{t})\) is exactly a (unary) relative pseudomonad, with \[(-)^{*} :=(-)^{t},\] \[\eta :=\tilde{t},\] \[\mu :=\hat{t},\] \[\theta :=\theta.\] As with relative pseudomonads, we can derive more equalities of 2-cells for a strong relative pseudomonads. The proof of the following Lemma 3.6 is formally identical to the proof of Lemma 2.3. **Lemma 3.6**.: _Let \(T\) be a strong relative pseudomonad along \(J:\mathbb{D}\to\mathbb{C}\). Then the following three diagrams commute:_ 1. _for every_ \(f:B_{1},...,JX,...,B_{n}\to TY\) _and_ \(g:C_{1},...,JW,...,C_{m}\to TX\)_, the diagram_ \[f^{t}\circ_{t}g\xrightarrow{\tilde{t}_{f^{t}\circ_{t}\tilde{g}}}(f^{t}\circ_{ t}g)^{t}\circ_{t}i\] \[\xrightarrow{f^{t}\circ_{t}\tilde{t}_{g}}f^{t}\circ_{t}g^{t}\circ_{t}i\] _commutes._ 2. _for every_ \(f:B_{1},...,JX,...,B_{n}\to TY\)_, the diagram commutes, and_ \[\xrightarrow{(i^{t}\circ f)^{t}}\xrightarrow{\tilde{t}_{i,f}}i^{t}\circ f^{t}\] \[\xrightarrow{\theta\circ f^{t}}f^{t}\] \[\xrightarrow{\theta\circ f^{t}}f^{t}\] _commutes, and_ \[\ _ 3. _for every object_ \(X\in\operatorname{ob}\mathbb{D}\)_, the diagram_ _commutes._ **Example 3.7**.: The presheaf relative pseudomonad from Example 2.4 can be given the structure of a strong relative pseudomonad. Given a multimorphism \(f:B_{1},...,X,...,B_{n}\to\operatorname{Psh}Y\) with \(X,Y\in\operatorname{Cat}\) and \(B_{k}\in\operatorname{CAT}\), its strengthening \(f^{t}\) is defined to be the left Kan extension which also defines the 2-cells \(\tilde{t}_{f}:f\to f^{t}\circ_{t}y\). As when giving \(\operatorname{Psh}\) a relative pseudomonad structure, the 2-cells \(\tilde{t}_{f,g}\), \(\theta\) are defined via the universal property of the left Kan extension. For details and a proof that this indeed endows \(\operatorname{Psh}\) with a strong relative pseudomonad structure, see Proposition 6.1 in the final section. Having generalised Kock's notion of a strong monad, we seek to prove a generalisation of his result that every strong monad is a lax monoidal functor. For this we define a notion of a pseudo-multifunctor on a 2-multicategory. **Definition 3.8**.: (Pseudo-multifunctor) Given multi-2-categories \(\mathbb{C},\mathbb{D}\), a _pseudomultifunctor_\(F:\mathbb{D}\to\mathbb{C}\) consists of: * a function \(\operatorname{ob}\mathbb{D}\xrightarrow{F}\operatorname{ob}\mathbb{C}:X\mapsto FX\), * for each hom-category \(\mathbb{D}(X_{1},...,X_{n};Y)\) in \(\mathbb{D}\) a functor \[\mathbb{D}(X_{1},...,X_{n};Y)\to\mathbb{C}(FX_{1},...,FX_{n};FY):f\mapsto Ff,\] along with * for each \(X\in\operatorname{ob}\mathbb{D}\) an invertible 2-cell \[\tilde{F}_{X}:F1_{X}\implies 1_{FX},\] * for each \(f:X_{1},...,X_{n}\to Y\), \(1\leq i\leq n\) and \(g:W_{1},...,W_{m}\to X_{i}\) an invertible 2-cell \[\hat{F}_{f,g}:F(f\circ_{i}g)\implies Ff\circ_{i}Fg\] satisfying the following three coherence conditions which parallel the unit and associativity diagrams for a lax monoidal functor: (1),(2) two unit axioms: for each \(f:X_{1},...,X_{n}\to Y\) and \(1\leq i\leq n\) the diagrams commute, and 3. 
one associativity axiom: for each \(f:X_{1},...,X_{n}\to Y\), \(1\leq i\leq n\), \(g:W_{1},...,W_{m}\to X_{i}\), \(1\leq j\leq m\) and \(h:V_{1},...,V_{l}\to W_{j}\) the diagram \[F(f\circ_{i}(g\circ_{j}h))\xrightarrow{\hat{F}_{f,g\circ_{j}h}}Ff\circ_{i}F(g \circ_{j}h)\xrightarrow{Ff\circ_{i}\hat{F}_{g,h}}Ff\circ_{i}(Fg\circ_{j}Fh)\] \[\left\|\begin{array}{c}\\ \\ F((f\circ_{i}g)\circ_{i+j-1}h)\xrightarrow{\hat{F}_{f\circ_{i}g,h}}F(f\circ _{i}g)\circ_{i+j-1}Fh\xrightarrow{\hat{F}_{f,g\circ_{i+j-1}Fh}}(Ff\circ_{i}Fg) \circ_{i+j-1}Fh\end{array}\right.\] commutes. If the 2-cells \(\tilde{F}\), \(\hat{F}\) are all identities we call \(F\) a _(strict) multicategorical 2-functor_. Just as the underlying functor of every strong monad is a lax monoidal functor, the underlying pseudofunctor of every strong relative pseudomonad is a pseudo-multifunctor. **Proposition 3.9**.: _Let \(T\) be a strong relative pseudomonad along multicategorical 2-functor \(J:\mathbb{D}\to\mathbb{C}\). Then \(T\) is a pseudo-multifunctor \(T:\mathbb{D}\to\mathbb{C}\)._ Proof.: Suppose \(T\) is a strong relative pseudomonad. As a point of notation, given a map \(f:X_{1},...,X_{n}\to Y\) let us define \[\bar{f}:=i_{Y}\circ JF:JX_{1},...,JX_{n}\to TY.\] Now to show the \(T\) is a pseudo-multifunctor, we begin by defining the action of \(T\) on 1-cells by the functors \[\mathbb{D}(X_{1},...,X_{n};Y)\xrightarrow{(i_{Y}\circ J-)^{t_{1}t_{2}...t_{n }}}\mathbb{C}(TX_{1},...,TX_{n};TY),\] so that for \(f:X_{1},...,X_{n}\to Y\) we have \[Tf:=(i_{Y}\circ Jf)^{t_{1}t_{2}...t_{n}}=\bar{f}^{t_{1},...,t_{n}}:TX_{1},...,TX_{n}\to TY.\] We need to construct 2-cells \(\tilde{T}_{X}:T1_{X}\implies 1_{TX}\) and \(\hat{T}_{f,g}:T(f\circ_{i}g)\implies Tf\circ_{i}Tg\). For the former, we can use the map \[T1_{X}=(i_{X}\circ J1_{X})^{t}=(i_{X})^{t}\xrightarrow{\theta_{X}}1_{TX}\] and for the latter, we employ the composite \[T(f\circ_{i}g) =(i\circ(Jf\circ_{i}Jg))^{t_{1}...t_{n+m-1}}=(\bar{f}\circ_{i}Jg) ^{t_{1}...t_{n+m-1}}\] \[\xrightarrow{\sim}(\bar{f}^{t_{1}...t_{i-1}}\circ_{i}Jg)^{t_{i}...t_{n+m-1}}\] \[\xrightarrow{\bar{f}}(\bar{f}^{t_{1}...t_{i}}\circ_{i}\bar{g})^{ t_{i}...t_{n+m-1}}\] \[\xrightarrow{\hat{t}...\hat{t}}(\bar{f}^{t_{1}...t_{i}}\circ_{i} \bar{g}^{t_{1}...t_{m}})^{t_{i+m}...t_{n+m-1}}\] \[\xrightarrow{\sim}\bar{f}^{t_{1}...t_{n}}\circ_{i}\bar{g}^{t_{1}...t_{m}}=Tf\circ_{i}Tg.\] It remains to show that the three coherence conditions hold. For the first \[T(1_{Y}\circ f)\xrightarrow{\hat{T}_{1,f}}T1_{Y}\circ Tf\] \[\left\|\begin{array}{c}\\ \\ Tf\end{array}\right.\] we rewrite everything in terms of parameterisation and obtain the diagram To show that this commutes, we fill it in with two naturality squares and equalities of 2-cells (3) and (2) from Lemma 3.6. For the second we rewrite everything in terms of parameterisation and obtain the diagram To show that this commutes, we fill it in with naturality squares and the equality of \(2\)-cells (2) from Definition 3.3. Finally, for the third diagram in the interest of space we shall merely note that verification involves, aside from naturality squares, only the equality of \(2\)-cells (1) from Lemma 3.6. Thus, with these three coherence conditions, every strong relative pseudomonad is indeed a pseudo-multifunctor. **Example 3.10**.: Proposition 3.9 will imply that the presheaf relative pseudomonad is a pseudo-multifunctor. 
Using the coend formula for the left Kan extension, we find that for example, given a functor \(F:A\times B\times C\to D\) in Cat, the multicategorical action of \(\operatorname{Psh}\) on \(F\) has the form \[\operatorname{Psh}F:\operatorname{Psh}A\times\operatorname{Psh}B \times\operatorname{Psh}C \to\operatorname{Psh}D\] \[(p,q,r) \mapsto\int^{c}\int^{b}\int^{a}p(a)\times q(b)\times r(c)\times y_{ F(a,b,c)}.\] ## 4. Pseudocommutativity In the classical situation described in [10], a strong monad with left-strength \(s\) and right-strength \(t\) can be given the structure of lax monoidal functor in two ways: \[TX\otimes TY \xrightarrow{t}T(TX\otimes Y)\xrightarrow{Ts}TT(X\otimes Y) \xrightarrow{\mu}T(X\otimes Y)\] \[TX\otimes TY \xrightarrow{s}T(X\otimes TY)\xrightarrow{Tt}TT(X\otimes Y) \xrightarrow{\mu}T(X\otimes Y)\] It is then natural to ask about those strong monads for which these two composites are equal, which Kock called _commutative monads_. Hyland and Power [11] extend this notion to the 2-categorical setting, defining _pseudocommutativity_ by asking only for an invertible 2-cell between the two composites. Analogously, there is some freedom in the pseudo-multifunctorial structure we place on a given strong relative pseudomonad \(T\); we defined the action of \(T\) on morphisms by \[Tf:=\bar{f}^{t_{1}...t_{n}},\] but we could equally well have chosen \[Tf:=\bar{f}^{t_{n}...t_{1}}\] with the strengthenings applied in the reverse order. We define pseudocommutativity in our more general setting to imply that the two choices of definition of \(Tf\) are coherently isomorphic. **Definition 4.1**.: (Pseudocommutative monad) Let \(T\) be a strong relative pseudomonad. We say that \(T\) is _pseudocommutative_ if for every pair of indices \(1\leq j<k\leq n\) and map \[f:B_{1},...,B_{j-1},JX,B_{j+1}...,B_{k-1},JY,B_{k+1},...,B_{n}\to TZ\] we have an invertible 2-cell \[\gamma_{f}:f^{t_{k}t_{j}}\to f^{t_{j}t_{k}}:B_{1},...,TX,...,TY,...,B_{n}\to TZ\] which is pseudonatural in all arguments and which satisfies five coherence conditions (two for \(\tilde{t}\), two for \(\hat{t}\), and a braiding condition). We will extend our notation in the following way. When a map \[f:B_{1},...,JX,...,JY,...,B_{n}\to TZ\] has two explicitly possible strengthenings, let strengthening in the leftmost of these two arguments be denoted by \(f^{s}\) with 2-cells \(\tilde{s}:f\to f^{s}\circ_{s}i\) and \(\hat{s}:(f^{s}\circ_{s}g)^{s}\to f^{s}\circ g^{t}\), and let strengthening in the rightmost of these two arguments be denoted by \(f^{t}\) with 2-cells \(\tilde{t}\), \(\hat{t}\). When \(f\) has three explicitly possible strengthenings we furthermore use \(f^{u}\), etc. The coherence conditions \(\gamma\) must satisfy are as follows: 1. 1. 2. Precomposing \(\gamma_{f}\) in the \(j\)th or \(k\)th argument with a unit map \(i\): the diagrams \(f^{t}\)\(f^{t}\)\(f^{ts}\circ_{s}i\)\(f^{s}\circ_{s}i\)\(f^{s}\circ_{s}i\)\(f^{t}\circ_{s}i\)\(f^{t}\circ_{s}i\)\(f^{t}\circ_{s}i\)\(f^{s}\circ_{t 3. 4. 
Precomposing \(\gamma_{f}\) in the \(j\)th or \(k\)th argument with the strengthening of a map \(g\) in its \(l\)th argument: the diagrams relating \(\gamma\) to \(\hat{s}\) and \(\hat{t}\) commute (the first of these is the square with top edge \((f^{ts}\circ_{s}g)^{s}\xrightarrow{\hat{s}_{f^{t},g}}f^{ts}\circ_{s}g^{t}\) and vertical edges \((\gamma_{f}\circ_{s}g)^{s}\) and \(\gamma_{f}\circ_{s}g^{t}\); both diagrams are omitted here). 5. A braiding coherence condition [diagram omitted]. Verifying that a given strong relative pseudomonad is pseudocommutative directly in this way is challenging; in section 5 we will discuss a property that implies pseudocommutativity and which is much easier to verify. In Kock [10] we have that a strong monad is lax-monoidal as a functor, and even that the monad unit for a strong monad is a monoidal transformation, but that in order for the monad multiplication (and thus the monad as a whole) to be monoidal, the monad must be commutative. In our setting, every strong relative pseudomonad \(T\) has the structure of a pseudo-multifunctor, and we are now interested in the question of when \(T\) further has the structure of a multicategorical _relative pseudomonad_ (defined below); that is, when the pseudomonadic structure of \(T\) is compatible with the ambient multicategorical structure. We will show in this section that every pseudocommutative relative pseudomonad is a multicategorical relative pseudomonad. **Definition 4.5**.: (Multicategorical relative pseudomonad) Let \(\mathbb{C},\mathbb{D}\) be 2-multicategories and let \(T\) be a relative pseudomonad along \(J:\mathbb{D}\to\mathbb{C}\). We say \(T\) is a _multicategorical relative pseudomonad_ if * \(T\) is a pseudo-multifunctor, and * The unit and extension of \(T\) are compatible with the multicategorical structure. For the second bullet point, we explicitly ask that * the monad unit \(i\) is multicategorical: for each \(f:X_{1},...,X_{n}\to Y\) we have an invertible 2-cell \[\bar{\imath}_{f}:i_{Y}\circ Jf\to Tf\circ(i_{X_{1}},...,i_{X_{n}}),\] * the monad extension is multicategorical: each 2-cell \(\alpha:h\circ Jf\to Tf^{\prime}\circ(g_{1},...,g_{n})\) gives rise to a 2-cell \(\alpha^{*}:h^{*}\circ Tf\to Tf^{\prime}\circ(g_{1}^{*},...,g_{n}^{*})\), subject to three coherence conditions: 1. Compatibility with \(\eta\): given a 2-cell \(\alpha:h\circ Jf\to Tf^{\prime}\circ(g_{1},...,g_{n})\), the composite [diagram omitted] is equal to the composite [diagram omitted]; the remaining two conditions, compatibility with \(\mu\) and with \(\theta\), are expressed by analogous pairs of composites [diagrams omitted]. **Remark 4.6**.: In the one-dimensional, monoidal setting and when \(J\) is the identity, this definition reduces to the notion of a monoidal monad. In [10] it is noted that the monad unit of a strong monad is always a monoidal transformation, but the monad multiplication is only a monoidal transformation if the monad is commutative.
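To illustrate the classical distinction recalled in Remark 4.6 with a concrete case outside the relative setting (our own example, not from the text), the following Python sketch computes Kock's two composites for the list monad; they differ, which is exactly why the list monad is strong but not commutative, whereas for the presheaf construction of Example 2.4 the two orders of strengthening agree up to the coherent isomorphisms \(\gamma\).

```python
# Illustrative sketch (list monad): the two composites from Kock's definition of a
# commutative monad, obtained by strengthening the pairing unit in the two possible orders.
def unit(x):
    return [x]

def strengthen(f, arg):
    """Kleisli-extend a binary map f : A x B -> T C in argument `arg` (0 or 1)."""
    def f_t(a, b):
        if arg == 0:
            return [c for x in a for c in f(x, b)]
        else:
            return [c for y in b for c in f(a, y)]
    return f_t

pair_unit = lambda x, y: unit((x, y))

# Strengthen in the right argument first, then the left...
right_then_left = strengthen(strengthen(pair_unit, 1), 0)
# ...versus the left argument first, then the right.
left_then_right = strengthen(strengthen(pair_unit, 0), 1)

xs, ys = [1, 2], ["a", "b"]
print(right_then_left(xs, ys))  # [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
print(left_then_right(xs, ys))  # [(1, 'a'), (2, 'a'), (1, 'b'), (2, 'b')]
# The two composites disagree, so the list monad is strong but not commutative.
```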
We shall see in the following proposition an analogous result: that for every strong relative pseudomonad, the monad unit is multicategorical (we can define the invertible 2-cells \(\bar{\imath}_{f}\)), but in order to make the monad extension multicategorical we require the relative pseudomonad to be pseudocommutative. **Theorem 4.7**.: _Let \(T\) be a strong relative pseudomonad along multicategorical 2-functor \(J:\mathbb{D}\to\mathbb{C}\). Suppose \(T\) is pseudocommutative. Then \(T\) is a multicategorical relative pseudomonad._ Proof.: By Proposition 3.9 we know that \(T\) is a pseudo-multifunctor. We must check that the monad unit and extension are compatible with the multicategorical structure. For the unit, we need to find invertible 2-cells \(\bar{\imath}_{f}\) of shape \[i\circ Jf\to Tf\circ(i,...,i)\] for \(f:X_{1},...,X_{n}\to Y\). Since \(Tf:=(i\circ Jf)^{t_{1}...t_{n}}=\bar{f}^{t_{1}...t_{n}}\), we construct \(\bar{\imath}_{f}\) as the composite \[i\circ Jf=\bar{f} \xrightarrow{\bar{f}}\bar{f}^{t_{1}}\circ(i,1,...,1)\] \[\xrightarrow{\bar{\imath}}\bar{f}^{t_{1}t_{2}}\circ(i,i,1,...,1)\] \[\vdots\] \[\xrightarrow{\bar{\imath}}\bar{f}^{t_{1}t_{2}...t_{n}}\circ(i,i, i,...,i)=Tf\circ(i,...,i).\] Note that we do not need the pseudocommutativity to construct the \(\bar{\imath}_{f}\) 2-cells. The construction of \(\alpha^{*}\) given \(\alpha:h\circ Jf\to Tf^{\prime}\circ(g_{1},...,g_{n})\) is more involved. We require a 2-cell of shape \[h^{*}\circ Tf\to Tf^{\prime}\circ(g_{1}^{*},...,g_{n}^{*}).\] We begin with the composite \[h^{*}\circ Tf:=h^{t}\circ\bar{f}^{t_{1}\dots t_{n}} \xrightarrow{\hat{t}^{-1}}(h^{t}\circ\bar{f}^{t_{1}\dots t_{n-1}}) ^{t_{n}}\] \[\xrightarrow{\hat{t}^{-1}}(h^{t}\circ\bar{f}^{t_{1}\dots t_{n-2}}) ^{t_{n-1}t_{n}}\] \[\vdots\] \[\xrightarrow{\hat{t}^{-1}}(h^{t}\circ\bar{f})^{t_{1}\dots t_{n}}\] \[\xrightarrow{\hat{t}^{-1}}(h\circ Jf)^{t_{1}\dots t_{n}},\] at which point we can compose with \(\alpha^{t_{1}\dots t_{n}}\) to arrive at \[(Tf^{\prime}\circ(g_{1},...,g_{n}))^{t_{1}\dots t_{n}}:=(\bar{f}^{t_{1}\dots t _{n}}\circ(g_{1},...,g_{n}))^{t_{1}\dots t_{n}}.\] From here we start needing the pseudocommutativity of \(T\). Let \(\sigma\in S_{n}\) be the cyclic permutation \(1\to 2\to...\to n\to 1\). Now we compose as follows: \[(\bar{f}^{t_{1}\dots t_{n}}\circ(g_{1},...,g_{n}))^{t_{1}\dots t_ {n}}\] \[\xrightarrow{\gamma_{\sigma}}(\bar{f}^{t_{2}\dots t_{1}}\circ(g _{1},...,g_{n}))^{t_{1}\dots t_{n}}\xrightarrow{\hat{t}}(\bar{f}^{t_{2}\dots t _{1}}\circ(g_{1}^{t},g_{2},...,g_{n}))^{t_{2}\dots t_{n}}\] \[\xrightarrow{\gamma_{\sigma}}(\bar{f}^{t_{3}\dots t_{2}}\circ(g _{1}^{t},g_{2},...,g_{n}))^{t_{2}\dots t_{n}}\xrightarrow{\hat{t}}(\bar{f}^{ \prime t_{3}\dots t_{2}}\circ(g_{1}^{t},g_{2}^{t},g_{3},...,g_{n}))^{t_{3} \dots t_{n}}\] \[\vdots\] \[\xrightarrow{\gamma_{\sigma}}(\bar{f}^{t_{1}\dots t_{n}}\circ(g _{1}^{t},...,g_{n-1}^{t},g_{t}))^{t_{n}}\xrightarrow{\hat{t}}\bar{f}^{\prime t _{1}\dots t_{n}}\circ(g_{1}^{t},...,,g_{n}^{t})\] \[=Tf^{\prime}\circ(g_{1}^{*},...,g_{n}^{*}).\] For example, the full composite in the case where \(f\) is a binary map is given by the diagram below: It now remains to verify that the three coherence conditions for a multicategorical relative pseudomonad. Here we shall only do this for binary maps, and we shall abbreviate the diagram chasing. 
For the first condition, we begin with the composite 2-cell \[h\circ Jf\xrightarrow{\eta}h^{*}\circ i\circ Jf\xrightarrow{\bar{i}}h^{*} \circ Tf\circ(i,i)\xrightarrow{\alpha^{*}}Tf^{\prime}\circ(g_{1}^{*},g_{2}^{*}) \circ(i,i)\] and must show that it is equal to the composite \[h\circ Jf\xrightarrow{\alpha}Tf^{\prime}\circ(g_{1},g_{2})\xrightarrow{\eta, \eta}Tf^{\prime}\circ(g_{1}^{*},g_{2}^{*})\circ(i,i).\] Rewriting \(\bar{\imath}_{f}\) and \(\alpha^{*}\) in terms of our constructions, we must show that the diagram commutes. We can fill in this diagram, aside from naturality squares, with four instances of equality (1) from Lemma 3.6. So indeed the first coherence condition holds. For the second coherence condition, we begin with the composite 2-cell and must show that it is equal to the composite \[(h^{\prime*}\circ h)^{*}\circ Tf\xrightarrow{(\beta^{*}\alpha)^{*}}Tf^{\prime \prime}\circ((g_{1}^{\prime*}g_{1})^{*},(g_{2}^{\prime*}g_{2})^{*}) \xrightarrow{\mu,\mu}Tf^{\prime\prime}\circ(g_{1}^{\prime*},g_{2}^{\prime*}) \circ(g_{1}^{*},g_{2}^{*}).\] Unwrapping our definitions, we need to show that the diagram commutes. Filling this diagram is involved, but aside from naturality squares we require only * five instances of the pentagon axiom (1) from Definition 3.3, and * axioms (3) and (4) from Definition 4.1. Thus also the second coherence condition holds. For the third and final coherence condition, we begin with the composite 2-cell and must show that it is equal to \[i^{*}\circ Tf\xrightarrow{\theta}Tf.\] Rewriting everything in our terms shows that we must show the diagram commutes. Filling the diagram requires, aside from naturality squares: * instances of equalities (2) and (3) from Lemma 3.6, * two uses of axiom (2) from Definition 3.3, and * axioms (1) and (2) from Definition 4.1. Hence the final coherence condition is satisfied, and thus we have shown that every pseudocommutative relative pseudomonad is a multicategorical relative pseudomonad. As the above proof demonstrates, working directly with pseudocommutativity and multicategoricality can be tedious. In the next section we will examine a condition on a relative pseudomonad which both implies pseudocommutativity and which is much easier to verify, being characterised by a universal property. ## 5. Lax idempotency We will now consider a special class of relative pseudomonads. Defined in [11], the _lax-idempotent_ relative pseudomonad generalises the notion of a lax-idempotent or Kock-Zoberlein 2-monad, discussed extensively in [13]. The aim of this section is to generalise the result of Lopez Franco in [10] that every lax-idempotent 2-monad is pseudocommutative. First, we recall the definition of lax-idempotent relative pseudomonad from [11]. **Definition 5.1**.: (Lax-idempotent relative pseudomonad) Let \(T\) be a relative pseudomonad along \(J:\mathbb{D}\to\mathbb{C}\). We say that \(T\) is a _lax-idempotent relative pseudomonad_ if'monad structure is left adjoint to unit', which is to say that we have an adjunction for all objects \(X,Y\) of \(\mathbb{D}\), whose unit \(-\implies(-)^{*}i\) has components given by the \(\eta_{f}:f\to f^{*}i\) from the pseudomonadic structure (note in particular that the unit is thus invertible). **Remark 5.2**.: The definition of lax idempotency is given equivalently in [11] in terms of Kan extensions: \(T\) is lax-idempotent if for all maps \(f:JX\to TY\) the diagram exhibits \(f^{*}\) as the left Kan extension of \(f\) along \(i\). 
This form of the definition makes it immediate from the construction of Psh as a relative pseudomonad that Psh is lax-idempotent. We turn to showing that every lax-idempotent relative pseudomonad is pseudo-commutative. Just as in Section 3 we defined the notion of strong relative pseudomonad for the multicategorical setting, we will define the notion of _lax-idempotent strong relative pseudomonad_ as follows: **Definition 5.3**.: (Lax-idempotent strong relative pseudomonad) Let \(J:\mathbb{D}\to\mathbb{C}\) be a pseudo-multifunctor and let \(T\) be a strong relative pseudomonad along \(J\). We say \(T\) is a _lax-idempotent strong relative pseudomonad_ if the strength is left adjoint to precomposition with the unit. That is, we have an adjunction for every \(1\leq j\leq n\) and objects \(B_{1},...,B_{j-1},JX,B_{j+1},...,B_{n};TY\) whose unit \(-\implies(-)^{t_{j}}\circ_{j}i\) has components \[\tilde{t}_{f}:f\to f^{t_{j}}\circ_{j}i_{X}\] obtained from the strong structure (again the unit is invertible). As in Remark 5.2 above, we can equivalently state this condition in terms of left Kan extensions: \(T\) is lax-idempotent strong if for every map \(f:B_{1},...,JX,...,B_{n}\to TY\) the diagram exhibits \(f^{t_{j}}\) as the left Kan extension of \(f\) along \(1,...,i,...,1\). As a point of notation, we will use Greek letters to denote the _counit_ of the lax idempotency adjunction; where the strengthening map is called \((-)^{t}\) and the unit \(\tilde{t}\), the counit will be called \[\tau_{f}:(f\circ_{t}i)^{t}\to f,\] and where the strengthening is called \((-)^{s}\) and the unit \(\tilde{s}\), the counit shall be called \[\sigma_{f}:(f\circ_{s}i)^{s}\to f\] (and similarly for \((-)^{u}\) etc.). Note that there is much less data to check in the course of showing that a relative pseudomonad is lax-idempotent compared with showing that it is pseudocommutative. The following result generalising [11] therefore gives us a shortcut for showing relative pseudomonads like Psh are pseudocommutative (and hence by Theorem 4.7 a multicategorical relative pseudomonad). **Theorem 5.4**.: _Let \(T:\mathbb{D}\to\mathbb{C}\) be a lax-idempotent strong relative pseudomonad. Then \(T\) is pseudocommutative, with a pseudocommutativity whose components \(\gamma_{g}:g^{ts}\to g^{st}\) are given by the composite_ \[g^{ts}\xrightarrow{(\tilde{s}_{g})^{ts}}(g^{s}\circ_{s}i)^{ts}\xrightarrow{ \sim}(g^{st}\circ_{s}i)^{s}\xrightarrow{\sigma_{g^{st}}}g^{st}.\] Proof.: To begin, we first show that that putative \(\gamma_{g}\) is invertible. We will show that the composite \[g^{st}\xrightarrow{(\tilde{t}_{g})^{st}}(g^{t}\circ_{t}i)^{st}\xrightarrow{ \sim}(g^{ts}\circ_{t}i)^{t}\xrightarrow{\tau_{g^{ts}}}g^{ts}\] is its inverse. We have the commuting diagram whose clockwise composite is the composite \((\gamma_{g})^{-1}\circ\gamma_{g}\), entirely composed of naturality squares. Then by the following diagram composed of a naturality square and two triangle identities, the anticlockwise composite of the first diagram is equal to the identity on \(g^{ts}\), as required. The same argument (swapping the roles of \(s\) and \(t\)) demonstrates that the other composite \(\gamma_{g}\circ(\gamma_{g})^{-1}\) is also the identity, and so our \(\gamma_{g}\) is indeed invertible. We now must show that our \(\gamma_{g}\) satisfies the coherence conditions for a pseudocommutativity. 
For the unit condition \[g^{s}\xrightarrow{(\tilde{t}_{g})^{s}}(g^{t}\circ_{t}i)^{s}\xrightarrow{ \sim}f^{ts}\circ_{t}i\] we write out \(\gamma_{g}\circ_{t}i\) in terms of our composite and construct the commuting diagram comprising five naturality squares. Then the anticlockwise composite is, by the following commuting diagram of a naturality square and a triangle identity, equal to \(\tilde{t}_{g^{s}}\), as required. The other unit condition is shown by the same argument, swapping the roles of \(s\) and \(t\). Now, for the strengthening condition we can write out the anticlockwise composite in terms of our \(\gamma\) and construct a large commuting diagram filled in entirely with naturality squares and one triangle identity. The other strengthening condition is shown by the same argument, swapping the roles of \(s\) and \(t\). Finally, for the braiding coherence condition after writing each composite in terms of our \(\gamma\) we obtain a large diagram that may be filled in entirely with naturality squares. So all five coherence conditions are satisfied and hence indeed our \(\gamma\) is a pseudocommutativity for \(T\) In summary, the previous sections have proved the following implications for \(T\) a relative pseudomonad along \(J:\mathbb{D}\to\mathbb{C}\) between 2-multicategories: * Every strong relative pseudomonad \(T\) is a pseudo-multifunctor (Proposition 3.9). * Every pseudocommutative relative pseudomonad \(T\) is a multicategorical relative pseudomonad (Theorem 4.7). * Every lax-idempotent strong relative pseudomonad \(T\) is pseudocommutative (Theorem 5.4). ## 6. The presheaf relative pseudomonad We apply our results to the presheaf construction. As shown in [1], the presheaf construction \(\operatorname{Psh}:\operatorname{Cat}\to\operatorname{CAT}:X\mapsto \operatorname{Psh}X:=[X^{op},\operatorname{Set}]\) can be given the structure of a relative pseudomonad, where the units are given by the Yoneda embedding \(y_{X}:X\to\operatorname{Psh}X\) and the extension of a functor \(f:X\to\operatorname{Psh}Y\) for small categories \(X,Y\) is given by the left Kan extension along the Yoneda embedding, and this diagram also defines the map \(\eta_{f}:f\to f^{*}i\). In order to make use of the our results, we need to further show that the presheaf relative pseudomonad is strong. **Proposition 6.1**.: _The presheaf relative pseudomonad \(\operatorname{Psh}\) along the inclusion \(J:\operatorname{Cat}\to\operatorname{CAT}\) is strong, with the strengthening of a functor_ \[f:B_{1},...,B_{j-1},JX,B_{j+1},...,B_{n}\to\operatorname{Psh}Y\] _defined as the left Kan extension_ _along \(1,...,y,...,1\), and the 2-cell in the above diagram defines the map \(\tilde{t}_{f}\)._ Proof.: We begin by constructing the rest of the data for a strong relative pseudomonad; namely, the invertible families of 2-cells \[\hat{t}_{f,g}:(f^{t}\circ_{t}g)^{t}\to f^{t}\circ g^{t},\ \theta:i^{t}\to 1.\] Using the universal property of the left Kan extension, we define \(\hat{t}_{f,g}\) and \(\theta\) to be the unique 2-cells such that commute, respectively. It remains to check the two coherence conditions of Definition 3.3. For the first: by the universal property of the left Kan extension it suffices to show that the diagram commutes. Rewriting terms we obtain the diagram which we can fill in with two naturality squares. For the second: again by the universal property of the left Kan extension we can equivalently show the diagram commutes. Rewriting terms we obtain which immediately commutes. 
Hence indeed \(\mathrm{Psh}\) is as constructed a strong relative pseudomonad. We can now apply the results of this paper to the presheaf relative pseudomonad. **Theorem 6.2**.: _The presheaf relative pseudomonad is:_ 1. _a lax-idempotent strong relative pseudomonad,_ 2. _a pseudocommutative relative pseudomonad, and_ 3. _a multicategorical relative pseudomonad._ Proof.: By Theorem 5.4 we know \((1)\implies(2)\), and by Theorem 4.7 we know \((2)\implies(3)\). So it suffices to check that \(\mathrm{Psh}\) is lax-idempotent strong. By Proposition 6.1 \(\mathrm{Psh}\) is strong, and we have diagrams exhibiting \(f^{t}\) as the left Kan extension of \(f\) along \(1,...,y,...,1\). But this means precisely that we have an adjunction \[(-)^{t}\dashv-\circ_{t}y\] whose unit is \(\tilde{t}\), as required. So indeed \(\mathrm{Psh}\) is lax-idempotent strong, and hence also pseudocommutative and a multicategorical relative pseudomonad. ### Acknowledgements The author gives thanks for the support of the Engineering and Physical Sciences Research Council, which has funded the author's position as a postgraduate researcher at the University of Leeds. Personal thanks are given to Nicola Gambino for regular invaluable discussions, as well as to Nathanael Arkor for useful conversations.
2308.05550
Cross-Domain Product Representation Learning for Rich-Content E-Commerce
The proliferation of short video and live-streaming platforms has revolutionized how consumers engage in online shopping. Instead of browsing product pages, consumers are now turning to rich-content e-commerce, where they can purchase products through dynamic and interactive media like short videos and live streams. This emerging form of online shopping has introduced technical challenges, as products may be presented differently across various media domains. Therefore, a unified product representation is essential for achieving cross-domain product recognition to ensure an optimal user search experience and effective product recommendations. Despite the urgent industrial need for a unified cross-domain product representation, previous studies have predominantly focused only on product pages without taking into account short videos and live streams. To fill the gap in the rich-content e-commerce area, in this paper, we introduce a large-scale cRoss-dOmain Product Ecognition dataset, called ROPE. ROPE covers a wide range of product categories and contains over 180,000 products, corresponding to millions of short videos and live streams. It is the first dataset to cover product pages, short videos, and live streams simultaneously, providing the basis for establishing a unified product representation across different media domains. Furthermore, we propose a Cross-dOmain Product rEpresentation framework, namely COPE, which unifies product representations in different domains through multimodal learning including text and vision. Extensive experiments on downstream tasks demonstrate the effectiveness of COPE in learning a joint feature space for all product domains.
Xuehan Bai, Yan Li, Yanhua Cheng, Wenjie Yang, Quan Chen, Han Li
2023-08-10T13:06:05Z
http://arxiv.org/abs/2308.05550v1
# Cross-Domain Product Representation Learning for Rich-Content E-Commerce ###### Abstract The proliferation of short video and live-streaming platforms has revolutionized how consumers engage in online shopping. Instead of browsing product pages, consumers are now turning to rich-content e-commerce, where they can purchase products through dynamic and interactive media like short videos and live streams. This emerging form of online shopping has introduced technical challenges, as products may be presented differently across various media domains. Therefore, a unified product representation is essential for achieving cross-domain product recognition to ensure an optimal user search experience and effective product recommendations. Despite the urgent industrial need for a unified cross-domain product representation, previous studies have predominantly focused only on product pages without taking into account short videos and live streams. To fill the gap in the rich-content e-commerce area, in this paper, we introduce a large-scale **c**Ross-**d**O**main **P**roduct **r**E**cognition dataset, called ROPE. ROPE covers a wide range of product categories and contains over 180,000 products, corresponding to millions of short videos and live streams. It is the first dataset to cover product pages, short videos, and live streams simultaneously, providing the basis for establishing a unified product representation across different media domains. Furthermore, we propose a **c** Cross-**d**O**main **P**roduct **r**E**presentation framework, namely COPE, which unifies product representations in different domains through multimodal learning including text and vision. Extensive experiments on downstream tasks demonstrate the effectiveness of COPE in learning a joint feature space for all product domains. ## 1 Introduction Recently, with the vicissitude in time spent by customers on entertainment media, the way consumers shop online has transformed significantly, and _rich-content e-commerce_ is becoming increasingly popular. In the rich-content e-commerce area, the products are sold not only with traditional product pages but also with dynamic and interactive media formats, _i.e._, short videos and live streams. As a result, consumers are increasingly relying on these formats to make informed purchase decisions. This shift has facilitated a more engaging shopping experience, bridging the gap between consumers and sellers while presenting new opportunities for platforms to capitalize on. Despite the advantages of rich-content e-commerce, it presents several technical challenges. One of the most significant challenges is the inconsistency in product presentation across different media domains. For instance, a product may appear entirely different in a live stream than on Figure 1: Illustrate the importance of cross-domain product representation for rich-content e-commerce. There are two solid demands for such new e-commerce: 1) The platform needs to provide accurate product results of product page, short video, and live streaming corresponding to user’s query; 2) The platform is able to recommend similar products of interest according to user’s behavior history. Both of the two tasks highly depend on a high-performance cross-domain product representation. Examples are from the popular rich-content e-commerce platforms, including TikTok, Kwai and Taobao. a traditional product page. 
Establishing a unified product representation across different domains is curial and desperately needed in industrial scenarios to address the inconsistency problem. As shown in Figure 1, when users search for a particular product, the unified product representation ensures an enjoyable search experience that the returned product pages, short videos, and live streams precisely describe the same product. When the platform recommends products for users, the unified product representations are beneficial to exploiting users' consuming behaviors in different media for comprehensive product recommendations. In spite of the urgent industrial need for a unified cross-domain product representation, prior efforts have concentrated solely on the product page domain. The most common way to learn the product representations is to train a product classification model with product images and titles [12, 26, 14, 27]. However, such representations are far from acceptable in rich-content e-commerce. Specifically, the pictures displayed on the product pages are generally well-shot by professionals, while in short videos and live streaming, the posture of the products and the positions they occupy in the scene often change a lot. Moreover, in live streams and short videos, it is not always guaranteed that products are visible at every moment. Short videos may be mixed with the story plot, while live streams may contain chats between the sellers and their audiences. These contents are generally irrelevant to the products. To bridge this gap and push forward the related research, we collect a large amount of real data from online shopping platforms and present the first large-scale c**R**oss-d**O**main **P**roduct r**E**cognition dataset, ROPE. Our dataset contains 3,056,624 product pages, 5,867,526 short videos, and 3,495,097 live streams of 189,958 different products. It covers all product categories of online shopping scenarios. To the best of our knowledge, ROPE is the rich-content e-commerce dataset, including product pages, short videos, and live streams. We hope that the publication of ROPE will attract more researchers to the field of content commerce and drive the development of related technologies. In addition to the ROPE dataset, we propose a **C**ross-**d**O**main **P**roduct r**E**presentation baseline, COPE, that maps product pages, short videos, and live streams into the same feature space to build a unified product representation. Based on the ROPE dataset, we evaluate the COPE model on the cross-domain retrieval and few-shot classification tasks. The experimental results show significant improvement over the existing state-of-the-arts. In summary, our contributions are as follows: 1) As far as we know, our work is the first exploration that tries to build a unified product representation across the product pages, short videos, and live streams to meet the urgent industrial need of the emerging rich-content e-commerce. 2) We collect realistic data from online e-commerce platforms and build a large-scale c**R**oss-d**O**main **P**roduct **r**E**cognition dataset, ROPE. It contains 3,056,624 product pages, 5,867,526 short videos, and 3,495,097 live streams belonging to 189,958 different products. The included product categories cover the full spectrum of online shopping scenarios. 3) A **C**ross-**d**O**main **P**roduct **r**E**presentation model, COPE, is proposed to learn the cross-domain product representations. 
The experimental results prove the superiority of the COPE model to the existing methods. ## 2 Related Work ### E-Commerce Datasets A large number of e-commerce datasets have been proposed to advance the technical developments in the area [2, 25, 32, 8, 11, 33]. The earlier datasets traditionally have limited size. Corbiere _et al_. introduce the Dress Retrieval [6] dataset in 2017, which has 20,000 pairs of the product image and text pairs. Rostamzadeh _et al_. propose the FashionGen [25] dataset, which includes 293,000 samples, covering only 48 product categories. In recent years, large-scale product recognition datasets have been introduced with the development of deep-learning-based methods. Product 1M [33] increases the scale of the training samples to a million level, but all samples come from 48 cosmetic brands. The coverage of the products is quite limited. MEP-3M [2] dataset includes more than three million samples, and each sample consists of the product image, product title, and hierarchical classification labels. However, all these datasets focus solely on the product page domain. In the experiment section, we will demonstrate that the representations learned on the product page domain are insufficient to handle the cross-domain product recognition task. The most related datasets to our ROPE dataset are M5Product [8] and MovingFashion [11]. M5Product comprises six million samples, and for each sample, it provides product images, product titles, category labels, attribute tables, assigned advertising videos, and audios extracted from videos. However, the provided videos in M5Product are quite different from the live streams introduced in our ROPE dataset. The videos in M5Product all come from the product page and are usually closely related to the advertised products, with products displayed in the center and described throughout. By contrast, there are many chat contents between the sellers and audiences in live streams of ROPE, which are unrelated to the products. Furthermore, the poses and locations of the products vary significantly in live streams, making the ROPE dataset more challenging for product recognition. MovingFashion [11] also focuses on aligning videos and product pages. It only comprises 15,000 videos, covering 13 product categories. The scale of the MovingFahsion is much smaller than our ROPE dataset, which covers more than 1,000 product categories and provides million-level samples of the product page, short video, and live streaming domains. ### Cross-Domain Retrieval Methods Existing cross-domain retrieval methods typically learn unified representations between the visual and text domains. Some of the most popular models following the single-stream architecture, such as VL-bert [28], ImageNet [23], Videobert [29], Visualbert [13], and Uniter [4]. These models concatenate visual and text features and then use a binary classifier to predict whether the image-text pairs match. Although these methods usually perform better, they suffer from inferior inference efficiency. The ViLBERT [18], LXMERT [30], CLIP [24], and CoOp [34] utilize the two-stream architecture. In this approach, the visual and text features are extracted using independent encoders, and the visual-text similarity is efficiently calculated using the dot product operation. The proposed COPE model learns representations of different domains using contrastive loss to ensure efficient cross-domain retrieval. 
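To make the efficiency contrast above concrete, here is a small sketch with randomly generated stand-in embeddings (nothing here reproduces the cited models): a two-stream design lets gallery embeddings be precomputed once and scored against any query with a single matrix product, whereas a single-stream design would have to re-encode every query-candidate pair.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stream_encode(items, dim=64):
    """Stand-in for an independent (two-stream) encoder: one embedding per item."""
    embs = rng.normal(size=(len(items), dim))
    return embs / np.linalg.norm(embs, axis=1, keepdims=True)  # L2-normalize

# Gallery embeddings can be computed once, offline.
gallery = [f"candidate_{i}" for i in range(10_000)]
gallery_embs = two_stream_encode(gallery)

# At query time, scoring the whole gallery is a single matrix multiplication.
query_embs = two_stream_encode(["query"])
scores = query_embs @ gallery_embs.T          # cosine similarities, shape (1, 10000)
top5 = np.argsort(-scores[0])[:5]
print([gallery[i] for i in top5])

# A single-stream model would instead run a joint encoder over every (query, candidate)
# pair to obtain a matching score, which is far more costly at this gallery size.
```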
## 3 ROPE Dataset ### Data Collection and Cleaning We collect data from the online e-commerce platform over 1,300 product categories. Three steps are taken to construct the ROPE dataset. Firstly, we collect a large amount of unsupervised multi-modal samples from the product page domain, short video domain, and live streaming domain. For the product page domain, we offer the product images and titles; for the short video and live streaming domains, the extracted frames and ASR (automatic speed recognition) texts are provided. The resulting dataset includes over 200 million samples and is defined as \(\mathcal{D}_{\textit{mw}}\). Secondly, a small portion of \(\mathcal{D}_{\textit{mw}}\) (0.1%, 200K data points) is sampled and defined as \(\mathcal{D}_{\textit{sample}}\). For each sample in \(\mathcal{D}_{\textit{sample}}\), we ask the human annotators to find other samples from \(\mathcal{D}_{\textit{mw}}\) that shares the _same_ product. To reduce the annotation costs, the extracted features with the public Chinese CLIP model [31]1 are utilized to find relevant samples for further human checkout. The annotated samples are used to train a baseline COPE model. Footnote 1: For short videos and live streamings, the average of frame-level image features are adopted as the visual representations. The visual and text features are concatenated as the final multi-model representations for retrieving relevant samples. Thirdly, for remaining unannotated samples in \(\mathcal{D}_{\textit{mw}}\), the baseline COPE model is employed to filter out relevant samples, and only the samples whose matching scores are higher than 0.7 are kept. Afterward, the product pages, short videos, and live streams belonging to the same product are aggregated. We only retain the completed paired samples, including data from all three domains. ### Datasets Statistics The final ROPE dataset comprises 3,056,624 product pages, 5,867,526 short videos, and 3,495,097 live streams associated with 189,958 products. Table 1 compares the ROPE and previous product datasets. We divided the ROPE dataset into train and test sets. The train set has 187,431 products with 3,025,236 product pages, 5,837,716 short videos, and 3,464,116 live streams. On average, each product has 16 product pages, 31 short videos, and 18 live streams. The distribution of training samples across product categories is illustrated in Figure 2, showing a long-tailed pattern that reflects online shopping interests. The top five categories are Carpet, Calligraphy/Painting, Quilt Cover, Emerald, and Sheet. The test set contains 2,527 products, \begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & Samples & Categories & Products & Domains \\ \hline FashionGen [25] & 293,008 & 48 & 78850 & product page \\ Dress Retrieval [6] & 20,200 & 50 & 20,200 & product page \\ Product1M [33] & 1,182,083 & 458 & 92,200 & product page \\ MEP-3M [2] & 3,012,959 & 599 & - & product page \\ M5Product [8] & 6,313,067 & 6,232 & - & product page \\ MovingFashion [11] & 15,000 & - & - & product page/short video \\ ROPE(ours) & 12,027,068 & 1,396 & 187,431 & product page/short video/live streaming \\ \hline \hline \end{tabular} \end{table} Table 1: Comparisons with other product datasets. “-” means not mentioned. Figure 2: The distribution of training samples over product categories. It is biased and long-tailed. with 31,388 product pages, 29,810 short videos, and 30,981 live streams. 
The average duration of short videos and live streams is 31.78 seconds and 129.09 seconds, respectively, and each product has an average of 12 product pages, 11 short videos, and 12 live streams. The product categories in the test set are different from those in the train set to ensure an accurate evaluation, and human annotators have thoroughly reviewed the test set. ### Evaluation Tasks We propose two evaluation tasks based on the ROPE dataset to verify the unified cross-domain product representations. The first is the cross-domain product retrieval task, which aims to find matched samples of the identical product across two domains. There are six variations of the task: \(\mathrm{P}\rightarrow\mathrm{V}\), \(\mathrm{V}\rightarrow\mathrm{P}\), \(\mathrm{P}\rightarrow\mathrm{L}\), \(\mathrm{L}\rightarrow\mathrm{P}\), \(\mathrm{V}\rightarrow\mathrm{L}\), and \(\mathrm{L}\rightarrow\mathrm{V}\), where \(\mathrm{P}\), \(\mathrm{V}\), and \(\mathrm{L}\) indicate the product page domain, short video domain, and live streaming domain, respectively. The second one is the cross-domain few-shot (\(k\)=1) classification task. Similar to the retrieval task, it also has six variations. Taking the \(\mathrm{P}\rightarrow\mathrm{V}\) variation as an example, we elaborate on the detailed evaluation processes for the two tasks. For the retrieval task, we collect all the short videos in the test set as the gallery set \(G_{\mathrm{V}}\) and regard all the product pages in the test set as the query set \(Q_{\mathrm{P}}\). For each query in \(Q_{\mathrm{P}}\), the goal is to find a matched short video from \(G_{\mathrm{V}}\) whose product label is the same as that of the query product page. For the few-shot (\(k\)=1) classification task, we randomly sample one short video from each product in the test set. The sampled short videos are considered anchors. Then we try to classify all the product pages in the test set by finding the nearest short video anchor. ## 4 Method The overall framework of the proposed COPE model is illustrated in Figure 3. It comprises the visual encoder, the text encoder, the fusion encoder, and the domain projection layers. The visual and text encoders are shared between the three domains, and the parameters of the domain projection layers in each domain are not shared. ### Architectural Design As stated in Section 3.2, we provide training samples with multiple modalities for each domain. Specifically, we offer product titles and images for the product page domain, while for the short video and live streaming domains, we provide extracted frames and ASR (automatic speech recognition) texts. The COPE model is designed with a two-stream pipeline to handle both visual and textual modalities. At the bottom of the model, we utilize a shared text encoder and visual encoder to extract representations for raw texts and images/frames for each domain. Figure 3: The overall framework of the proposed COPE model. The text encoder and visual encoder are utilized to extract features from the single modality, and the fusion encoder is adopted to aggregate the two features. To model the temporal information in videos and live streams, we insert the cross-frame communication transformer into each block of the visual encoder. The multi-frame integration transformer is placed at the top of the visual encoder to summarize the whole video’s representation. These extracted features are fed into three domain-specific projection layers to obtain domain-specific representations.
Additionally, we employ a fusion encoder module, followed by a projection layer, to aggregate visual and text features. The parameters of the fusion encoder are shared across domains, while the projection layers are domain-specific. It is important to note that we do not utilize the ASR texts, and we remove text-modal related modules for the short video and live streaming domains in our initial version of COPE. The excessive noise information in raw ASR texts can negatively impact the final presentations for videos and live streams. In our future work, we will explore possible approaches to utilize the ASR texts by extracting product-related keywords from the raw texts. The visual encoder follows the same architecture as in [21], which consists of \(N\) cross-frame communication transformer (CCT) modules and a multi-frame integration transformer (MIT) module. The CCT module is a revised ViT [9] block by inserting the temporal encoder to enable temporal information exchangeability. The MIT module is placed at the top of \(N\) stack CCT modules to integrate the sequence of frame-level features into a unified video representation. Given an input video \(\mathbf{V}\in\mathbb{R}^{T*H*W*3}\) (the product image can be regarded as a video with only one frame), where \(T\) denotes the number of frames. \(H\) and \(W\) indicate the spatial resolution of the video; we split the \(t\)-th frame into \(M\) non-overlapping patches, \(\mathbf{X}_{\textit{vis}}^{t}\in\mathbb{R}^{M*3}\). The learnable class token is inserted at the beginning of the patch sequence, and the spatial position encoding is added to the patch sequence. Formally, \[\mathbf{z}_{t}^{(0)}=[e_{\textit{vis}}^{cls};\ \mathbf{X}_{\textit{vis}}]+e^{ \textit{spa}} \tag{1}\] Then we feed \(\mathbf{z}_{t}^{(0)}\) into \(N\) CCT modules to obtain the frame-level representations: \[\begin{split}\mathbf{z}_{t}^{(n)}&=\mathrm{CCT}^{(n )}(\mathbf{z}_{t}^{(n-I)}),\ n=1,...,N\\ &=[h_{t,\textit{cls}}^{(n),\textit{vis}},\ h_{t,I}^{(n),\textit{ vis}},\ h_{t,2}^{(n),\textit{vis}},\,...,\ h_{t,M}^{(n),\textit{vis}}]\end{split} \tag{2}\] where \(n\) denotes the CCT module index. We take the final output of the class token at the \(N\)-th CCT module, \(h_{t,\textit{cls}}^{(N),\textit{vis}}\), to represent the \(t\)-th frame. Then the global representation of the video is obtained by aggregating frame-level features with the MIT module. Formally, \[Z_{\textit{vis}}=\mathrm{AvgPool}(\mathrm{MIT}([h_{\textit{1},\textit{cls}}^{ (N),\textit{vis}},...,h_{\textit{T,cls}}^{(N),\textit{vis}}]+e^{\textit{temp }})) \tag{3}\] where \(\mathrm{AvgPool}\) and \(e^{\textit{temp}}\) denote the average pooling operator and temporal position encoding, respectively. \(Z_{\textit{vis}}\in\mathbb{R}^{d}\) is utilized as the visual representation for the input product image or videos. The text encoder is a three-layer RoBERTa [15, 7] model. The input raw texts are firstly tokenized and defined as \(\mathbf{X}_{\textit{txt}}\in\mathbb{R}^{L}\) where \(L\) indicates the length of the token sequence. Then the class token is inserted at the beginning of the sequence, and the position embeddings are added to retrain positional information. The final obtained text sequence is fed into the text encoder to extract text representations. 
Formally, \[\begin{split}\mathbf{H}_{\textit{txt}}&=\mathrm{ RoBERTa}([e^{cls};\ \mathbf{X}_{\textit{txt}}]+e^{\textit{pos}})\\ &=[h_{\textit{cls}}^{\textit{txt}},\ h_{\textit{1}}^{\textit{txt} },\ h_{\textit{2}}^{\textit{txt}},\...,\ h_{L}^{\textit{txt}}]\end{split} \tag{4}\] where \(e^{\textit{cls}}\) and \(e^{\textit{pos}}\) denote the input class token embedding and position embeddings, respectively. \(h_{\textit{cls}}^{\textit{txt}}\in\mathbb{R}^{d}\) indicates the extracted feature of the class token. We utilize \(h_{\textit{cls}}^{\textit{txt}}\) as the text representation for input raw texts. The visual representation \(Z_{\textit{vis}}\) and text representation \(h_{\textit{cls}}^{\textit{txt}}\) of the three domains are extracted with the shared visual and text encoders, even though the samples in different domains vary significantly. Such a scheme is expected to enhance the generalization capability of the basic feature extractors. The characteristics of each domain are retained and magnified by utilizing different projection layers that are not shared across domains to transform the general representations into domain-specific representations. For each domain, the projection layer is a linear layer with weight \(\mathbf{W}\) and bias \(b\), and the domain-specific representations are obtained as: \[\begin{split} E_{\textit{vis}}^{\mathrm{P}}&=\mathbf{ W}_{\textit{vis}}^{\mathrm{P}}Z_{\textit{vis}}^{\mathrm{P}}+b_{\textit{vis}}^{ \mathrm{P}}\\ E_{\textit{txt}}^{\mathrm{P}}&=\mathbf{W}_{\textit{txt }}^{\mathrm{P}}h_{\textit{txt}}^{\mathrm{P}}+b_{\textit{txt}}^{\mathrm{P}} \\ E_{\textit{vis}}^{\mathrm{V}}&=\mathbf{W}_{\textit{ vis}}^{\mathrm{V}}Z_{\textit{vis}}^{\mathrm{V}}+b_{\textit{vis}}^{\mathrm{V}} \\ E_{\textit{vis}}^{\mathrm{L}}&=\mathbf{W}_{\textit{ vis}}^{\mathrm{L}}Z_{\textit{vis}}^{\mathrm{L}}+b_{\textit{vis}}^{\mathrm{L}} \end{split} \tag{5}\] where \(\mathrm{P}\), \(\mathrm{V}\), and \(\mathrm{L}\) denote the product page domain, short video domain, and the live streaming domain. It should be noted that in the short video domain and live streaming domain, we do not include the text modality, and only visual representations, _i.e._, \(E_{\textit{vis}}^{\mathrm{V}}\) and \(E_{\textit{vis}}^{\mathrm{L}}\), are utilized for the two domains. Finally, the fusion encoder, followed by a projection layer, is proposed to aggregate the visual and text representations. The fusion encoder is implemented with a self-attention layer, and the projection layer is a linear layer. Also, in our initial version of COPE, the fusion operation is only applied to the product page domain. Formally, \[\begin{split} E_{\textit{fus}}^{\mathrm{P}}&=\mathrm{ SelfAtt}([E_{\textit{vis}}^{\mathrm{P}};\ E_{\textit{txt}}^{\mathrm{P}}])\\ E_{\textit{fus}}^{\mathrm{P}}&=\mathbf{W}_{\textit{ fus}}^{\mathrm{P}}E_{\textit{fus}}^{\mathrm{P}}+b_{\textit{fus}}^{\mathrm{P}} \end{split} \tag{6}\] where \(\mathrm{SelfAtt}\) denotes the self-attention layer. \(E_{\textit{fus}}^{\mathrm{P}}\) is the obtained multi-modal representation for the product page domain \(\mathrm{P}\). For the other two domains \(\mathrm{V}\) and \(\mathrm{L}\), the visual representations \(E_{vis}^{\mathrm{V}}\) and \(E_{vis}^{\mathrm{L}}\) are the final obtained representations for them. ### Training Objective To learn a unified product representation across the different domains, we first leverage contrastive learning to train the proposed COPE model following previous self-supervised learning methods [3, 10, 22]. 
The basic formulation of the contrastive loss function [22] is defined as:
\[\mathcal{L}_{con}=-\log\frac{\exp(s_{qk_{+}}/\tau)}{\sum_{i=0}^{K}\exp(s_{qk_{i}}/\tau)} \tag{7}\]
where \(s_{qk_{i}}\) denotes the cosine similarity between the sample \(q\) and the sample \(k_{i}\), and \(\tau\) is the temperature. The positive sample \(k_{+}\) is the sample that has the same product label as \(q\). The similarity \(s_{qk}\) can be calculated with different forms of representations (vision, text, or fusion), and the samples \(q\) and \(k\) can come from different domains (product page, short video, or live streaming). In this paper, we choose seven different implementations of the similarity \(s_{qk}\), resulting in seven different contrastive loss functions. The details of the implementations are summarized in Table 2. Based on the seven contrastive loss functions, we define the cross-domain loss as their weighted sum. Formally,
\[\mathcal{L}_{cd}=\sum_{n=1}^{7}\alpha_{n}\mathcal{L}_{con}^{n} \tag{8}\]
where \(\alpha_{n}\) is the weight of the \(n\)-th contrastive loss function. In addition to the cross-domain loss, we also adopt a product classification loss to train our COPE model. Specifically, we use an MLP (multi-layer perceptron) with shared parameters to predict product classification scores for each domain from the domain-specific representations. For the product page domain, the multi-modal representation \(E_{\textit{fus}}^{\mathrm{P}}\) is utilized. For the short video and live streaming domains, the visual representations \(E_{vis}^{\mathrm{V}}\) and \(E_{\textit{vis}}^{\mathrm{L}}\) are adopted. Formally,
\[s^{\mathrm{P}} =\mathrm{MLP}(E_{\textit{fus}}^{\mathrm{P}}) \tag{9}\]
\[s^{\mathrm{V}} =\mathrm{MLP}(E_{\textit{vis}}^{\mathrm{V}})\]
\[s^{\mathrm{L}} =\mathrm{MLP}(E_{\textit{vis}}^{\mathrm{L}})\]
where \(s\) denotes the classification score for each domain. Then the standard softmax loss is used to train the model. Formally,
\[\mathcal{L}_{\textit{cls}}=-\left(\log\frac{e^{s^{\mathrm{P}}_{i}}}{\sum_{j}^{C}e^{s^{\mathrm{P}}_{j}}}+\log\frac{e^{s^{\mathrm{V}}_{i}}}{\sum_{j}^{C}e^{s^{\mathrm{V}}_{j}}}+\log\frac{e^{s^{\mathrm{L}}_{i}}}{\sum_{j}^{C}e^{s^{\mathrm{L}}_{j}}}\right) \tag{10}\]
where \(i\) indexes the ground-truth product class and \(C\) is the total number of classes. The total loss to train the COPE model is the combination of the cross-domain loss and the classification loss. Formally,
\[\mathcal{L}_{\textit{f}}=\mathcal{L}_{\textit{cd}}+\beta\mathcal{L}_{\textit{cls}} \tag{11}\]
where \(\beta\) indicates the weight of the classification loss.

### Implementation Details

We initialize the text encoder with the public Chinese RoBERTa model [7, 15] and the visual encoder with the pre-trained model in [21]. Eight frames are extracted to obtain features for short videos and live streams. The training batch size is set to 84, and the training process continues for 80 epochs. We optimize the model using AdamW [17], and a cosine schedule with a linear warmup is used for adjusting the learning rate. The warmup continues for two epochs, and the maximum learning rates are set to 5e-5, 5e-7, and 5e-3 for the text encoder, the visual encoder, and the remaining layers, respectively.

### Experimental Results

In this section, we evaluate our proposed COPE model and compare it with state-of-the-art methods on the ROPE dataset. The cross-domain product retrieval task and the one-shot cross-domain classification task are considered.
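Before turning to the comparisons, the training objective above can be made concrete with a minimal sketch. The in-batch construction of negatives, the temperature \(\tau=0.07\), and equal weights \(\alpha_{n}=\beta=1\) are illustrative assumptions, not values prescribed by the paper.

```python
# Minimal sketch of the training objective: the InfoNCE-style contrastive term
# (Eq. 7) with in-batch negatives, the cross-domain loss (Eq. 8) over the seven
# pairings of Table 2, and the total loss (Eq. 11). tau = 0.07 and
# alpha_n = beta = 1 are illustrative assumptions, not reported values.
import torch
import torch.nn.functional as F

def contrastive_loss(q, k, tau=0.07):
    """q[i] and k[i] are embeddings of the same product; other rows act as negatives."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    logits = q @ k.t() / tau                          # cosine similarities s_qk / tau
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)           # -log softmax of the positive pair

def cross_domain_loss(e_fus_P, e_vis_P, e_txt_P, e_vis_V, e_vis_L, alphas=None):
    """Weighted sum of the seven domain/modality pairings listed in Table 2."""
    pairs = [
        (e_fus_P, e_vis_V), (e_fus_P, e_vis_L), (e_vis_V, e_vis_L),
        (e_vis_P, e_vis_V), (e_txt_P, e_vis_V), (e_vis_P, e_vis_L),
        (e_txt_P, e_vis_L),
    ]
    alphas = alphas if alphas is not None else [1.0] * len(pairs)
    return sum(a * contrastive_loss(q, k) for a, (q, k) in zip(alphas, pairs))

def total_loss(embeddings, logits_P, logits_V, logits_L, labels, beta=1.0):
    """L = L_cd + beta * L_cls, with one softmax classification term per domain."""
    l_cd = cross_domain_loss(*embeddings)
    l_cls = (F.cross_entropy(logits_P, labels)
             + F.cross_entropy(logits_V, labels)
             + F.cross_entropy(logits_L, labels))
    return l_cd + beta * l_cls
```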
Since no existing methods are precisely suited to our cross-domain setting, we compare the COPE model with the multi-modal vision-language models [5, 16, 19, 20, 31], which are not fine-tuned on our dataset. The product page representation is obtained for these models by averaging the image and text features. The short video and live streaming representations are extracted by averaging the representations of all frames. The vision-language models trained with general image-text pairs are compared in the first compartment of Table 3. We can see that all of them obtain inferior performance to our COPE model in every setting of the two evaluation tasks. In both the retrieval and classification tasks, the performance in the live-related settings, _i.e_., \(\mathrm{P}\mathrm{\rightarrow}\mathrm{L}\), \(\mathrm{L}\mathrm{\rightarrow}\mathrm{P}\), \(\mathrm{V}\mathrm{\rightarrow}\mathrm{L}\), and \(\mathrm{L}\mathrm{\rightarrow}\mathrm{V}\), is obviously lower than in the others.

\begin{table} \begin{tabular}{c c c} \hline \hline similarity \(s_{qk}\) & domain & modality \\ \hline \(<E_{\textit{fus}}^{\mathrm{P}}(q),E_{\textit{vis}}^{\mathrm{V}}(k)>\) & _product-video_ & _fusion-vision_ \\ \hline \(<E_{\textit{fus}}^{\mathrm{P}}(q),E_{\textit{vis}}^{\mathrm{L}}(k)>\) & _product-live_ & _fusion-vision_ \\ \hline \(<E_{\textit{vis}}^{\mathrm{V}}(q),E_{\textit{vis}}^{\mathrm{L}}(k)>\) & _video-live_ & _vision-vision_ \\ \hline \(<E_{\textit{vis}}^{\mathrm{P}}(q),E_{\textit{vis}}^{\mathrm{V}}(k)>\) & _product-video_ & _vision-vision_ \\ \hline \(<E_{\textit{txt}}^{\mathrm{P}}(q),E_{\textit{vis}}^{\mathrm{V}}(k)>\) & _product-video_ & _text-vision_ \\ \hline \(<E_{\textit{vis}}^{\mathrm{P}}(q),E_{\textit{vis}}^{\mathrm{L}}(k)>\) & _product-live_ & _vision-vision_ \\ \hline \(<E_{\textit{txt}}^{\mathrm{P}}(q),E_{\textit{vis}}^{\mathrm{L}}(k)>\) & _product-live_ & _text-vision_ \\ \hline \hline \end{tabular} \end{table} Table 2: The implementations of different similarity functions \(s_{qk}\).

For example, the COPE model obtains 82.58% R@1 in the \(\mathrm{P}\mathrm{\rightarrow}\mathrm{V}\) retrieval task and 59.84% Acc in the \(\mathrm{P}\mathrm{\rightarrow}\mathrm{V}\) classification task. By contrast, the performance of COPE in the \(\mathrm{P}\mathrm{\rightarrow}\mathrm{L}\) settings is 54.06% and 34.95%, respectively. The scales and views of products in live streams differ from those in the product pages. Moreover, the products are not always visible throughout the live streams. These situations significantly increase the difficulty of recognizing products in live streams. In the second compartment of Table 3, we compare the COPE model with the FashionClip model. Although the FashionClip model is trained with product images and titles rather than general-domain data, there are still large margins between the results of FashionClip and COPE. As described in Section 2.1, the representations learned on the product page domain only are insufficient to deal with the cross-domain product recognition problem.

### Performance on other datasets

In order to verify the generalization of our ROPE dataset, we directly utilize the COPE model learned on ROPE to extract product representations and conduct evaluations on other datasets, such as Product1M [33] and M5Product [8].
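For reference, the retrieval and few-shot numbers reported in this section (Recall@\(k\) and Top-1 accuracy) can be computed from precomputed cross-domain embeddings as in the following sketch; cosine similarity between L2-normalized embeddings is an assumption.

```python
# Sketch of the two evaluation protocols: Recall@k for cross-domain retrieval
# and 1-nearest-neighbour accuracy for the few-shot (k=1) classification task.
# Cosine similarity between L2-normalised embeddings is assumed.
import numpy as np

def recall_at_k(query_emb, gallery_emb, query_labels, gallery_labels,
                ks=(1, 5, 10, 20, 50)):
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    order = np.argsort(-(q @ g.T), axis=1)                  # best gallery match first
    hits = gallery_labels[order] == query_labels[:, None]   # same product id?
    return {k: float(hits[:, :k].any(axis=1).mean()) for k in ks}

def one_shot_accuracy(query_emb, anchor_emb, query_labels, anchor_labels):
    """One anchor per product is sampled from the target domain; classify by 1-NN."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    a = anchor_emb / np.linalg.norm(anchor_emb, axis=1, keepdims=True)
    nearest = np.argmax(q @ a.T, axis=1)
    return float((anchor_labels[nearest] == query_labels).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, g = rng.normal(size=(100, 64)), rng.normal(size=(500, 64))
    ql, gl = rng.integers(0, 50, size=100), rng.integers(0, 50, size=500)
    print(recall_at_k(q, g, ql, gl))
    print(one_shot_accuracy(q, g[:50], ql, gl[:50]))
```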
\begin{table} \begin{tabular}{c c|c c c c c|c} \hline \hline \multicolumn{2}{c|}{} & \multicolumn{5}{c|}{cross domain retrieval} & few-shot classification \\ \hline models & cross domain setting & R@1 & R@5 & R@10 & R@20 & R@50 & R@mean & Top1 Acc \\ \hline \multirow{6}{*}{CLIP4CLIP [19]} & P2V & 59.06 & 79.31 & 86.02 & 91.01 & 95.03 & 82.08 & 27.94 \\ & V2P & 38.48 & 52.25 & 59.16 & 66.54 & 74.65 & 58.21 & 26.55 \\ & P2L & 23.68 & 38.14 & 45.32 & 54.27 & 66.79 & 45.64 & 9.97 \\ & L2P & 14.46 & 24.52 & 30.77 & 38.09 & 48.91 & 31.35 & 10.75 \\ & V2L & 18.10 & 29.83 & 35.65 & 42.22 & 52.01 & 35.56 & 9.47 \\ & L2V & 20.14 & 33.51 & 40.44 & 48.05 & 58.68 & 40.16 & 7.22 \\ \hline \multirow{6}{*}{TS2-Net [16]} & P2V & 57.42 & 77.88 & 85.29 & 90.44 & 94.92 & 81.19 & 26.11 \\ & V2P & 36.56 & 50.93 & 58.02 & 65.12 & 73.89 & 56.90 & 24.09 \\ & P2L & 22.85 & 38.49 & 45.91 & 54.11 & 65.89 & 45.45 & 9.83 \\ & L2P & 14.16 & 24.52 & 30.50 & 37.52 & 48.37 & 31.01 & 10.57 \\ & V2L & 17.69 & 29.63 & 34.84 & 41.27 & 50.95 & 34.87 & 9.68 \\ & L2V & 20.55 & 33.80 & 40.91 & 48.46 & 59.16 & 40.57 & 7.40 \\ \hline \multirow{6}{*}{X-CLIP [20]} & P2V & 56.61 & 77.46 & 84.84 & 90.11 & 94.51 & 80.70 & 26.97 \\ & V2P & 35.29 & 49.41 & 56.82 & 64.13 & 72.54 & 55.63 & 23.55 \\ & P2L & 22.66 & 37.47 & 44.33 & 52.11 & 63.38 & 43.98 & 9.72 \\ & L2P & 13.52 & 23.08 & 28.92 & 35.98 & 46.14 & 29.52 & 8.88 \\ & V2L & 17.64 & 28.71 & 34.03 & 40.17 & 49.67 & 34.04 & 9.05 \\ & L2V & 19.60 & 32.73 & 39.51 & 47.07 & 57.25 & 39.23 & 7.42 \\ \hline \multirow{6}{*}{ChineseCLIP [31]} & P2V & 56.93 & 79.80 & 87.43 & 92.48 & 96.51 & 82.65 & 31.44 \\ & V2P & 40.48 & 57.85 & 66.74 & 75.25 & 84.03 & 64.87 & 29.10 \\ & P2L & 34.37 & 50.83 & 58.66 & 67.05 & 78.57 & 57.89 & 19.23 \\ & L2P & 22.49 & 37.11 & 46.78 & 56.30 & 68.14 & 46.16 & 15.73 \\ & V2L & 25.51 & 38.28 & 45.02 & 52.27 & 62.53 & 44.72 & 13.24 \\ & L2V & 28.28 & 45.87 & 53.67 & 62.18 & 72.27 & 52.45 & 14.16 \\ \hline \hline \multirow{6}{*}{FashionClip [5]} & P2V & 44.31 & 67.06 & 75.25 & 82.57 & 89.29 & 71.69 & 18.59 \\ & V2P & 25.51 & 40.75 & 48.71 & 56.63 & 65.94 & 47.50 & 15.88 \\ \cline{1-1} & P2L & 19.54 & 31.14 & 36.98 & 43.91 & 54.39 & 37.19 & 8.70 \\ \cline{1-1} & L2P & 11.22 & 24.23 & 31.90 & 40.05 & 50.96 & 31.67 & 7.57 \\ \cline{1-1} & V2L & 15.55 & 24.88 & 29.51 & 35.07 & 42.68 & 29.53 & 6.80 \\ \cline{1-1} & L2V & 21.20 & 35.72 & 42.55 & 49.60 & 58.77 & 41.56 & 10.40 \\ \hline \multirow{6}{*}{COPE (Ours)} & P2V & **82.58** & **94.88** & **97.54** & **98.89** & **99.65** & **94.70** & **59.84** \\ \cline{1-1} & V2P & **65.20** & **76.56** & **82.04** & **86.86** & **91.69** & **80.47** & **57.12** \\ \cline{1-1} & P2L & **54.06** & **71.07** & **77.14** & **82.86** & **89.70** & **74.96** & **34.95** \\ \cline{1-1} & L2P & **42.33** & **56.48** & **63.67** & **71.11** & **80.22** & **62.76** & **36.51** \\ \cline{1-1} & V2L & **45.95** & **63.63** & **70.64** & **77.50** & **85.47** & **68.63** & **30.43** \\ \cline{1-1} & L2V & **48.28** & **67.20** & **74.70** & **81.52** & **89.15** & **72.17** & **33.30** \\ \hline \hline \end{tabular} \end{table} Table 3: Retrieval and classification results on COPE. P, V, and L means product page, short video, and live stream domains. The results are shown in Table 4 and Table 5. We can see that without any fine-tuning approach, the COPE model can achieve better performance to the origin SOTAs. ### Effectiveness of Classification Loss In this section, we examine the influence of classification loss on our model. 
Due to the large number of categories in our dataset, we utilize Partial-FC [1] to enhance training efficiency. As indicated in Table 6, including the classification loss substantially improves the model's performance across all retrieval tasks. The model with \(\mathcal{L}_{\textit{cls}}\) outperforms the model without \(\mathcal{L}_{\textit{cls}}\) by 30% and 19% in rank-1 accuracy on the P2V and L2P tasks, respectively. This provides compelling evidence for the efficacy of the classification loss.

### Sampling Strategy

In Table 7, we present a comparison between random sampling and product-balance sampling. In a mini-batch with \(N\) samples, random sampling refers to randomly selecting \(N\) samples from the training set. By contrast, product-balance sampling selects \(P\) products and then samples \(K\) instances from each product, resulting in \(N=P\times K\) samples. The experimental results indicate that balanced sampling significantly enhances the performance of our COPE approach in distinguishing between different products. Furthermore, Figure 5 displays some of our retrieval results. Notably, most of the false positive results belong to the same category as the query and possess similar visual characteristics.

## 5 Conclusion

To enable the creation of a unified cross-domain product representation, we introduce a large-scale e-commerce cross-domain dataset that includes three domains (product pages, short videos, and live streams) and two modalities (vision and language). It is the first dataset that encompasses various domains in the e-commerce scenario. We propose COPE as a baseline and evaluate it on cross-domain retrieval and few-shot classification tasks. Finally, we provide an analysis and visualization of the results. This task is applicable to most e-commerce platforms, and we expect both the dataset and the proposed framework to inspire research on cross-domain product representation.
2306.13846
Multi-scale accretion in dense cloud cores and the delayed formation of massive stars
The formation mechanism of massive stars remains one of the main open problems in astrophysics, in particular the relationship between the mass of the most massive stars, and that of the cores in which they form. Numerical simulations of the formation and evolution of large molecular clouds, within which dense cores and stars form self-consistently, show in general that the cores' masses increase in time, and also that the most massive stars tend to appear later (by a few to several Myr) than lower-mass stars. Here we present an idealized model that incorporates accretion onto the cores as well as onto the stars, in which the core's mass growth is regulated by a ``gravitational choking'' mechanism that does not involve any form of support. This process is of purely gravitational origin, and causes some of the mass accreted onto the core to stagnate there, rather than being transferred to the central stars. Thus, the simultaneous mass growth of the core and of the stellar mass can be computed. In addition, we estimate the mass of the most massive allowed star before its photoionizing radiation is capable of overcoming the accretion flow onto the core. This model constitutes a proof-of-concept for the simultaneous growth of the gas reservoir and the stellar mass, the delay in the formation of massive stars observed in cloud-scale numerical simulations, the need for massive, dense cores in order to form massive stars, and the observed correlation between the mass of the most massive star and the mass of the cluster it resides in. Also, our model implies that by the time massive stars begin to form in a core, a number of low-mass stars are expected to have already formed.
Enrique Vázquez-Semadeni, Gilberto C. Gómez, Alejandro González-Samaniego
2023-06-24T03:08:09Z
http://arxiv.org/abs/2306.13846v3
# Multi-scale accretion in dense cloud cores and the delayed formation of massive stars

###### Abstract

The formation mechanism of massive stars remains one of the main open problems in astrophysics, in particular the relationship between the mass of the most massive stars and that of the cores in which they form. Numerical simulations of the formation and evolution of large molecular clouds, within which dense cores and stars form self-consistently, show in general that the cores' masses increase in time, and also that the most massive stars tend to appear later (by a few to several Myr) than lower-mass stars. Here we present a model that incorporates accretion onto the cores as well as onto the stars, in which the core's mass grows by a "gravitational choking" mechanism that does not involve any form of support. This process is of purely gravitational origin, and causes some of the mass accreted onto the core to stagnate there, rather than being transferred to the central stars. Thus, the simultaneous mass growth of the core and of the stellar mass can be computed. In addition, we estimate the mass of the most massive allowed star before its photoionizing radiation is capable of overcoming the accretion flow onto the core. This model constitutes a proof-of-concept for the simultaneous growth of the gas reservoir and the stellar mass, the delay in the formation of massive stars observed in cloud-scale numerical simulations, the need for massive, dense cores in order to form massive stars, and the observed correlation between the mass of the most massive star and the mass of the cluster it resides in. Also, our model implies that by the time massive stars begin to form in a core, a number of low-mass stars are expected to have already formed.

## 1 Introduction

The formation mechanism of massive stars and its relationship to the physical nature and state of the dense cores in which these stars form are crucial ingredients for understanding the origin of the stellar initial mass function (IMF) and the evolution of molecular clouds, which are strongly affected by the feedback from these massive stars. Two main models exist for the formation of massive stars, based on fundamentally different scenarios for the process. On the one hand, the _competitive accretion_ (CA) scenario (Bonnell et al, 2001a) assumes that the stars in a forming cluster, as well as the gas from which they accrete, both generate and reside in a common gravitational potential well. Thus, the stars near the bottom of the well accrete at a higher rate, and therefore become more massive than stars in the periphery. On the other hand, the _turbulent core_ (TC) model (McKee and Tan, 2003) assumes that a massive, dense core must form, so that the pressure within it is high enough to drive accretion rates onto the protostar that are large enough to persist against the feedback from the protostar itself. In spite of their very different sets of assumptions, both models are developed within the context of a fixed gas-mass reservoir.
However, recent numerical simulations of star cluster formation under the GHC scenario have shown that the clumps and cores harboring star-forming regions grow in density, mass, and size due to accretion from their respective environments (Heitsch et al, 2008; Heitsch and Hartmann, 2008; Vazquez-Semadeni et al, 2009; Gonzalez-Samaniego and Vazquez-Semadeni, 2020; Camacho et al, 2020), and that the formation of massive stars begins a few to several Myr after the first stars began to form (Vazquez-Semadeni et al, 2017). In addition, somewhat surprisingly, in reference (Gonzalez-Samaniego and Vazquez-Semadeni, 2020), it was reported that the clumps manage to continue growing in mass and density even when star formation has already begun. This implies that somehow the mass transfer rate from the clump-scale to the protostars is _not_ fully efficient, allowing part of the gas mass to "stagnate" in the core, allowing the latter's mass to grow. In this case, the natural evolution of a core would be to start out as a low-mass star-forming structure, and to evolve into a high-mass one, until finally destroyed by its own stellar population. In addition, this would provide a possible natural, physical explanation for the observed correlation between the mass of the most massive star in a young cluster with the cluster's mass (e.g., Weidner and Kroupa, 2006; Weidner et al, 2010), if the latter in turn is somehow capped by the parent clump's mass, as also suggested by Oey (2011). However, several important questions then arise, such as: 1) What is the mechanism that allows the cores to accumulate mass without transferring it fully to their central parts? 2) How does the core mass growth correlate with the stellar mass growth? 3) At what point during a core's growth does its internal stellar content become capable of disrupting the core? 4) Does this limit depend on the boundary or initial conditions of the core's growth? A possible mechanism of core growth could be that turbulence within the core maintains it in approximate equilibrium (McKee and Tan, 2003), so that it can continue to accumulate mass without collapsing. However, since the core itself must have formed by gravitational contraction and accretion from an external environment, it does not appear feasible that the collapse would be halted by virialization at the core scale while continuing to accrete at the large scale (as suggested, for example, by Field et al (2008)), and then resume again to form stars. Instead, a continuous mechanism of core growth would be desirable. An alternative explanation is provided by the _Global Hierarchical Collapse_ (GHC) scenario (Vazquez-Semadeni et al, 2009, 2019), which proposes that each level in the hierarchy of density structures within molecular clouds is accreting from its parent structure due to large-scale gravity. At the scale of accretion onto the young stellar objects (YSOs), it is important to note that the standard Bondi-Hoyle-Lyttleton accretion mechanism (Hoyle and Lyttleton, 1939; Bondi and Hoyle, 1944) assumes that the accretion is driven exclusively by the gravity of the _(proto)-stellar_ object, neglecting the self-gravity of the gas. Clearly, the gas flow cannot be driven by this mechanism during the _prestellar_ stage of collapse of a core, since there is no stellar object yet during that period. 
The gravity from the stellar component also cannot be the driver of the flow at large distances from the stellar object, where the mass is dominated by the gaseous component, even after a YSO has appeared. Therefore, accounting for the self-gravity of the gas is crucial at all radii during the prestellar stage, and at the large (core) scale, during both stages. This was taken into account in the CA scenario (Bonnell et al, 2001), although still within the context of a fixed core mass. In the present paper, we address these questions, along the following steps: We first discuss a plausible mechanism of core mass growth through gravity-driven accretion for isothermal spherical collapse, regulated by the logarithmic slope of the density profile, including the possibility of deviations due to the presence of filaments (Sec. 2). Next, we explore how the instantaneous core mass limits the mass of the most massive star that the core can harbor (Sec. 3). We then determine the time when the photoionizing flux from the most massive star becomes capable of destroying the core, by equating the power of the accretion onto it to the photoionization power (Sec. 3.1). We conclude in Sec. 5 with a summary of our results and some conclusions. It is worth noting that, in a closely related study, Myers (2011) considered both clump-to-core as well as core-to-protostar accretion, although he focused more on the generation of the stellar mass _distribution_, while here we focus on the simultaneous evolution of the core's mass and the mass of the most massive possible star. ## 2 Core mass evolution The most general analytical solutions of spherical and isothermal gravitational collapse that describe (under idealized conditions) the collapse of dense molecular cloud cores Whitworth and Summers (1985), show that the latter go through two distinct asymptotic dynamic evolutionary stages, each of which has two distinct spatial regions, denoted _inner_ and _outer_. This evolution is described via a similarity analysis, in which the variables are nondimensionalized, and are functions of the "similarity variable", \(x\equiv r/c_{\rm s}t\), where \(r\) is the radius, \(t\) is time, and \(c_{\rm s}\) is the isothermal sound speed. The time at which the protostar1 forms is usually denoted \(t=0\), and so the time interval \(t<0\) (which implies \(x<0\) as well) corresponds to the _prestellar_ stage, while the interval \(t>0\) corresponds to the _protostellar_ stage. During the prestellar stage, the inner region has a uniform density and a (negative) infall speed that scales linearly with radius (\(v(x)\propto x\)), so that its magnitude decreases inwards. The size of the inner region is comparable to the Jeans length corresponding to its uniform density Keto and Caselli (2010). The outer region has a density profile \(\rho(x)\propto|x|^{-2}\) and an infall speed that decreases with radius as \(v(x)\propto x^{-1}\) if the speed is required to vanish at infinity (see eq. [3.8] of reference Whitworth and Summers (1985) with \(y_{\infty}=0\)). During the _protostellar_ stage, the inner region has \(\rho(x)\propto x^{-3/2}\) and \(v(x)\propto-x^{-1/2}\), and the outer region has \(\rho(x)\propto x^{-2}\) and again \(v(x)\propto-x^{-1}\). Footnote 1: Mathematically, the formation of the protostar corresponds to the appearance of a _singularity_, with divergent central density. 
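For quick reference, the asymptotic exponents of the similarity solutions quoted above can be collected in a small lookup table, which is convenient when comparing against the profiles discussed below; only the exponents in the similarity variable \(x\) are encoded, and all normalizations and matching radii are omitted (an illustrative sketch, not part of the original analysis).

```python
# Reference lookup of the asymptotic exponents quoted above for the isothermal
# similarity solutions: rho ~ |x|^a and |v| ~ |x|^b in each stage and region.
# Only the exponents are encoded; amplitudes and matching radii are omitted.
SCALINGS = {
    ("prestellar",   "inner"): {"rho":  0.0, "v":  1.0},   # uniform density, v ~ x
    ("prestellar",   "outer"): {"rho": -2.0, "v": -1.0},
    ("protostellar", "inner"): {"rho": -1.5, "v": -0.5},   # free-fall-like inner region
    ("protostellar", "outer"): {"rho": -2.0, "v": -1.0},
}

def profile_exponents(stage, region):
    """Return (density exponent, velocity exponent) for the requested regime."""
    entry = SCALINGS[(stage, region)]
    return entry["rho"], entry["v"]

if __name__ == "__main__":
    for (stage, region), e in SCALINGS.items():
        print(f"{stage:>12s}/{region:<5s}: rho ~ x^{e['rho']:+.1f}, |v| ~ x^{e['v']:+.1f}")
```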
In the strict spherical description, during the prestellar stage, the mass contained within a certain fixed radius increases because both the central and the mean density increase with time. However, during the protostellar stage, the _gaseous_ mass in the core remains fixed, because it can be shown that the radial density profile \(\rho(x)\propto x^{-2}\) and the nondimensionalization of \(\rho\) by the time-dependent quantity \(4\pi Gt^{2}\) combine to make the density profile independent of \(t\). That is, the density profile becomes fixed in time, and so does the gaseous mass in the core (see also Murray and Chang (2015)). Only the mass of the central protostar increases with time. In the remainder of this section, we revisit the accretion flow into and across the core towards the star, using the approximate description of Gomez et al (2021, hereafter Paper I) for the prestellar stage as a single power law with time-dependent slope out to the radius \(r_{\rm c}\) (also time-dependent) where a uniform-density background is reached, and then consider possible deviations from sphericity that may cause the core to continue to grow in mass. ### Core mass evolution in spherical geometry. Gravitational choking Already the first numerical simulations and analytical calculations of isothermal spherical collapse showed the development of a density profile approaching \(\rho(r)\propto r^{-2}\)(Larson, 1969; Penston, 1969). The early interpretation of such a profile was that a period of quasi-static contraction (Shu, 1977) was required in order for sound waves to establish detailed pressure balance throughout the core, similarly to the case of a hydrostatic Bonnor-Ebert (Ebert, 1955; Bonnor, 1956) sphere. However, a recent study (Li, 2018) has shown that the \(r^{-2}\) profile can arise simply from letting the radial infall speed at every radius \(r\) be the gravitational velocity induced by the gas mass internal to that radius, \[v_{\rm inf}(r)=\sqrt{\frac{2GM(r)}{r}}, \tag{1}\] where \[M(r)=\int_{0}^{r}4\pi\rho(r^{\prime})r^{\prime 2}\mathrm{d}r^{\prime}, \tag{2}\] and requiring that the surface-integrated mass flux at radius \(r\) \[\mathcal{F}(r)=4\pi\rho(r)v(r)r^{2} \tag{3}\] be independent of radius. Furthermore, in Paper I an additional step was taken by considering the transient evolution of the logarithmic slope of the core's density profile. The core was assumed to begin its life as a moderate, arbitrary density fluctuation of radius \(r_{\rm c}\approx L_{\rm J}(\rho_{0})/2\), where \(\rho_{0}\) is the density of the background medium and \(L_{\rm J}(\rho_{0})\) is the Jeans length at that density.2 During the prestellar stage studied in that paper, the core was assumed to evolve by increasing the slope of its density profile, keeping \(r_{\rm c}\) fixed. However, as we shall see below, \(r_{\rm c}\) can vary over time, and so in general we will have \(r_{\rm c}=r_{\rm c}(t)\). For the radius-averaged density profile from \(r=0\) to \(r=r_{\rm c}\), Paper I assumed a power law of the form Footnote 2: Recall the medium is assumed to be isothermal. \[\rho(r)=\rho_{0}\left(\frac{r}{r_{\rm c}}\right)^{-p} \tag{4}\] at all times. Note that this is not strictly true since, as seen in eq. (5) below, the slope varies at different rates at different radii. In this context, \(p\) should be regarded as the _mean_ logarithmic slope of the density profile over the core's radial extent. Then, making the approximations that the infall speed is given by eq. 
(1) and that the density profile is given by eq. (4) at all times (i.e., that it evolves from one power law to another), and introducing them into the continuity equation, Paper I showed that the rate of change of the logarithmic slope \(p\) at radius \(r\) is given by \[\frac{\mathrm{d}p(r)}{\mathrm{d}t}=\left(3-\frac{3}{2}p\right)\left(\frac{4\pi G \rho_{0}}{3-p}\right)^{1/2}\left[\frac{(r/r_{\mathrm{c}})^{-p/2}}{-\ln(r/r_{ \mathrm{c}})}\right]. \tag{5}\] Furthermore, Paper I noted that the sign of this derivative is determined exclusively by the factor \((3-3p/2)\) in the right hand side of eq. (5), so that, if \(p<2\), then the slope increases over time, while if \(p>2\), then the slope decreases. If \(p=2\), the slope remains stationary. That is, \(p=2\) is an _attractor_ for the logarithmic slope of the density profile of a flow generated by gravitational attraction of the internal mass, _under spherical symmetry_. An additional implication of equations (1) and (4) for the velocity and density profiles is that, if \(p\neq 2\), then the surface-integrated mass flux given by (3) is _not_ constant with radius, but instead depends on \(r\) as \[\mathcal{F}(r)=\left(\frac{128\pi^{3}G\rho_{0}^{3}r_{\mathrm{c}}^{3p}}{3-p} \right)^{1/2}r^{(3-\frac{3}{2}p)}, \tag{6}\] As a reference, the above expression, expressed in dimensional form and evaluated at the initial outer boundary of the core, \(r_{0}=r_{\mathrm{c}}(t=0)\), reads \[\mathcal{F}(r_{0})=\frac{1.88\times 10^{3}(M_{\odot}\,\mathrm{Myr}^{-1})}{(3-p)^ {1/2}}\left(\frac{r_{0}}{1\,\mathrm{pc}}\right)^{3}\left(\frac{n_{0}}{10^{3} \,\mathrm{cm}^{-3}}\right)^{3/2} \tag{7}\] Differentiating eq. (6) with respect to the radius then gives \[\frac{\mathrm{d}\mathcal{F}(r)}{\mathrm{d}r}=\frac{3(2-p)}{2}\left(\frac{128 \pi^{3}G\rho_{0}^{3}r_{\mathrm{c}}^{3p}}{3-p}\right)^{1/2}r^{(2-\frac{3}{2}p)}. \tag{8}\] This equation shows that _the integrated mass flux decreases with decreasing radius for \(p<2\)_, implying that not all of the mass entering the core at its outer boundary can be transferred to its center. Some of the mass is trapped in the core, causing the core's mass to grow, as long as \(p<2\). We refer to this phenomenon as _gravitational choking_ of the gravity-driven inflow. It is important to note that this phenomenon does not involve any kind of support; it is just that gravity cannot transfer mass at a constant rate across the radius for a profile with \(p<2\). Finally, it is also important to note that eq. (6) implies, for strict spherical symmetry, that the mass accretion flow vanishes at \(r=0\) during the _prestellar_ stage, in which \(p<2\). This situation changes during the _protostellar_ stage, which starts when the density profile slope reaches \(p=2\), since at this point, _the integrated mass flux \(\mathcal{F}\) becomes independent of radius_. That is, during the protostellar stage, _in spherical geometry_, all of the mass entering through the core's boundary is transferred to the central stellar object. Summarizing, under spherical symmetry, three important conditions are reached when the slope reaches the value \(p=2\): * The slope becomes stationary. * The mass accretion rate becomes independent of radius (all the mass entering the core on the outside is uniformly transferred across all radii). * A singularity (protostar) is formed. 
Note that the latter condition does not follow from the model in Paper I, but rather from the similarity solutions and numerical simulations (e.g., (Larson, 1969; Whitworth and Summers, 1985)), which show that the entire density profile becomes a single power law with a logarithmic slope of \(-2\) when the central singularity first appears. The exact solution has a constant-density inner region and an \(r^{-2}\) outer envelope during the prestellar stage. When the central region shrinks to zero radius, the singularity appears. ### Mass fraction retained in the core per unit time We now wish to estimate the amount of mass that is retained in the core and the amount of mass that goes into "stars" as a function of time. However, as noted above, eq. (6) implies that, for all \(p<2\), the surface-integrated mass flux \(\mathcal{F}\) vanishes at the center. We can circumvent this problem by defining a suitable inner boundary for the core, \(r_{\rm i}\), so that the fraction of mass retained in the core per unit time is given by the accretion rate at the outer boundary, \(r_{\rm c}\), minus the accretion rate at \(r_{\rm i}\). We can thus write \[\dot{M}_{\rm core}=\mathcal{F}(r_{\rm c})\left[1-\frac{\mathcal{F}(r_{\rm i}) }{\mathcal{F}(r_{\rm c})}\right]=\mathcal{F}(r_{\rm c})\left[1-\left(\frac{r_ {\rm i}}{r_{\rm c}}\right)^{\frac{3}{2}(2-p)}\right]. \tag{9}\] Strictly speaking, for perfectly spherical collapse, during the prestellar stage with \(p<2\), _all_ of the mass entering the core is retained in the gaseous phase, since the central density has not diverged yet, and so there is zero mass transfer to the "stars" (the region \(r<r_{\rm i}\)). Conversely, the mass transfer from the boundary to the center becomes 100% efficient at the onset of the protostellar stage (the time at which the singularity--the protostar--appears), because at that time the logarithmic slope becomes \(p=2\). However, in real molecular cloud cores, the actual profiles appear to be shallower than \(-2\)_even after protostellar objects have appeared_. Indeed, the mean slope for the compilation of cores examined in Paper I was found to be \(p\approx 1.9\) for low-mass cores and \(p\approx 1.7\) for high-mass cores. Moreover, in Fig. 1 we show the spherically-averaged density profiles of the three star-forming regions appearing in the simulation LAF0 (without stellar feedback) studied by Gonzalez-Samaniego and Vazquez-Semadeni (2020), and briefly described here in the Methods section, from the onset of star formation to a few megayears later, showing that the logarithmic slopes are consistently shallower than \(-2\).3 The reason for this is not clear, but one possibility is that filamentary accretion flows may cause a flattening of the spherically-averaged density distribution. As a proof-of-concept, in Appendix A we show that the superposition of a uniform-density filament on top of a power-law spherical density distribution flattens the net spherically-averaged profile, thus decreasing the depth of the core's gravitational potential well. We thus suggest that the presence of filaments in hub-filament systems can flatten the spherically-averaged density profile, thus causing the gravitational attraction of the inner gas mass to be insufficient for driving all of the material entering the core to be transferred to the central stars, and therefore allowing the core to grow by mass accumulation. 
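The gravitational choking mechanism can also be illustrated numerically. The sketch below evaluates the surface-integrated mass flux of eq. (6) at the core boundary and the fraction of the inflow retained in the core per unit time from eq. (9), for several slopes \(p\); the fiducial parameters (\(n_{0}=10^{3}\,\mathrm{cm^{-3}}\), \(r_{\rm c}=1\) pc, \(r_{\rm i}=3000\) AU) and a mean molecular weight of 2.36 are illustrative assumptions, chosen to be consistent with the reference value in eq. (7).

```python
# Numerical illustration of gravitational choking: the surface-integrated mass
# flux F(r) of eq. (6) decreases inward for p < 2, so part of the inflow
# stagnates in the core; eq. (9) gives the fraction retained per unit time.
# Fiducial values (n0 = 1e3 cm^-3, r_c = 1 pc, r_i = 3000 AU) and a mean
# molecular weight of 2.36 are assumptions used only for illustration.
import numpy as np

G, M_H = 6.674e-8, 1.6726e-24          # cgs
PC, AU = 3.086e18, 1.496e13            # cm
MSUN, MYR = 1.989e33, 3.156e13         # g, s

def mass_flux(r, p, n0=1e3, r_c=1.0 * PC):
    """Eq. (6): F(r) = sqrt(128 pi^3 G rho0^3 r_c^(3p) / (3-p)) r^(3 - 3p/2)."""
    rho0 = 2.36 * M_H * n0
    prefac = np.sqrt(128.0 * np.pi**3 * G * rho0**3 * r_c**(3.0 * p) / (3.0 - p))
    return prefac * r**(3.0 - 1.5 * p)                    # g s^-1

def retained_fraction(p, r_i=3000.0 * AU, r_c=1.0 * PC):
    """Eq. (9): fraction of the inflow through r_c that stays in the core."""
    return 1.0 - (r_i / r_c)**(1.5 * (2.0 - p))

for p in (1.0, 1.5, 1.9, 2.0):
    f_rc = mass_flux(1.0 * PC, p) / MSUN * MYR            # Msun/Myr, cf. eq. (7)
    print(f"p = {p:.1f}: F(r_c) = {f_rc:7.0f} Msun/Myr, "
          f"retained fraction = {retained_fraction(p):.2f}")
```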
In the next section we describe the simultaneous mass growth of the central star(s) and of the core, as a function of time and of the density profile's slope.

Figure 1: Spherically-averaged density profile for “clumps” in the numerical simulation LAF0 of reference (González-Samaniego and Vazquez-Semadeni, 2020) at different times after the formation of the first stellar particle, defined as described in the Methods section; each clump is clearly identified as a different star forming region in the numerical box and they are labeled as Group 1, Group 2, and Group 3. The median of density cells in spherical bins (shells) is used to compute the density profile. In each panel dashed lines represent the shallowest and steepest slopes for the different times considered. In each clump and at all the times considered, density profile slopes are always lower than \(p=2\) (represented by the dot-dashed line in each panel). Interestingly, we do not see an evolutionary trend in the profiles.

### Evolution of the core's mass and radius

Following Paper I, let us assume a core with the density profile given by equation (4), surrounded by an environment at constant density \(\rho_{0}\). Let us ignore, for the moment, the core's inner boundary at \(r_{\rm i}\) and label as "core" the gas with density greater than \(\rho_{0}\). Then, the core's mass \(M_{\rm c}\) is given by
\[M_{\rm c}(p) =\frac{4\pi}{3-p}r_{\rm c}^{3}\rho_{0} \tag{10}\]
\[=\frac{732M_{\odot}}{3-p}\left(\frac{r_{\rm c}}{1\,{\rm pc}}\right)^{3}\left(\frac{n_{0}}{10^{3}\,{\rm cm}^{-3}}\right).\]
This equation gives the time dependence of the core's mass implicitly, through the time dependence of \(p\) given by eq. (5). However, since the slope \(p\) becomes stationary after forming a star (possibly at a value \(p<2\) because of the presence of filaments), and we are assuming that \(\rho_{0}\) remains constant during this stage, the mass growth of the core caused by accretion and gravitational choking requires the core's radius \(r_{\rm c}\) to increase; i.e., the core expands. This implies a departure from the asymptotic self-similar solution, which implicitly pushes the core boundaries to infinity. For a finite core, then, the expansion rate \({\rm d}r_{\rm c}/{\rm d}t\), after the slope becomes stationary, can be related to the mass growth rate within the core by taking the time derivative of eq. (10),
\[\frac{{\rm d}M_{\rm c}}{{\rm d}t}=\frac{12\pi\rho_{0}r_{\rm c}^{2}}{3-p}\frac{{\rm d}r_{\rm c}}{{\rm d}t}. \tag{11}\]
In turn, this mass growth rate must be equal to the net surface-integrated mass flux given by eq. (9), neglecting the factor within square brackets, since we are ignoring the core's inner boundary. We thus have:
\[\frac{{\rm d}r_{\rm c}}{{\rm d}t}\approx\frac{2}{3}\left[2\left(3-p\right)\pi G\rho_{0}\right]^{1/2}r_{\rm c} \tag{12}\]
It is thus seen that \(r_{\rm c}\) grows approximately exponentially. In appendix B we show that the mass increase due to the evolution of \(p\) occurs on a shorter timescale than that for the increase of \(r_{\rm c}\). Therefore, the core's mass must grow initially by increasing the slope of its density profile at a nearly constant radius \(r_{\rm c}\) (during the prestellar stage), and later by increasing its size at a nearly fixed slope (during the protostellar stage). However, due to the presence of filaments, we expect that the slope may saturate at a value \(p<2\).
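Since eq. (12) is linear in \(r_{\rm c}\), the core radius grows exponentially and the corresponding core mass follows from eq. (10). The short sketch below evaluates this growth for a few slopes; it neglects the inner-boundary correction and is meant only to illustrate the scaling, not to reproduce Fig. 2, which integrates the fluxes at both boundaries.

```python
# Sketch of the core expansion implied by eq. (12): since dr_c/dt is linear in
# r_c, r_c(t) = r_c(0) exp[(2/3) sqrt(2 (3-p) pi G rho0) t], and eq. (10) then
# gives the core mass. The inner-boundary correction is neglected, and the
# background density n0 = 3e3 cm^-3 with r_c(0) = 0.22 pc (the fiducial values
# quoted above) are used only to illustrate the scaling.
import numpy as np

G, M_H, PC, MSUN, MYR = 6.674e-8, 1.6726e-24, 3.086e18, 1.989e33, 3.156e13

def core_radius_and_mass(p, t_myr, n0=3e3, r0_pc=0.22, mu=2.36):
    rho0 = mu * M_H * n0
    rate = (2.0 / 3.0) * np.sqrt(2.0 * (3.0 - p) * np.pi * G * rho0)   # s^-1
    r_c = r0_pc * PC * np.exp(rate * t_myr * MYR)                       # eq. (12)
    m_c = 4.0 * np.pi * rho0 * r_c**3 / (3.0 - p)                       # eq. (10)
    return r_c / PC, m_c / MSUN

for p in (1.0, 1.5, 1.9):
    r, m = core_radius_and_mass(p, t_myr=1.0)
    print(f"p = {p:.1f}: r_c(1 Myr) ~ {r:.2f} pc, M_c ~ {m:.0f} Msun")
```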
Thus, the core's mass can continue to grow by gravitational choking until the accretion supply is exhausted, and the evolution of the core's mass can be followed together with that of the internal stellar object(s). We now take into account the fact that the stellar mass accumulates at the center. In this case, eq. (12) reads \[\frac{\mathrm{d}r_{\mathrm{c}}}{\mathrm{d}t}\approx\frac{2}{3}\left[2\left(3-p \right)\pi G\rho_{0}\right]^{1/2}r_{\mathrm{c}}\left[1-\left(\frac{r_{\mathrm{ i}}}{r_{\mathrm{c}}}\right)^{3\left(2-p\right)/2}\right], \tag{13}\] and the gravitational velocity \(v_{\mathrm{inf}}\) at radius \(r\) is given by \[v_{\mathrm{inf}}(r)\approx\sqrt{\frac{2GM_{\mathrm{tot}}}{r}}=\sqrt{\frac{2G \left[M_{\mathrm{g}}(r)+M_{\mathrm{i}}\right]}{r}}, \tag{14}\] where \(M_{\mathrm{g}}(r)\) is the gas mass contained between \(r_{\mathrm{i}}\) and \(r\) and \(M_{\mathrm{i}}\) is the total mass that has been accreted onto the internal "stellar" region through \(r_{\mathrm{i}}\) over the entire evolution. We refer to \(M_{\mathrm{i}}\) as the "stellar mass". At short radii, where \(r\to r_{\mathrm{i}}\), we have that \(M_{\mathrm{i}}\gg M_{\mathrm{g}}\), and so \[v_{\mathrm{inf}}(r_{\mathrm{i}})\approx\sqrt{2GM_{\mathrm{i}}}\,r_{\mathrm{i }}^{-1/2}. \tag{15}\] Therefore the mass accretion rate (the surface-integrated mass flux) into the "stellar region", \(\mathcal{F}(r_{\mathrm{i}})=4\pi\rho(r_{\mathrm{i}})v_{\mathrm{inf}}(r_{ \mathrm{i}})r_{\mathrm{i}}^{2}\), using the density profile given by eq. (4), becomes \[\mathcal{F}(r_{\mathrm{i}})=\left(32\pi^{2}GM_{\mathrm{i}}\right)^{1/2}\rho_{ 0}r_{\mathrm{c}}^{p}r_{\mathrm{i}}^{\frac{3}{2}-p} \tag{16}\] On the other hand, at the outer radius of the core, \(r_{\mathrm{c}}\gg r_{\mathrm{i}}\), we have \(M_{\mathrm{i}}\ll M_{\mathrm{g}}\), and therefore the infall speed is \[v_{\mathrm{inf}}(r_{\mathrm{c}}\gg r_{\mathrm{i}})\approx\left(\frac{8\pi G \rho_{0}}{3-p}\right)^{1/2}r_{\mathrm{c}}, \tag{17}\] and the accretion rate onto the core is then \[\mathcal{F}(r_{\mathrm{c}})=\left(\frac{128\pi^{3}G}{3-p}\right)^{1/2}\rho_{0 }^{3/2}r_{\mathrm{c}}^{3}. \tag{18}\] It is important now to determine under which conditions can the accretion from the core to the stellar region overcome the accretion onto the core. For this, using eqs. (16) and (18), we compute the ratio of the inner and outer accretion rates: \[\frac{\mathcal{F}(r_{\mathrm{i}})}{\mathcal{F}(r_{\mathrm{c}})}\propto\left( \frac{M_{\mathrm{i}}}{\rho_{0}}\right)^{1/2}\,r_{\mathrm{i}}^{\frac{3}{2}-p}r_ {\mathrm{c}}^{p-3}. \tag{19}\] This expression can be more easily interpreted noting that \(M_{\mathrm{i}}/\rho_{0}=(M_{\mathrm{c}}/\rho_{0})(M_{\mathrm{i}}/M_{\mathrm{c }})\) and that, from eqs. (2) and (4), \[\frac{M_{\mathrm{c}}}{\rho_{0}}=\frac{4\pi r_{\mathrm{c}}^{3}}{3-p}. \tag{20}\] Therefore, writing \(r_{\rm i}=\epsilon r_{\rm c}\), we finally obtain \[\frac{\mathcal{F}(r_{\rm i})}{\mathcal{F}(r_{\rm c})}\propto\epsilon^{\frac{3}{2 }-p}. \tag{21}\] This equation shows that, for \(p>3/2\), the accretion rate ratio _increases_ as the size of the inner stellar region becomes a smaller fraction of the core's size. This happens precisely as the core's radius increases, therefore allowing for the possibility that the core is _depleted_ by the accretion onto the stellar region if \(p>3/2\). This depletion implies that it is possible for the mass accumulated in the stellar region to become larger than the gaseous mass of the core. 
On the contrary, for \(p<3/2\) the core's mass always grows faster than the mass of the stellar region. Therefore, \(p=3/2\) is another critical value of the logarithmic slope, determining whether the core grows or is depleted by the accretion onto the stellar region. This is illustrated in Fig. 2, which shows the evolution of the core size (_left_ panel), and of the core's gas mass and the stellar mass (_right_ panel) obtained by numerically integrating the mass fluxes4\(\mathcal{F}(r_{\rm i})\) and \(\mathcal{F}(r_{\rm c})\) in time for a range of logarithmic slope values. The surrounding medium is assumed to have a density of \(3\times 10^{3}\,\rm cm^{-3}\) and temperature of \(15\,\rm K\), implying a Jeans radius of \(0.22\,\rm pc\), which is taken as the initial value for the core's radius. The core's inner boundary is assumed to be at \(r_{\rm i}=3\times 10^{3}\,\rm AU\). We choose this value of \(r_{\rm i}\) as representative of the region where an accretion disk begins to form, based on estimates of the Oort cloud's mean radius (Morbidelli, 2005). As expected, for \(p>3/2\) the flow across the inner boundary depletes the gaseous mass of the core, although the stellar mass still becomes larger than the core's mass for \(p=3/2\). Footnote 4: By numerically integrating the mass fluxes, and so calculating the core and stellar mass evolution, we avoid the need to make assumptions about the masses relative importance in the estimation of infall velocities, as it was necessary leading to eqs. (15) and (17). ## 3 The most massive star a core can harbor ### The competition between core and stellar mass growth Having obtained the mass accretion rate onto the core, we can now estimate the mass of the most massive star the core can harbor without its accretion supply being destroyed by the UV feedback from this star.5 Assuming that the gas infall velocity onto the core at every radius is given by the free-fall velocity (eq. [1]) driven by the total (stellar and gaseous) mass internal to that radius, then the power \(P_{K}\) associated to the kinetic energy density in the accretion flow at the boundary, \(K=1/2\)\(\rho(r_{\rm c})v_{\rm inf}^{2}(r_{\rm c})\), is \[P_{K} = 4\uppi r_{\rm c}^{2}Kv_{\rm inf}(r_{\rm c})\] \[\sim \frac{0.97L_{\odot}}{(3-p)^{3/2}}\left(\frac{r_{\rm c}}{1\,{\rm pc }}\right)^{5}\left(\frac{n_{0}}{10^{3}\,{\rm cm}^{-3}}\right)^{5/2},\] where the order-of-magnitude value in the second line corresponds to the calculation ignoring the stellar mass at the center. We may estimate the mass of the most massive star that a given core can harbor before the accretion flow onto it is halted by equating this accretion power to the ionizing power generated by a star of mass \(M\), which we approximate by \(P_{\rm i}=(13.6\,{\rm eV})S_{\nu}\), where \(S_{\nu}\) is the ionizing photon flux from the star, taken from Diaz-Miller et al (1998). The _left_ panel of Fig. 3 shows the scaling of this accretion-destroying stellar mass, \(M_{*}\), _versus_ the core's mass, \(M_{\rm c}\), the latter obtained by integration of eq. (9), for a range of \(p\) values. On the other hand, the _right_ panel of Fig. 3 shows the evolution of the mass accreted onto the central region of the core with radius \(r_{\rm i}=3000\,{\rm AU}\) (\(M_{\rm i}(t)\); dashed lines), obtained by integrating the equation \(\dot{M}_{\rm i}={\cal F}(r_{\rm i})\), for the same values of \(p\). 
Therefore, \(M_{\rm i}(t)\) can be thought of as the mass of the most massive possible star (if there were no fragmentation within \(r_{\rm i}\)) as a function of time. In this panel, we also show the evolution of the mass of the accretion-destroying star, \(M_{*}\) (solid lines). We observe that the stellar mass \(M_{\rm i}\) starts smaller, but increases faster, than the accretion-destroying stellar mass \(M_{*}\), and so eventually \(M_{\rm i}\) becomes equal to \(M_{*}\). At this time, accretion onto the core can be disrupted, ending the local star formation episode.

Figure 2: Core size (\(r_{\rm c}\), _left_), and gaseous and stellar mass (\(M_{\rm c}\) and \(M_{\rm i}\), _right_) as a function of time for a range of \(p\)-values. The masses are obtained by numerically integrating the fluxes at \(r_{\rm c}(t)\) and the core’s inner boundary (at \(r_{\rm i}\)). When \(p>3/2\), the flow across \(r_{\rm i}\) is larger than the flow across \(r_{\rm c}\), so the core ends up being depleted. In these plots, we have assumed an environmental density \(\rho_{0}=3\times 10^{3}\,{\rm cm}^{-3}\) and an initial radius of the core \(r_{\rm c}(t=0)\) equal to the Jeans length at \(\rho_{0}\).

Thus, the actual maximum possible stellar mass within a core is given by \(\min[M_{\rm i},M_{*}]\) at every moment in time. Note also that this stellar mass reaches values \(\sim 10M_{\odot}\) within a time of the order of one megayear, with the precise time depending on the slope of the core's density profile. Note that this total stellar mass will be distributed among a population of collapsed objects, and therefore \(M_{\rm i}\) needs to be much larger than \(M_{*}\) in order to have a star of this latter mass. This can be compared to the time required in numerical simulations for massive stars to appear. For example, Fig. 4 shows the evolution of the stellar mass distribution in the simulation LAF1 (including feedback) analyzed in (Vazquez-Semadeni et al, 2017) in differential (or PDF) form. It can be seen that stars of masses up to \(M\lesssim 5M_{\odot}\) appear within 2 Myr, and a star with \(M\sim 10M_{\odot}\) appears after \(\sim 4\) Myr. At this time, the total stellar mass within 1 pc is \(\sim 150M_{\odot}\), and the gas mass is \(\sim 1000M_{\odot}\) (see Fig. 5). Although the model core and stellar masses shown in Fig. 2 are still significantly larger than the simulation values, it is shown in App. C that a slightly more realistic case, assuming a background that is not uniform, but rather has a shallower slope than that of the core, produces numbers closer to those of the simulation. Regardless, both Figs. 3 and 4 illustrate the sequential appearance of more massive stars as time proceeds, implying that the region evolves from being a low-mass star-forming region to a high-mass one, as predicted by the GHC scenario (Vazquez-Semadeni et al, 2009).

Figure 3: _Left:_ Mass of the star (\(M_{*}\)) required to halt, by photoionizing radiation, the accretion driven by self-gravity onto a core of mass \(M_{\rm c}\). _Right:_ Temporal evolution of \(M_{*}\) (solid lines), and of the mass that has been accumulated within an inner, characteristic “accretion disk radius” \(r_{\rm i}=3000\,{\rm AU}\) (\(M_{\rm i}\), dashed lines). As in Fig. 2, we have assumed an environmental density \(\rho_{0}=3\times 10^{3}\,{\rm cm}^{-3}\) and an initial radius of the core \(r_{\rm c}(t=0)\) equal to the Jeans length at \(\rho_{0}\).
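The feedback condition that defines \(M_{*}\) can also be written down compactly: the accretion power \(P_{K}\) at the core boundary is balanced against the ionizing power \(P_{\rm i}=(13.6\,{\rm eV})S_{\nu}\). The sketch below evaluates \(P_{K}\) and the ionizing photon rate \(S_{\nu}\) required to balance it; converting that rate into a stellar mass requires the Diaz-Miller et al (1998) fluxes, which are not reproduced here, and the fiducial parameters are illustrative.

```python
# Sketch of the feedback condition defining M_*: the accretion power at the
# core boundary, P_K = 4 pi r_c^2 (rho0 v_inf^2 / 2) v_inf with v_inf from
# eq. (17), is balanced against the ionizing power P_i = (13.6 eV) S_nu.
# The sketch prints P_K and the photon rate S_nu needed to balance it; mapping
# S_nu to a stellar mass requires the Diaz-Miller et al (1998) fluxes, which
# are not reproduced here. n0 = 1e3 cm^-3 and r_c = 1 pc are illustrative.
import numpy as np

G, M_H, PC, LSUN = 6.674e-8, 1.6726e-24, 3.086e18, 3.828e33   # cgs
EV = 1.602e-12                                                # erg

def accretion_power(r_c_pc, p, n0=1e3, mu=2.36):
    rho0 = mu * M_H * n0
    r_c = r_c_pc * PC
    v_inf = np.sqrt(8.0 * np.pi * G * rho0 / (3.0 - p)) * r_c  # eq. (17)
    return 2.0 * np.pi * rho0 * r_c**2 * v_inf**3              # erg s^-1

for p in (1.0, 1.5, 1.9):
    p_k = accretion_power(1.0, p)
    s_nu = p_k / (13.6 * EV)       # ionizing photons s^-1 needed to balance P_K
    print(f"p = {p:.1f}: P_K = {p_k / LSUN:.2f} Lsun, required S_nu = {s_nu:.2e} s^-1")
```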
It is important to note that the maximum stellar mass allowed by the feedback, \(M_{*}\), plotted in the two panels of Fig. 3, is _not_ intended to predict the mass of the stars that will actually form within a core, since the core will undoubtedly undergo fragmentation, forming stars with a certain mass distribution, probably through the competitive accretion mechanism (Bonnell et al, 2001a,b). This result should instead be interpreted as meaning that, due to the accretion flow, both the core's mass \(M_{\rm c}\) and the total mass available for star formation increase with time, thus increasing the mass of the most massive possible star (\(M_{\rm i}\)), as well as the latter's corresponding photoionizing feedback power. When the mass of this most massive possible star surpasses the mass of the star that can disrupt the accretion flow (\(M_{*}\)), the core can stop growing, and the star-formation episode may be terminated by gas exhaustion, as inferred observationally by Ginsburg et al (2016).

Figure 4: Evolution of the mass function for one of the star forming regions (“Group 1”) appearing in the simulation labeled “LAF1” of (González-Samaniego and Vazquez-Semadeni, 2020), representing cluster formation in clouds undergoing global hierarchical collapse, including stellar feedback. More massive stars appear later during the evolution of the region. We show this simulation because the one without feedback (LAF0) does not produce a Salpeter slope, and produces massive stars too rapidly, although they still form later than less massive ones.

Figure 5: Evolution of the stellar and dense gas mass in the inner parsec of Group 2 for the simulation without feedback (LAF0).

Although our model is highly idealized and approximate, its main relevance is the implication that, when accretion both from a clump onto a core and from the core onto the star(s) is taken into account, the maximum stellar mass that a core can harbor increases over time, together with the core's mass. This provides a proof-of-concept for the numerical result that more massive stars appear later in a cluster (Vazquez-Semadeni et al, 2017), and for the observed property of clusters that the mass of the most massive star correlates with the mass of its parent cluster (e.g., Weidner and Kroupa, 2006; Weidner et al, 2010).

## 4 Discussion

### Caveats

Our model is certainly highly idealized and, as a consequence, it still falls short of having strong predictive power. Its main limitations are:

* _The restriction to spherical symmetry._ Although we have considered the possible effect of filaments in producing some flattening of the effective spherically-averaged density profile, all our accretion rates are computed assuming spherical symmetry. This assumption makes the gravitational potential deeper than if the same mass were distributed in a sheet- or filament-like geometry, which would instead yield longer infall times and smaller accretion velocities (Toala et al, 2012; Pon et al, 2012).

* _The neglect of any form of agents counteracting gravity_, such as thermal pressure or magnetic fields, or of low-mass-star feedback, that may delay the gravitational contraction. As a consequence, our infall speeds and accretion rates should be considered as upper limits, and our evolutionary timescales as lower limits, to what may be expected in actual molecular cloud cores.
Nevertheless, our model incorporates several important features observed in numerical simulations that point towards aspects of gravitational contraction that, to our knowledge, have not been previously addressed, and provides a proof-of-concept discussion of the mechanism of gravitational choking, and the importance of the density profile slope in determining accretion rates. In addition, our model compares well to order of magnitude with the numerical results of Gonzalez-Samaniego and Vazquez-Semadeni (2020). ## 5 Summary and conclusions In this paper we have presented a simple, idealized model, in quasi-spherical geometry, of a mechanism that can account for the observation, in numerical simulations of cloud formation and evolution from warm diffuse atomic gas, that star-forming regions grow in mass and size, and that massive stars form with a delay of a few to several megayears after the first stars begin to form. The model is based on two main ingredients. First, on the assumption that there is accretion both from the core to the stars and from the cloud to the core. That is, we account for the two last stages of the multi-stage accretion process predicted by the GHC scenario to be occurring in molecular clouds). Second, on the result from Paper I that the gravity-driven accretion rate in a core varies with radius in general. In a core with a power-law density profile with logarithmic slope \(-p\), the accretion rate is only independent of radius when \(p=2\). For \(p<2\), the accretion rate decreases inwards, causing some of the infalling material to stagnate in the gaseous phase, increasing the core's gaseous mass. We call this process "gravitational choking", and it continues until the stellar mass at the center becomes large enough to dominate the core-to-star accretion rate, at which point the core may be depleted (Appendix C). In Paper I we had furthermore shown that, under strict spherical geometry, the value \(p=2\) is an _attractor_, meaning that the slope evolves toward that value, reaching it at the time when a protostar forms. However, here we have shown (Appendix A) that deviations from sphericity, such as those induced by the presence of filaments feeding a hub, may allow the spherically-averaged slope to remain at values \(p<2\), thus allowing for the core's mass to grow even after stars have begun to form at the core's center, and for simultaneous growth of the core and stellar masses. A key element of the GHC paradigm (Vazquez-Semadeni et al, 2019) is the interconnection of scales through mass accretion. Computing the accretion rate at the core's outer boundary, as well as the accretion from the core to the "stellar region" (defined by an inner radius within the core comparable to the estimated size of the Oort cloud), we were able to obtain the simultaneous growth by accretion of both the core's and stellar masses. The latter constitutes an upper limit to the mass of the most massive star that the core can harbor. Moreover, we also computed the mass of the star whose photoionizing radiation flux balances the power of the accretion onto the core. When the total mass in stars is large enough to produce this disrupting star, we suggest that the accretion can be halted, and the local star formation event can be terminated. 
In the GHC picture, the evolution of the core and the formation of massive stars with it are not defined by the core's own mass, as in the competitive accretion or the turbulent-core models, but by the mass in its environment _susceptible to be accreted onto the core_. So, the formation of a star capable of halting the accretion flow sets the limit of the mass available to fall onto the core. In this way, the model implies that more massive stars require more time to form than low-mass stars, because a sufficient amount of mass must be collected at the center of the cores, and that the mass of the most massive star present in a core must correlate with the core's own mass, because the core's mass also grows while the mass available to form stars in its center increases. Our model is, of course, highly idealized, as it assumes a spherical geometry with a single power law for the density profile, and neglects any processes opposing the collapse before the disruption of the accretion flow. The only deviation from sphericity is the consideration that filamentary structures may flatten the spherically-averaged density profile. However, our model provides a proof-of-concept that the simultaneous core-to-stars and cloud-to-core accretion processes imply a delayed formation of the more massive stars, and a correlation between the mass of the most massive star and that of its parent core. The latter suggests that a correlation between the mass of the most massive star and that of its host cluster should exist as well. Note, however, that the delayed formation of the massive stars does not imply that the less massive stars are all older than the more massive ones. This is because the star formation rate also increases, and so most of the low-mass stars are coeval with the more massive ones (Vazquez-Semadeni et al, 2017). However, a small population of old, low-mass stars is expected to exist in the cluster in addition to the majority of young, low- and high-mass stars. The main implication of our model is that the distribution of stellar masses in a star-forming region extends to ever larger masses as time proceeds, until the local episode of star formation is halted by the stellar feedback. Finally, our model can be considered as a time-dependent alternative to the _turbulent core_ model of McKee and Tan (2003), which was based on the assumption of turbulent support of a massive core, and has triggered intense searches for prestellar massive cores. Instead, our model generally predicts that, by the time massive stars begin to form in a clump or core, a significant number of low-mass stars must have already formed. ## 6 Methods In Gonzalez-Samaniego and Vazquez-Semadeni (2020), two simulations of converging flows in the warm Galactic atomic medium were considered, one without stellar feedback, labeled LAF0, and one with a sub-grid prescription for emulating the UV ionizing feedback from massive stars, extended down to masses \(\sim 1M_{\odot}\), labeled LAF1. The numerical box was 256 pc on a side, at a maximum resolution of 0.0625 pc. These simulations, first presented in Colin et al (2013), used a stochastic star formation prescription that allowed the sink particles to have stellar masses, with a Salpeter (Salpeter, 1955) slope in the case with feedback, making them the first simulations at the giant molecular cloud scale with stellar-mass sink particles. We refer the reader to that paper for details on the simulations. In Figs. 
1, 4, and 5 of this paper, we refer to "clumps", which are defined as spherical regions centered at the center of mass of the stellar clusters forming in the simulations, and with the radii indicated in the figures.
2301.09661
Estimating marginal treatment effects from observational studies and indirect treatment comparisons: When are standardization-based methods preferable to those based on propensity score weighting?
In light of newly developed standardization methods, we evaluate, via simulation study, how propensity score weighting and standardization-based approaches compare for obtaining estimates of the marginal odds ratio and the marginal hazard ratio. Specifically, we consider how the two approaches compare in two different scenarios: (1) in a single observational study, and (2) in an anchored indirect treatment comparison (ITC) of randomized controlled trials. We present the material in such a way that the matching-adjusted indirect comparison (MAIC) and the (novel) simulated treatment comparison (STC) methods in the ITC setting may be viewed as analogous to the propensity score weighting and standardization methods in the single observational study setting. Our results suggest that current recommendations for conducting ITCs can be improved and underscore the importance of adjusting for purely prognostic factors.
Harlan Campbell, Julie E Park, Jeroen P Jansen, Shannon Cope
2023-01-23T19:00:40Z
http://arxiv.org/abs/2301.09661v3
Estimating marginal treatment effects from single studies and indirect treatment comparisons: When are standardization-based methods preferable to inverse probability of treatment weighting? ###### Abstract In light of newly developed standardization methods, we evaluate, via simulation study, how inverse probability of treatment weighting (IPTW) and standardization-based approaches compare for obtaining estimates of the marginal odds-ratio and the marginal hazards ratio. Specifically, we consider how the two approaches compare in two different scenarios: (1) in a single comparative study (either randomized or non-randomized), and (2) in an anchored indirect treatment comparison of randomized controlled trials (where we compare the matching-adjusted indirect comparison (MAIC) and simulated treatment comparison (STC) methods). We conclude that, in general, standardization-based methods with correctly specified outcome models are more efficient than those based on IPTW. While IPTW is robust to model misspecification in a single comparative study, we find that this is not necessarily the case for MAIC in an indirect treatment comparison. C 2010 1 Footnote 1: Corresponding author: e-mail: harlan [email protected] Causal Inference; Evidence synthesis; Indirect treatment comparisons; Non-collapsibility; Observational studies ## 1 Introduction Recently there has been a spirited debate on the most appropriate target of inference when it comes to estimating relative treatment effects (Xiao et al., 2021; Remiro-Azocar, 2021; Liu et al. 2022). Some researchers contend that the most appropriate target is the marginal treatment effect while others make an argument in favour of the conditional treatment effect. The distinction between the two is important when the parameter of interest is non-collapsible, which is the case for the frequently used odds ratio (OR) and hazard ratio (HR). This is because with non-collapsible parameters, the marginal and conditional estimands may not coincide due to non-linearity; see Greenland and Pearl (2011). Even in the absence of confounding bias, with perfectly balanced treatment-groups, marginal and conditional estimates for the OR/HR will be different whenever variables included in the logistic/Cox outcome regression model are associated with the outcome (i.e., whenever covariates are "prognostic"); see Greenland et al. (1999). Put another way, in logistic and Cox proportional hazards regression models, conditioning on a prognostic variable inherently changes the relative treatment effect being estimated. In this article, we put aside the question of which is the most appropriate target of inference and focus on how best to obtain marginal estimates for the OR and HR. There are two general approaches for obtaining covariate-adjusted marginal estimates: (1) inverse probability of treatment weighting (IPTW) (also known as propensity score weighting) and (2) standardization (also known as g-computation); see Hernan and Robins (2006). Of course, in the absence of any confounders, the simplest way to obtain a marginal estimate is to simply fit a regression model without any covariates, thereby obtaining an unadjusted marginal estimate. However, the covariate-adjusted marginal estimate (obtained using either IPTW or standardization) will be more efficient (i.e., have a smaller standard error/narrower confidence interval) than the unadjusted marginal estimate whenever the covariates in question are prognostic of the outcome; see Hernandez et al. (2004), Kahan et al. 
(2014), and Colantuoni and Rosenblum (2015). IPTW-based methods have their origins with Rosenbaum and Rubin (1983) and are ever increasing in popularity; see Webster-Clark et al. (2021) for a recent review of the literature. Researchers have also been applying standardization-based methods for marginalizing adjusted ORs for several decades (Snowden, 2011; Vansteelandt and Keiding, 2011; Zhang, 2008). However, for time-to-event outcomes, a satisfactory standardization procedure for marginalizing adjusted HRs has only recently been developed by Daniel et al. (2021). Relative treatment effects can be estimated by means of a randomized controlled trial (RCT), an observational study, or by combining the findings of multiple RCTs, each comparing a subset of the treatment comparisons of interest. In the simplest form, this is known as an anchored indirect treatment comparison (ITC), but the general approach is known as network meta-analysis (NMA) (Tonin et al., 2017). It is often the case that individual participant data (IPD) is available for one of the RCTs included in an ITC or NMA, whereas data is restricted to published aggregate level data (AD) for the other trial(s). Two so called "IPD-AD methods" often used in ITCs of two trials are: (1) matching-adjusted indirect comparison (MAIC), an IPTW-based approach (Signorovitch et al., 2010; Ishak et al., 2015); and (2) simulated treatment comparison (STC) (Ishak et al., 2015; Caro et al., 2010), a regression-based approach which can be done with standardization in order to obtain a marginal estimate (Remiro-Azocar et al., 2021). Given the recent development of standardization-based methods for single studies, as well as for ITCs to estimate marginal treatment effects, it is important to compare their performance to IPTW-based methods to determine the optimal approach in terms of bias and efficiency. Our goal in this article is to compare IPTW-based and standardization-based approaches for obtaining estimates of the marginal OR and the marginal HR in (1) a single comparative study (randomized and non-randomized), and (2) in an anchored ITC of RCTs. This paper is organized as follows. In Section 2, we conduct a first simulation study to determine how IPTW and standardization compare in a single comparative study in the presence/absence of measured confounders and effect modifiers. In Section 3, we conduct a simulation study to compare MAIC and STC approaches in an anchored ITC of RCTs. We summarize our overall findings and discuss implications and avenues for future research in Section 4. ## 2 Marginalization in the context of a single comparative study ### Objectives Consider a single study for which IPD are available for the exposure, outcome, and covariates. If a covariate is a prognostic factor (but not a confounder), adjusting for it will lead to more efficient estimation than excluding it and it is understood that IPTW and standardization methods will typically result in identical gains in efficiency (Williamson, Forbes, and White, 2014; Daniel et al., 2021; Morris et al., 2022; Hernan and Robins, 2006). However, if a covariate is a confounder (i.e., predictive both of the outcome and of the treatment assignment), standardization is thought to be more efficient than IPTW, but there is relatively little research demonstrating this definitively; see simulation studies of Daniel et al. (2021) and Chatton et al. (2020). With Simulation Study 1, we aim to shed light on this point. Daniel et al. 
(2021) present their standardization method, based on non-parametric Monte Carlo integration, for marginalizing adjusted HRs in the context of both a RCT and an observational study. However, Daniel et al. (2021) do not explore how their novel procedure might handle effect modifiers (i.e., treatment-covariate interaction effects). Chatton et al. (2020), in their simulation study, also fail to consider how using standardization to estimate the marginal OR may be impacted by effect modifiers. It also remains unclear how IPTW performs in the presence of effect modifiers, with Morris et al. (2022) recently stating that IPTW simply "does not handle" treatment-covariate interactions. However, a simulation study by Delaney et al. (2017) suggests that IPTW provides unbiased estimates of the OR in the presence of non-linear treatment-covariate interactions whereas a standard logistic regression will be biased unless correctly specified. Delaney et al. (2017) specifically consider a treatment by covariate-squared interaction term. The second objective of Simulation Study 1 is to investigate how IPTW and standardization perform relative to one another in the presence of effect modifiers. We will also investigate the consequences of misspecifying one's model with respect to effect modifiers. ### Data generation and mechanism We simulate data as in Daniel et al. (2021) with the addition of possible treatment-covariate interaction effects. Let \(X\) be the binary treatment indicator (equal to either 0 or 1), and \(L\) be a measured continuous covariate. For the binary outcome data, let \(Y\) be the binary outcome (equal to either 0 or 1). We simulate \(Y\) values from the following logistic regression model: \[\text{logit}(\text{Pr}(Y=1|X=x,L=l))\ =\ 1\ +\ \alpha x\ +\beta_{1}l\ +\beta_{2}xl\ +\beta_{3}xl^{2},\] where \(\alpha\), \(\beta_{1}\), \(\beta_{2}\) and \(\beta_{3}\) are regression coefficients. As in Daniel et al. (2021), for TTE outcome data, we simulate individuals to enter the study uniformly over 2 years, and their event time then occurs at a random \(Y\) years after their entry time. We simulate \(Y\) values from a Weibull distribution with density function: \[f(y|a,\sigma)=\Big{(}\frac{\alpha}{\sigma}\Big{)}\Big{(}\frac{y}{\sigma}\Big{)} ^{a-1}\exp\Big{(}-\Big{(}\frac{y}{\sigma}\Big{)}^{a}\Big{)},\] where \(a=3/2\) and \(\sigma=(0.1\exp(\alpha x+\beta_{1}l+\beta_{2}xl+\beta_{3}xl^{2}))^{-\frac{2}{3}}\), and \(\alpha\), \(\beta_{1}\), \(\beta_{2}\), and \(\beta_{3}\) are regression coefficients. All individuals having not experienced the event at 10 years since the start of the recruitment window are censored and the timescale for analysis is time since recruitment. For both binary outcome and TTE outcome data, we simulate the treatment exposure, \(X\), from the following logistic model: \[\text{logit}(\text{Pr}(X=1|L=l))\ =\ \beta_{4}l,\] where \(\beta_{4}\) is a regression coefficient. Note that when \(\beta_{4}=0\), treatment exposure will be independent of \(L\) and will be approximately balanced. Finally, we simulate the covariate \(L\) from a standard normal distribution: \[L\sim Normal(0,1).\] We consider six different scenarios (S1-S6) for both the binary outcome data and the TTE outcome data, and for both a null treatment effect (setting \(\alpha=0\)) and non-null treatment effect (setting \(\alpha=1\)): S1. \(L\) is entirely unrelated to \(X\) and \(Y\) such that the regression coefficients are set to: \(\beta_{1}=0\), \(\beta_{2}=0\), \(\beta_{3}=0\), \(\beta_{4}=0\); S2. 
\(L\) is a prognostic factor, but not an effect modifier such that: \(\beta_{1}=1\), \(\beta_{2}=0\), \(\beta_{3}=0\), \(\beta_{4}=0\); S3. \(L\) is a confounder, but not an effect modifier such that: \(\beta_{1}=1\), \(\beta_{2}=0\), \(\beta_{3}=0\), \(\beta_{4}=1\); S4. \(L\) is a prognostic factor and an effect modifier such that: \(\beta_{1}=1\), \(\beta_{2}=1\), \(\beta_{3}=0\), \(\beta_{4}=0\); S5. \(L\) is a confounder and an effect modifier such that: \(\beta_{1}=1\), \(\beta_{2}=1\), \(\beta_{3}=0\), \(\beta_{4}=1\); and S6. \(L\) is a confounder and a nonlinear effect modifier such that: \(\beta_{1}=1\), \(\beta_{2}=0\), \(\beta_{3}=1\), \(\beta_{4}=1\). Note that for all scenarios, whenever \(\alpha=0\), we set \(\beta_{2}=\beta_{3}=0\) so that the treatment effect across all levels of the covariate is indeed null. As such, for Scenarios S4, S5, and S6, we only consider \(\alpha=1\) (and not \(\alpha=0\)) to avoid duplication of null scenarios. Scenarios S1, S2, and S4 can be thought of as RCTs since there are no confounders, whereas scenarios S3, S5, and S6 can be thought of as observational studies in which a single confounding variable is known and measured. ### Estimands of interest As in Daniel et al. (2021), we target the average treatment effect (ATE) (i.e., the effect that would be observed by switching every individual in the whole population from one treatment to the other). As target estimands, we consider the causal marginal log-odds ratio (log-OR) for the analysis of a binary outcome and the causal marginal log-hazard ratio (log-HR) for the analysis of a time-to-event outcome. Note that even when the conditional HR between an exposed individual and an unexposed individual with the same \(L\) values is constant over time, the marginal HR will vary over time if \(L\) is a prognostic factor (due to the non-collapsibility of the HR). As such, the definition of the marginal HR technically depends on the timeframe of interest. Daniel et al. (2021) explain this subtlety as follows: The marginal HR can be defined as "the probability limit (as the sample size \(\rightarrow\infty\)) of the marginal hazard ratio that would be estimated [with an unadjusted univariate Cox regression model] from an RCT of the chosen length". For the analysis in our simulation study, the timeframe of interest is 10 years (the point at which all remaining individuals are censored). We approximate the true non-null marginal log-OR/log-HR values for each of the six scenarios by simulating 10 million observations from the true sampling distributions. We obtain values for the marginal log-OR of 1.000, 0.865, 0.865, 0.417, 0.417, and 1.572, for Scenarios S1-S6, respectively; and values for the marginal log-HR of 1.000, 0.659, 0.659, 0.352, 0.352, and 1.217 for Scenarios S1-S6, respectively. ### Methods to be compared We compare four approaches: (A1) univariate (unadjusted) logistic/Cox regression model, (A2) IPTW, (A3) standardization without an interaction term, and (A4) standardization with a \(X\)by \(L\) interaction term. The **IPTW method** involves two steps. First, a logistic regression model is fit with \(X\), the binary treatment indicator, as the outcome and \(L\) is the predictor. From this model, one estimates the so-called propensity scores, \(e\), which are the probabilities of each subject receiving the treatment of interest conditional on their observed baseline covariate(s): \(e=\Pr(X=1|L=l)\). 
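As a concrete illustration of this two-step procedure (the weighted outcome regression that constitutes the second step is described in the next paragraph), the minimal Python sketch below simulates a single dataset from the Section 2.2 binary-outcome model under scenario S3 (\(\alpha=1\), \(\beta_{1}=\beta_{4}=1\), \(\beta_{2}=\beta_{3}=0\)) and computes the IPTW estimate of the marginal log-OR. The use of Python/statsmodels and all variable names are illustrative assumptions rather than the implementation used for the simulation study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
alpha, b1, b4 = 1.0, 1.0, 1.0                  # scenario S3: L is a confounder

# Simulate one dataset from the Section 2.2 binary-outcome model
L = rng.normal(size=n)
X = rng.binomial(1, 1 / (1 + np.exp(-b4 * L)))              # logit Pr(X=1|L) = b4*L
Y = rng.binomial(1, 1 / (1 + np.exp(-(1 + alpha * X + b1 * L))))

# Step 1: propensity-score model and stabilized weights
ps_fit = sm.GLM(X, sm.add_constant(L), family=sm.families.Binomial()).fit()
e = ps_fit.predict(sm.add_constant(L))                       # estimated propensity scores
w = X * X.mean() / e + (1 - X) * (1 - X.mean()) / (1 - e)    # stabilized weights

# Step 2: weighted univariate logistic regression of Y on X alone;
# the coefficient on X is the IPTW estimate of the marginal log-OR
# (the reported SE from this fit ignores estimation of the weights)
out_fit = sm.GLM(Y, sm.add_constant(X.astype(float)),
                 family=sm.families.Binomial(), freq_weights=w).fit()
print(f"IPTW marginal log-OR: {out_fit.params[1]:+.3f}")  # true value in S3 is 0.865 (Section 2.3)
```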
In the second step, one estimates the marginal log-OR (or log-HR) by fitting a univariate logistic (or Cox) regression model with \(Y\) as the outcome and \(X\) as the only predictor, and with individuals weighted according to the stabilized weights: \(w=X\Pr(X=1)/e\ +\ (1-X)\Pr(X=0)/(1-e)\). See Austin and Stuart (2015) for further details. For the binary outcome data, **the standardization method** also involves two steps. The first step is fitting the "outcome model", which in our case is a logistic regression conditional on \(L\) (either with or without a treatment-covariate interaction term), to obtain estimates of the conditional probabilities: \(\Pr(Y=1|X=1,L=l_{i})\) and \(\Pr(Y=1|X=0,L=l_{i})\), for \(i\) in 1,..., \(N\), where \(N\) is the number of observations in the study. Note that, for the simulation study, neither of the standardization approaches (A3 and A4) specifies an outcome model with a \(XL^{2}\) interaction term. In the second step, one averages over the empirical distribution of the covariate(s) to obtain estimates of the marginal probabilities: \[\overset{\wedge}{\Pr}(Y=1|X=x)=\frac{1}{N}\sum_{i=1}^{N}\overset{\wedge}{\Pr}(Y=1|X=x,L=l_{i}),\] for \(x\)=0 and \(x\)=1. Finally, the covariate-adjusted estimator of the marginal log-OR is calculated as: \[\log\text{-OR}\ =\ \log\left(\frac{\overset{\wedge}{\Pr}(Y=1|X=1)}{1-\overset{\wedge}{\Pr}(Y=1|X=1)}\right)-\log\left(\frac{\overset{\wedge}{\Pr}(Y=1|X=0)}{1-\overset{\wedge}{\Pr}(Y=1|X=0)}\right).\] For TTE outcome data, the principle of standardization is similar. In the first step, the outcome model is a Cox regression model conditional on \(L\) (either with or without a treatment-covariate interaction term). In the second step, to average over the empirical distribution of the covariate(s), one uses non-parametric Monte Carlo integration following the ten-step procedure proposed by Daniel et al. (2021). For details on how to implement this procedure, see Section 4 of Daniel et al. (2021). ### Sample size, simulations, and performance measures Consistent with Daniel et al. (2021), we set the study sample size to \(N\)=1000 and values for the covariate \(L\) are held fixed across simulations (i.e., only simulated once). Following the advice of Daniel et al. (2021), the standardization method is done with a value of \(m\) (the number of simulations used for Monte Carlo integration) set such that, upon repeating the analysis a second time with a value of \(m\) 10% larger, the point estimate of the marginal log-HR is within 0.005 of the original result (with a minimum of \(m\)=50000 and a maximum of \(m\)=100000). For each of the 18 configurations (6 scenarios (S1-S6), 2 outcome types (binary and TTE), 2 possible conditional treatment effects (\(\alpha=0\) and \(\alpha=1\)), less duplications of null scenarios), we simulate 10000 datasets and calculate the marginal treatment effect estimate using each of the 4 methods. Performance measures are defined by the sample mean and standard deviation of these 10000 estimates, which respectively reflect our simulation estimators of the mean and empirical standard error of each estimator. We also calculate the Monte Carlo SE of both simulation estimators using the formulae given in Morris et al. (2019). We quote our results to 3 decimal places. ### Results Results for the binary outcome are listed in Table 1. For the six scenarios, S1-S6, we see that: * **S1:** When the measured covariate is unrelated to both treatment exposure and outcome, all methods appear to be unbiased and equally efficient. 
The cost of adjusting for an unrelated variable therefore appears to be relatively minimal if not entirely negligible. * **S2:** When the measured covariate is predictive of the outcome but independent of treatment assignment, the IPTW, standardization without interaction, and standardization with interaction approaches are comparable in terms of efficiency, whereas the unadjusted approach is somewhat less efficient. * **S3:** When the measured covariate is a confounder but not effect modifier, the IPTW is less efficient than both standardization approaches. The standardization without interaction is more efficient than the standardization with interaction approach. The unadjusted approach, as expected, is biased because of failing to adjust for the confounder. * **S4:** When the measured covariate is an effect modifier, the IPTW, the standardization without interaction and standardization with interaction approaches all seem unbiased and equally efficient. The unadjusted approach is, as expected, also unbiased but notably less efficient. **S5:** When the measured covariate is a confounder and an effect modifier, the unadjusted and standardization without interaction approaches are both biased. The IPTW and standardization with interaction approaches are both unbiased, with the latter being notably more efficient. **S6:** When the measured covariate is a confounder and a nonlinear effect modifier, the unadjusted approach and both standardization approaches, define misspecified outcome models and are therefore biased. The IPTW is the only approach that is unbiased. Results for the TTE outcome are listed in Table 2 and are similar to the above with two notable exceptions: **S2:** When the measured covariate is prognostic of the outcome but independent of treatment assignment, the standardization approaches appear to be more efficient than the IPTW: For \(\alpha=1\), compare SE=0.046 (std. without interaction) and SE=0.052 (std. with interaction) to SE=0.054 (IPTW). These estimated SEs themselves are estimated with SE of 0.00033, 0.00037, and 0.00039, respectively (Morris et al., 2019). Thus, even if we ignore the correlation between the estimated SEs (expected to be greater than 0 given that the same 10000 simulated data sets are used for all approaches), the Z-score for the difference between the estimated SE for std. without interaction and the estimated SE for IPTW is 15.7, suggesting that the observed difference is not simply the result of Monte Carlo error (due to an insufficient number of simulated data sets in the simulation study). (The Z-score for the difference between the estimated SE for std. with interaction and the estimated SE for IPTW is 3.7.) **S4 and S5:** When the measured covariate is an effect modifier, the standardization without interaction approach is biased. The standardization with interaction approach is more efficient than both the IPTW and the unadjusted approaches. In summary, the standardization with interaction approach is, for scenarios S1-S5, unbiased and is always more (or equally as) efficient relative to the IPTW approach. Furthermore, if one is confident that a measured covariate is not an effect modifier, one can potentially gain even more efficiency by removing the interaction effect. However, the standardization approach can be biased if the outcome model defined is misspecified, as in scenario S6. 
In contrast, results suggest that the specific functional form of the treatment-covariate interaction does not need to be specified (or even known) in order to obtain unbiased estimates using IPTW. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Scenario** & & **True marginal log-OR*** & **Performance measure** & **univariate regression** & **IPTW** & **std. _without interaction_ & **std. _with interaction_ \\ \hline \hline S1. \(L\) is unrelated to \(X\) and unrelated to \(Y\) & \(\beta_{1}=0\) & 0.000 & Mean (MC error) & 0.002 (0.00143) & 0.002 (0.00143) & 0.002 (0.00143) & 0.002 (0.00143) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE (MC error) & 0.143 (0.00101) & 0.143 (0.00101) & 0.143 (0.00101) \\ \cline{2-7} & \(\beta_{1}=0\) & 1.000 & Mean (MC error) & 1.008 (0.00171) & 1.008 (0.00172) & 1.008 (0.00172) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE (MC error) & 0.171 (0.00121) & 0.172 (0.00121) & 0.172 (0.00121) \\ \hline S2. \(L\) is prognostic, but not an effect modifier & \(\beta_{1}=1\) & 0.000 & Mean (MC error) & 0.000 (0.00140) & 0.001 (0.00129) & 0.001 (0.00128) & 0.001 (0.00128) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE (MC error) & 0.140 (0.00099) & 0.129 (0.00091) & 0.128 (0.00091) & 0.128 (0.00091) \\ \cline{2-7} & \(\beta_{1}=1\) & 0.865 & Mean (MC error) & 0.871 (0.00159) & 0.871 (0.00150) & 0.871 (0.00149) & 0.871 (0.00149) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE (MC error) & 0.159 (0.00112) & 0.150 (0.00106) & 0.149 (0.00105) & 0.149 (0.00105) \\ \hline S3. \(L\) is a confounder, but not an effect modifier & \(\beta_{1}=1\) & 0.000 & Mean (MC error) & 0.706 (0.00138) & 0.003 (0.00144) & 0.002 (0.00137) & 0.002 (0.00137) \\ & \(\beta_{4}=0\) & & Empirical SE (MC error) & 0.138 (0.00098) & 0.144 (0.00102) & 0.137 (0.00097) & 0.137 (0.00097) \\ \cline{2-7} & \(\beta_{1}=1\) & 0.865 & Mean (MC error) & 1.612 (0.00172) & 0.875 (0.00186) & 0.872 (0.00172) & 0.871 (0.00176) \\ & \(\beta_{3}=1\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE (MC error) & 0.172 (0.00122) & 0.186 (0.00132) & 0.172 (0.00121) & 0.176 (0.00124) \\ \hline S4. \(L\) is prognostic, and an effect modifier & \(\beta_{1}=1\) & 0.417 (\(\beta_{2}=1\) & Mean (MC error) & 0.418 (0.00145) & 0.418 (0.00129) & 0.418 (0.00126) & 0.417 (0.00126) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE (MC error) & 0.145 (0.00103) & 0.129 (0.00091) & 0.126 (0.00089) & 0.126 (0.00089) \\ \hline S5. \(L\) is a confounder, and an effect modifier & \(\beta_{1}=1\) & 0.417 (\(\beta_{2}=1\) & Empirical SE (MC error) & 1.397 (0.00161) & 0.423 (0.00155) & 0.517 (0.00153) & 0.420 (0.00141) \\ & \(\beta_{3}=1\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE (MC error) & 0.161 (0.00114) & 0.155 (0.00110) & 0.153 (0.00108) & 0.141 (0.00100) \\ \hline S6. 
\(L\) is a confounder, and a nonlinear effect modifier & \(\beta_{1}=1\) & 1.572 (\(\beta_{2}=0\) & Mean (MC error) & 2.086 (0.00199) & 1.587 (0.00213) & 1.373 (0.00204) & 1.433 (0.00212) \\ & \(\beta_{3}=1\) & & & & & \\ & \(\beta_{4}=1\) & & Empirical SE (MC error) & 0.199 (0.00141) & 0.213 (0.00151) & 0.204 (0.00150) \\ \hline \end{tabular} \end{table} Table 1: Results from Simulation Study 1 for binary outcome data; results obtained for the marginal log-OR.*Values listed under “True marginal log-OR” were calculated by approximation by simulating 10 million observations from the true Binomial sampling distributions. Numbers in red correspond to methods that are likely biased. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \hline **Scenario** & **True** & **Performance** & **univariate** & **IPTW** & **std.** _without_ & **std.** _with_ \\ & & **marginal** & **measure** & **regression** & & **interaction** & **interaction** \\ \hline S1. \(L\) is unrelated to \(X\) and unrelated to \(Y\) & \(\beta_{1}=0\) & 0.000 & Mean & 0.000 & 0.000 & 0.000 & 0.000 \\ & \(\beta_{2}=0\) & & (MC error) & (0.00066) & (0.00066) & (0.00066) & (0.00066) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE & 0.066 & 0.066 & 0.066 & 0.066 \\ & \(\alpha=0\) & & (MC error) & (0.00047) & (0.00047) & (0.00047) & (0.00047) \\ \hline & \(\beta_{1}=0\) & 1.000 & Mean & 1.000 & 1.000 & 1.000 & 1.000 \\ & \(\beta_{2}=0\) & & (MC error) & (0.00070) & (0.00070) & (0.00070) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE & 0.070 & 0.070 & 0.070 & 0.070 \\ & \(\alpha=1\) & & (MC error) & (0.00050) & (0.00050) & (0.00050) & (0.00050) \\ \hline S2. \(L\) is prognostic, but & \(\beta_{1}=1\) & 0.000 & Mean & -0.001 & -0.001 & -0.001 & -0.001 \\ not an effect modifier & \(\beta_{2}=0\) & & (MC error) & (0.00068) & (0.00054) & (0.00047) & (0.00052) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE & 0.068 & 0.054 & 0.047 & 0.052 \\ & \(\alpha=0\) & & (MC error) & (0.00048) & (0.00038) & (0.00033) & (0.00037) \\ \hline & \(\beta_{1}=1\) & 0.659 & Mean & 0.660 & 0.660 & 0.660 & 0.660 \\ & \(\beta_{2}=0\) & & (MC error) & (0.00068) & (0.00055) & (0.00046) & (0.00052) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE & 0.068 & 0.055 & 0.046 & 0.052 \\ & \(\alpha=1\) & & (MC error) & (0.00048) & (0.00039) & (0.00033) & (0.00037) \\ \hline S3. \(L\) is a confounder, but not an effect modifier & \(\beta_{1}=1\) & 0.000 & Mean & 0.573 & 0.001 & -0.000 & -0.001 \\ & \(\beta_{2}=0\) & & (MC error) & (0.00066) & (0.00063) & (0.00051) & (0.00055) \\ & \(\beta_{3}=1\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE & 0.066 & 0.063 & 0.051 & 0.055 \\ & \(\alpha=0\) & & (MC error) & (0.00047) & (0.00045) & (0.00036) & (0.00039) \\ \hline & \(\beta_{1}=1\) & 0.659 & Mean & 1.260 & 0.664 & 0.661 & 0.661 \\ & \(\beta_{2}=0\) & & (MC error) & (0.00076) & (0.00078) & (0.00055) & (0.00059) \\ & \(\beta_{3}=1\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE & 0.076 & 0.078 & 0.055 & 0.059 \\ & \(\alpha=1\) & & (MC error) & (0.00053) & (0.00055) & (0.00039) & (0.00042) \\ \hline S4. \(L\) is prognostic, and an effect modifier & \(\beta_{1}=1\) & 0.352 & Mean & 0.353 & 0.352 & 0.623 & 0.353 \\ & \(\beta_{2}=1\) & & (MC error) & (0.00068) & (0.00049) & (0.00039) & (0.00044) \\ & \(\beta_{3}=0\) & & & & & \\ & \(\beta_{4}=0\) & & Empirical SE & 0.068 & 0.049 & 0.039 & 0.044 \\ & \(\alpha=1\) & & (MC error) & (0.00048) & (0.00034) & (0.00027) & (0.00031) \\ \hline S5. 
\(L\) is a confounder, and an effect modifier & \(\beta_{1}=1\) & 0.352 & Mean & 1.037 & 0.354 & 0.576 & 0.353 \\ & \(\beta_{2}=1\) & & (MC error) & (0.00074) & (0.00064) & (0.00045) & (0.00046) \\ & \(\beta_{4}=0\) & & Empirical SE & 0.074 & 0.064 & 0.045 & 0.046 \\ & \(\alpha=1\) & & (MC error) & (0.00052) & (0.00045) & (0.00032) & (0.00033) \\ \hline S6. \(L\) is a confounder, and a nonlinear effect modifier & \(\beta_{1}=1\) & 1.217 & Mean & 1.634 & 1.220 & 0.822 & 0.624 \\ & \(\beta_{2}=0\) & & (MC error) & (0.00074) & (0.00071) & (0.00057) & (0.00066) \\ & \(\beta_{3}=1\) & & & & & \\ & \(\beta_{4}=0\) & & (MC error) & (0.00052) & (0.00050) & (0.00040) & (0.00047) \\ \hline \hline \end{tabular} \end{table} Table 2: Results from Simulation Study 1 for time-to-event outcome data; results obtained for the marginal log-HR. *Values listed under “True marginal log-HR” were calculated by approximation by simulating 10 million observations from the true Weibull sampling distributions. Numbers in red correspond to methods that are likely biased. ## 3 Marginalization in the context of an indirect treatment comparison ### Objectives Consider a new treatment, treatment \(C\), which must be compared to an already established treatment, treatment \(B\). Suppose treatment \(B\) has been evaluated in a RCT against a comparator, treatment \(A\) (e.g., standard of care or placebo) and AD from this RCT are available. Furthermore, suppose treatment \(C\) has also been evaluated against treatment \(A\) in a RCT and individual level data (IPD) are available. Since treatments \(B\) and \(C\) have not been evaluated against each other directly within a single study, an anchored indirect treatment comparison (ITC) using IPD-AD methods must be performed to estimate the relative \(B\) vs. \(C\) treatment effect. Complicating matters further, the population in study 1 (the \(A\) vs. \(C\) study) may differ from the population in study 2 (the \(A\) vs. \(B\) study) such that the \(B\) vs. \(C\) treatment effect that would be observed in the study 1 population is not the same as the \(B\) vs. \(C\) treatment effect that would be observed in the study 2 population. Notably, both MAIC and STC methods implicitly assume that the target population is that from the study 2 population (i.e., the study for which only AD is available), and this will be our assumption for our simulation study as well. Recently, several simulation studies have attempted to compare the STC and MAIC approaches. Remiro-Azocar et al. (2021) compared MAIC and STC for time-to-event outcomes using a simulation and noted that conventional STC is inappropriate for estimating the marginal effect since conventional STC targets the conditional effect and will therefore have "systematic bias as a result of the non-collapsibility of the log hazard ratio." They identified the need for an "alternative formulation to STC that estimates a marginal treatment effect [...]", which was later accomplished in Remiro-Azocar (2022) who propose using g-computation (standardization) "as an extension to the conventional STC" and found that it provided "more precise and more accurate estimates than MAIC" for binary outcomes. For time-to-event outcomes, the g-computation extended STC approach proposed by Remiro-Azocar (2022) is less than ideal since it is limited to marginalization at a specific time-point. On this point, Remiro-Azocar (2022) speculate that the standardization methods of Daniel et al. (2021) may prove useful. 
As the first objective of Simulation Study 2, we follow through with this idea and investigate the standardization method of Daniel et al. (2021) as an extension to conventional STC for the estimation of marginal HRs in the context of an ITC of RCTs for time-to-event outcomes. The distribution of effect modifiers may differ across studies in an ITC such that, unless one can properly adjust for these differences, estimation will be rendered invalid; see Jansen and Naci (2013). Therefore, understanding how best to adjust for effect modifiers to obtain unbiased marginal estimates is particularly important. The second objective of Simulation Study 2 is therefore to examine how the STC and MAIC techniques compare in terms of obtaining unbiased marginal OR/HR estimates in the presence of effect modifiers. As in Simulation Study 1, we will also investigate the consequences of misspecifying one's model with respect to effect modifiers. Recently, Vo (2023) demonstrated that the presence of a nonlinear effect modifier will cause the STC method (with a misspecified outcome model) to be "substantially biased," and that "MAIC does not perform much better." (Notably, Vo (2023) only considers MAIC with matching on 1\({}^{\text{st}}\) moments, not on both the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) moments.) As in Delaney et al. (2017), Vo (2023) specifically considers a treatment by covariate-squared interaction term (i.e., a \(XL^{2}\) interaction term). Vo (2023)'s results would appear to perhaps contradict the conclusions of Signorovitch et al. (2010) who write that an "advantage of the proposed matching-adjusted [MAIC] estimate is its robustness to model mis-specification." Clarifying matters with respect to the "robustness" of MAIC and STC will be the third objective of Simulation Study 2. The MAIC and STC approaches are relatively new and as a result there is little guidance on best practices apart from the recommendations available is a National Institute for Health and Care Excellence Decision Support Unit Technical Support Document (Phillippo et al., 2016). Notably, Phillippo et al. (2016) recommend that when using STC for an anchored ITC, one should adjust for all imbalanced effect modifiers and that "further effect modifiers and prognostic variables may be adjusted for if this improves model fit to reduce standard error." When using MAIC for an anchored ITC, Phillippo et al. (2016) recommend adjusting for all effect modifiers but not adjusting for any purely prognostic variables ("all effect modifiers should be adjusted for to ensure balance and reduce bias, but no purely prognostic variables to avoid inflating standard error due to over-matching.") However, this recommendation seems to contradict the recent conclusions of Riley et al. (2022) who note that, for non-collapsible effect measures, failing to adjust for purely prognostic variables can lead to inconsistency when the distribution of prognostic factors differs across studies. Considering this seemingly conflicting advice, the fourth objective of Simulation Study 2 is to examine the consequences of adjusting for purely prognostic variables in both MAICs and STCs. When using MAIC, there is also the question of whether it is best to implement the matching on only first moments, or on both first and second moments. According to Hatswell et al. 
(2020), who considered MAIC in the context of unanchored ITCs, matching on both first and second moments cannot be recommended ("in no scenarios did it provide a meaningful advantage, while also showing the potential for large errors"). However, others disagree with this recommendation. For instance, Petto et al. (2019) recommend that, if "different variances of an important effect modifier are observed", one should match on both the first and second moments, even though this will likely lead to wider confidence intervals. Most recently, Phillippo et al. (2021) summarize the current (lack of) understanding writing that: "the question of when, if at all, it is preferable or necessary to match higher moments between populations for MAIC remains an interesting area for further theoretical research and simulation studies." Answering this question will be the fifth objective of Simulation Study 2. To summarize, the objectives of Simulation Study 2 are: 1. Determine if using the standardization method of Daniel et al. (2021) as an extension to conventional STC for the estimation of marginal HRs in an anchored ITC of RCTs is effective and efficient (relative to MAIC techniques). 2. Determine how STC and MAIC techniques compare in terms of obtaining unbiased marginal OR/HR estimates in the presence/absence of unbalanced effect modifiers. 3. Determine the consequences of misspecifying one's model with respect to effect modifiers. 4. Determine the consequences of adjusting for purely prognostic variables when using STC and MAIC. 5. Determine when, if at all, it is preferable or necessary to match on both the first and second moments (as opposed to only the first moment) when using MAIC. ### Data generation and mechanism As in Simulation Study 1, we consider both binary outcomes and TTE outcomes. We simulate individual participant level data (IPD) from "study 1" to mimic data from a RCT evaluating treatment \(C\) versus treatment \(A\), and simulate aggregate level data (AD) from "study 2" to mimic data from a RCT evaluating treatment \(B\) versus treatment \(A\). Let \(\mathcal{X}\) be the binary treatment indicator (equal to either 0 (treatment \(A\)) or 1 (treatment \(C\) in study 1, and treatment \(B\) in study 2)), let \(L\) be a measured continuous covariate, and let \(U\)be an unmeasured continuous covariate. In both studies, we assume the treatment exposure, \(X\), to be a balanced binary variable and therefore simulate \(X\sim Bernoulli(0.5)\). The covariates, \(L\) and \(U\), are simulated from two independent normal distributions such that, for study 1, we have: \[L\sim Normal(\mu_{L1},\sigma_{L1}),\] and \[U\sim Normal(\mu_{U1},\sigma_{U1});\] and, for study 2, we have: \[L\sim Normal(\mu_{L2},\sigma_{L2}),\] and \[U\sim Normal(\mu_{U2},\sigma_{U2}).\] For binary outcome data, \(Y\) is the binary outcome, equal to either 0 or 1. For study 1, \(Y\) is simulated from the following logistic regression model: \[\text{logit}\big{(}\Pr(Y=1|X=x,L=l,U=u)\big{)}=\ 1\ +\ \alpha_{AC}x\ +\beta_{1}l\ +\beta_{2,AC}xl\ +\beta_{3,AC}xl^{2}+\gamma_{1}u+\gamma_{2,AC}xu;\] and for study 2, \(Y\) is simulated from: \[\text{logit}(\Pr(Y=1|X=x,L=l,U=u))\ =\ 1\ +\ \alpha_{AB}x\ +\beta_{1}l\ +\beta_{2,AB}xl\ +\beta_{3,AB}xl^{2}+\gamma_{1}u+\gamma_{2,AB}xu,\] where \(\alpha_{AC}\), \(\beta_{1}\), \(\beta_{2,AC}\), \(\beta_{3,AC}\)\(\gamma_{1}\), \(\gamma_{2,AC}\), \(\alpha_{AB}\), \(\beta_{2,AB}\), \(\beta_{3,AB}\) and \(\gamma_{2,AB}\) are regression coefficients. 
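For concreteness, the sketch below shows one way IPD for study 1 and AD for study 2 could be generated from these binary-outcome models. The parameter values correspond to scenario S2 with covariate configuration C1 as defined in the following paragraphs; the per-study sample size of 1000 is an illustrative assumption (the original value is not restated in this excerpt), and the Python/statsmodels implementation is included for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

def simulate_trial(n, mu_L, sd_L, mu_U, sd_U, alpha, b1, b2, b3, g1, g2):
    """One two-arm RCT (active treatment vs. A) under the binary-outcome model above."""
    X = rng.binomial(1, 0.5, n)                   # 1:1 randomization
    L = rng.normal(mu_L, sd_L, n)                 # measured covariate
    U = rng.normal(mu_U, sd_U, n)                 # unmeasured covariate
    lin = 1 + alpha * X + b1 * L + b2 * X * L + b3 * X * L**2 + g1 * U + g2 * X * U
    Y = rng.binomial(1, 1 / (1 + np.exp(-lin)))
    return X, L, U, Y

# Scenario S2 (L prognostic only) with configuration C1 (unequal covariate means),
# alpha_AC = alpha_AB = 1; a sample size of 1000 per study is an illustrative choice.
X1, L1, U1, Y1 = simulate_trial(1000, -0.25, 1, 0.25, 1, 1, 1, 0, 0, 0, 0)   # study 1 IPD (A vs. C)
X2, L2, U2, Y2 = simulate_trial(1000, 0.25, 1, -0.25, 1, 1, 1, 0, 0, 0, 0)   # study 2 (A vs. B)

# Study 2 is then reduced to aggregate data: the unadjusted log-OR with its SE,
# plus the mean and SD of L, as would be read from a publication.
fit2 = sm.GLM(Y2, sm.add_constant(X2.astype(float)), family=sm.families.Binomial()).fit()
study2_AD = {"logOR_AB": fit2.params[1], "se_AB": fit2.bse[1],
             "mean_L2": L2.mean(), "sd_L2": L2.std(ddof=1)}
print(study2_AD)
```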
For TTE outcome data, individuals (in both studies) are simulated (as in Simulation Study 1) to enter the study uniformly over 2 years, and their event times are then simulated to occur at a random time \(Y\) years after this entry time. The \(Y\) times are then simulated from a Weibull distribution with density function: \[f(y|a,\sigma)=\Big{(}\frac{a}{\sigma}\Big{)}\,\Big{(}\frac{\gamma}{\sigma} \Big{)}^{a-1}\,\exp\Big{(}-\Big{(}\frac{\gamma}{\sigma}\Big{)}^{a}\Big{)},\] where \(a=3/2\), and, for study 1: \[\sigma=(0.1\,\exp\big{(}\alpha_{AC}x\ +\ \beta_{1}l+\ \beta_{2,AC}xl\ +\beta_{3,AC}xl^{2}+\ \gamma_{1}u\ +\gamma_{2,AC}xu\big{)})^{-\frac{2}{3}},\] and for study 2: \[\sigma=(0.1\,\exp\big{(}\alpha_{AB}x\ +\ \beta_{1}l+\ \beta_{2,AB}xl\ +\beta_{3,AB}xl^{2}+\ \gamma_{1}u\ +\gamma_{2,AB}xu\big{)})^{-\frac{2}{3}},\] where \(\alpha_{AC}\), \(\alpha_{AB}\), \(\beta_{1}\), \(\beta_{2,AC}\), \(\beta_{2,AB}\), \(\beta_{3,AC}\), \(\beta_{3,AB}\), \(\gamma_{1}\), \(\gamma_{2,AC}\), \(\gamma_{2,AB}\) are regression coefficients. The aggregate data from study 2 consists only of the estimated log-OR/log-HR, \(\overset{\wedge}{\theta}_{AB|Studyz}\), and its standard error, \(SE(\overset{\wedge}{\theta}_{AB|Studyz})\), as well as the mean and standard deviation of the covariate \(L\), \(\overset{\wedge}{\mu}_{L2}\) and \(\overset{\wedge}{\sigma}_{L2}\), respectively. Note that this information is typically available in a study's primary results and table of baseline characteristics. We consider six scenarios (S1-S6) for both binary and TTE outcomes. In all six scenarios, we set \(\alpha_{AC}=\alpha_{BC}=1\), such that \(\alpha_{BC}=0\); and consider: S1. \(L\) and \(U\) are entirely unrelated to \(Y\), such that: \[\beta_{1}=0,\,\beta_{2,AC}=\beta_{2,AB}\ =0,\,\beta_{3,AC}=\beta_{3,AB}\ =0,\,\gamma_{1}=0,\,\text{and}\,\,\gamma_{2,AC}=\gamma_{2,AB}=0.\] S2. \(L\) is a prognostic factor, but not an effect modifier; \(U\) is entirely unrelated to \(Y\), such that: \[\beta_{1}=1,\,\beta_{2,AC}=\beta_{2,AB}\ =0,\,\beta_{3,AC}=\beta_{3,AB}\ =0,\, \gamma_{1}=0,\,\text{and}\,\gamma_{2,AC}=\gamma_{2,AB}=0.\] S3. \(L\) is a prognostic factor, and an effect modifier; \(U\) is entirely unrelated to \(Y\), such that: \[\beta_{1}=1,\,\beta_{2,AC}=\beta_{2,AB}\ =1,\,\beta_{3,AC}=\beta_{3,AB}\ =0,\, \gamma_{1}=0,\,\text{and}\,\gamma_{2,AC}=\gamma_{2,AB}=0.\] S4. \(U\) is an unmeasured prognostic factor; \(L\) is entirely unrelated to \(Y\), such that: \[\beta_{1}=0,\,\beta_{2,AC}=\beta_{2,AB}\ =0,\,\beta_{3,AC}=\beta_{3,AB}\ =0,\, \gamma_{1}=1,\,\text{and}\,\gamma_{2,AC}=\gamma_{2,AB}=0.\] S5. \(U\) is an unmeasured effect modifier; \(L\) is entirely unrelated to \(Y\), such that: \[\beta_{1}=0,\,\beta_{2,AC}=\beta_{2,AB}\ =0,\,\beta_{3,AC}=\beta_{3,AB}\ =0,\, \gamma_{1}=1,\,\text{and}\,\gamma_{2,AC}=\gamma_{2,AB}=1.\] S6. \(L\) is a prognostic factor, and a nonlinear effect modifier; \(U\) is entirely unrelated to \(Y\), such that: \[\beta_{1}=1,\,\beta_{2,AC}=\beta_{2,AB}\ =0,\,\beta_{3,AC}=\beta_{3,AB}\ =1,\, \gamma_{1}=0,\,\text{and}\,\gamma_{2,AC}=\gamma_{2,AB}=0.\] We also consider three possibilities for how the covariates are distributed in the two studies: C1. The means of the covariates (\(L\) and \(U\)) differ across studies and the variances are equal, such that: \[\mu_{L1}=-0.25,\,\mu_{L2}=0.25,\,\mu_{U1}=0.25,\,\mu_{U2}=-0.25,\,\,\text{ and}\,\sigma_{L1}=\sigma_{L2}=\sigma_{U1}=\sigma_{U2}=1.\] C2. 
The means of the covariates (\(L\) and \(U\)) are equal in both studies, but the variances differ, such that: \[\mu_{L1}=\mu_{L2}=\mu_{U1}=\mu_{U2}=0,\,\text{and}\,\,\sigma_{L1}=0.75,\, \sigma_{L2}=1.25,\,\sigma_{U1}=0.75,\,\sigma_{U2}=1.25\] C3. The means of the covariates (\(L\) and \(U\)) are equal in both studies and the variances are also equal, such that: \[\mu_{L1}=\mu_{L2}=\mu_{U1}=\mu_{U2}=0,\,\text{and}\,\,\sigma_{L1}=\sigma_{L2}= \sigma_{U1}=\sigma_{U2}=1.\] ### Estimands of interest The target estimand in Simulation Study 2 is the causal marginal \(C\) vs. \(B\) treatment effect that would be observed in the study 2 population, \(\theta_{CB|Study2}\). For all scenarios under consideration this is a null effect, \(\theta_{CB|Study2}=0\). ### Methods to be compared We compare five different estimation approaches for both the binary outcome and the TTE outcome data: A1. univariate (unadjusted) regression; A2. MAIC (matching on 1\({}^{\text{st}}\) moment); A3. MAIC (matching on 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) moments); A4. STC with standardization _without_ a \(X\) by \(L\) interaction term; and A5. STC with standardization _with_ a \(X\) by \(L\) interaction term. The **univariate (unadjusted) regression** approach involves first fitting a (logistic or Cox) regression model to the study 1 IPD to obtain \(\overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{ \overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset}}}}}}}}}}}}{\theta_{ AC|Study1,univ}}\), the estimated marginal treatment effect (i.e., the log-OR or log-HR) of treatment \(A\) vs. \(C\) (in the study 1 population). This approach assumes that this is also a reasonable estimate for the marginal treatment effect of \(A\) vs. \(C\) in the study 2 population. With \(\overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{ \overset{\overset{\overset{\overset{\overset{\overset{\overset{ \cdot}}}}}}}}}}}}}{\theta_{ AB|Study2}}\) available from the study 2 AD, to obtain the estimated marginal treatment effect of \(C\) vs. \(B\) (in the study 2 population), we simply calculate: \(\overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{ \overset{\overset{\overset{\overset{\overset{\overset{\overset{\cdot}}}}}}}}}}}}}{\theta_{ CB|Study2,univ}}=\overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{ \overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{\cdot}}}}}}}}}}}}}}}{ \theta_{AC|Study1,univ}}-\overset{\overset{\overset{\overset{\overset{\{ \overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{\overset{ \overset{\overset{\overset{\overset{\overset{\ the \(L\) values observed in study 2. For Simulation Study 2, we simulate \(W\)=10,000 pseudo-values from a normal distribution with mean and standard deviation equal to \(\overset{\text{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{{\tiny{}}}}}}}}}}}}}}}{}_{L2}\) and \(\overset{\text{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{{\tiny{ \tiny{{\tiny{{\tiny{{\tiny{{{}}}}}}}}}}}}}}}}{}_{L2}}\), respectively. 
(Recall that \(\overset{\wedge}{\mu}_{L2}\) and \(\overset{\wedge}{\sigma}_{L2}\) are available from the study 2 AD). Then, in the third step, one averages over these pseudo-values to obtain estimates of the marginal probabilities of \(Y\) given treatment \(A\) and given treatment \(C\) under the study 2 population: \[\overset{\wedge}{\Pr}(Y=1|X=x)=\frac{1}{W}\sum_{w=1}^{W}\overset{\wedge}{\Pr}(Y=1|X=x,L=l_{w}^{*}),\] for \(x\)=0 and \(x\)=1, where \(l_{1}^{*},...,l_{W}^{*}\) denote the simulated pseudo-values.
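A minimal end-to-end sketch of this STC-with-standardization procedure for a binary outcome is given below: it fits the outcome model to (simulated) study 1 IPD, draws the pseudo-values of \(L\) from the study 2 summary statistics, averages the predicted probabilities, and then forms the anchored comparison through the common comparator \(A\). The study 2 aggregate values, the sample size, and the implementation details are placeholders and assumptions for illustration; they are not taken from the original analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Study 1 IPD (A vs. C), simulated here only so the sketch is self-contained
n = 1000
X = rng.binomial(1, 0.5, n)
L = rng.normal(-0.25, 1.0, n)
Y = rng.binomial(1, 1 / (1 + np.exp(-(1 + 1.0 * X + 1.0 * L))))   # scenario S2-like

# Study 2 aggregate data (placeholder values standing in for published results)
logOR_AB, mean_L2, sd_L2 = 0.86, 0.25, 1.0

# Step 1: outcome model fitted to study 1 IPD, with an X-by-L interaction term
design = np.column_stack([np.ones(n), X, L, X * L])
fit = sm.GLM(Y, design, family=sm.families.Binomial()).fit()

# Step 2: W pseudo-values of L drawn from the study 2 summary statistics
W = 10_000
L_star = rng.normal(mean_L2, sd_L2, W)

# Step 3: average predicted probabilities over the pseudo-values, then convert
# the marginal probabilities into a marginal log-OR (C vs. A, study 2 population)
def marginal_prob(x):
    d = np.column_stack([np.ones(W), np.full(W, x), L_star, x * L_star])
    return fit.predict(d).mean()

p1, p0 = marginal_prob(1.0), marginal_prob(0.0)
logOR_AC = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

# Anchored indirect comparison through the common comparator A
logOR_CB = logOR_AC - logOR_AB
print(f"STC marginal log-OR, C vs. A in the study 2 population: {logOR_AC:+.3f}")
print(f"Anchored C vs. B estimate: {logOR_CB:+.3f}")   # near 0 given the placeholder logOR_AB
```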
}{{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}{}{}{}{}{} ### Results Results for the binary outcome are listed in Table 3A and Table 3B and results for the TTE outcome are listed in Table 4A and 4B. In summary, for Scenarios S1-S6, we see that: **S1:** When both \(L\) and \(U\) are entirely unrelated to \(Y\), all methods appear approximately unbiased. However, the STC with interaction and both MAIC methods may be less efficient relative to the univariate and STC without interaction approaches depending on how the distribution of the covariates differs across studies. Notably, when the variance of \(L\) is unequal across studies (C2), the MAIC approach matching on both first and second moments is substantially less efficient than all other approaches (and possibly slightly biased). \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Scenario** & **Covariate** & **Performance** & **univariate** & **MAIC** & **MAIC (1\({}^{\text{st}}\))** & **STC with std.** & **STC with std.** _with without_ & **STC with interactio_** & **std.** _with_ \\ & **\(\alpha_{\text{AC}}=\alpha_{\text{BC}}=1\)** & **11** & **11** & **11** & **11** & **11** & **11** \\ \hline S1- \(L\) is entirely unrelated to \(Y\) & \(\beta_{1}=0\) & C1-Unequal & Mean (MC error) & -0.000 (0.00245) & 0.001 (0.00262) & -0.000 (0.00245) & 0.000 \\ \hline S1- \(L\) is entirely unrelated to \(Y\); \(U\) is in unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C2-Equal Means/Unequa (MC error) & Mean (0.00245) & 0.001 (0.00246) & 0.013 (0.00369) & -0.001 (0.00245) & 0.000 \\ \hline S1- \(L\) is entirely unrelated to \(Y\) & \(\beta_{1}=0\) & C1-Unequal Means/Equal Variances & Mean (MC error) & -0.004 (0.00243) & -0.004 (0.00243) & -0.004 (0.00243) & 0.000 \\ \hline S1- \(L\) is entirely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C1-Unequal Means/Equal Variances & Mean (MC error) & -0.002 (0.00223) & 0.001 (0.00236) & -0.001 (0.00228) & 0.259 (0.00245) \\ \hline S2- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=0\) & C2-Equal Means/Unequa (MC error) & Mean (0.00224) & -0.001 (0.00223) & -0.002 (0.00236) & -0.002 (0.00218) & 0.002 (0.00227) \\ \hline S3- \(L\) is entirely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=0\) & C2-Equal Means/Unequa (MC error) & Mean (0.00224) & 0.113 (0.00224) & 0.006 (0.00207) & -0.006 (0.00207) & -0.006 (0.00215) \\ \hline S3- \(L\) is unretirelely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=0\) & C3-Equal Means/Equal (MC error) & Mean (0.00158) & 0.00158) & 0.00265) & 0.001 (0.001 (0.00153) \\ \hline S3- \(L\) is prognostic, and an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & \(\beta_{1}=1\) & C1-Unequal Means/Equal (MC error) & Mean (0.00209) & -0.000 (0.00217) & -0.002 (0.00216) & -0.215 (0.00198) & -0.001 (0.00203) \\ \hline S4- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C2-Equal Means/Unequa (MC error) & Mean (0.00207) & 0.260 (0.00207) & 0.152 (0.00340) & 0.152 (0.00180) & 0.203 (0.00181) \\ \hline S4- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & 
\(\beta_{2,\text{BC}}=0\) & C3-Equal Means/Unequa (MC error) & Mean (0.00146) & 0.004 (0.00203) & 0.003 (0.00203) & 0.003 (0.00191) & 0.002 (0.00190) \\ \hline S4- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C2-Equal Means/Unequa (MC error) & Mean (0.00207) & 0.260 (0.00207) & -0.032 (0.00340) & 0.152 (0.00180) & -0.001 (0.00181) \\ \hline S5- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C2-Equal Means/Unequa (MC error) & Mean (0.00207) & 0.207 (0.00146) & 0.003 (0.00140) & 0.180 (0.00181) \\ \hline S6- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C3-Equal Means/Unequa (MC error) & Mean (0.00145) & 0.004 (0.00203) & 0.003 (0.00203) & 0.003 (0.00191) & 0.002 (0.00190) \\ \hline S7- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C3-Equal Means/Unequa (MC error) & Mean (0.00148) & 0.203 (0.00154) & 0.203 (0.00154) & 0.198 (0.00134) \\ \hline S8- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C2-Equal Means/Unequa (MC error) & Mean (0.00207) & 0.260 (0.00207) & 0.152 (0.00340) & 0.152 (0.00180) & 0.181 (0.00181) \\ \hline S8- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C2-Equal Means/Unequa (MC error) & Mean (0.00207) & 0.207 (0.00146) & 0.003 (0.00240) & 0.180 (0.00127) & 0.181 (0.00128) \\ \hline S8- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C3-Equal Means/Equal (MC error) & Mean (0.00204) & 0.004 (0.00203) & 0.003 (0.00203) & 0.003 (0.00191) & 0.002 (0.00190) \\ \hline S8- \(L\) is not an effect modifier; \(U\) is unretirelely unrelated to \(Y\) & \(\beta_{2,\text{BC}}=1\) & C3-Equal Means/Unequa (MC error) & Mean (0.00145) & 0.203 (0.00144) & 0.203 (0.00135) & 0.191 (0.00134) \\ \hline \hline \end{tabular} \end{table} Table 3A - Results from Simulation Study 2 for binary outcome data; for the marginal log-OR. Numbers in black correspond to a method and scenario for which the absolute Mean obtained is less than 0.01 (“likely unbiased”); numbers in orange correspond to absolute Mean more than 0.01 and less than 0.1 (“possibly/slightly biased”); numbers in red correspond to absolute Mean more than 0.1 (“likely/highly biased”). **S2:** When \(L\) is a prognostic but not an effect modifier, we see that the univariate and MAIC with only 1\({}^{\text{st}}\) moment approaches may be biased depending on how the distribution of the covariates differs across studies. Amongst the methods for which results appear consistently unbiased, the STC without interaction method appears to be the most efficient, followed closely by the STC with interaction method. The MAIC with 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) moments method is notably less efficient. **S3:** When \(L\) is a prognostic and an effect modifier, we see that for binary outcomes, unless the effect modifier is perfectly balanced across studies (as in C3), all approaches are biased except for the STC with interaction method and (perhaps) the MAIC with 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) moments method, where the STC is more efficient than the MAIC method. This pattern was consistent for time-to-event outcomes, except that when the effect modifiers were balanced across studies (C3), the std. 
Table 3B - Results from Simulation Study 2 for binary outcome data; for the marginal log-OR. Numbers in black correspond to a method and scenario for which the absolute Mean obtained is less than 0.01 (“likely unbiased”); numbers in orange correspond to absolute Mean more than 0.01 and less than 0.1 (“possibly/slightly biased”); numbers in red correspond to absolute Mean more than 0.1 (“likely/highly biased”).

**S4:** In the presence of an unmeasured prognostic variable, all methods are slightly biased when covariates have unequal means across studies (C1) and substantially biased when covariates have unequal variances across studies (C2). This is a direct consequence of non-collapsibility. The difference between the conditional and marginal non-collapsible estimands will increase as the effect of the prognostic variable on the outcome variable increases, and as the variance of the prognostic variable increases; see Neuhaus and Jewell (1993)[35], who provide a geometric argument for the bias increasing as the variance of the prognostic variable increases.

**S5:** In the presence of an unmeasured effect modifier, for binary outcomes all methods are biased unless the unmeasured effect modifier is perfectly balanced across studies (C3), whereas for time-to-event outcomes all methods are biased even when covariates are perfectly balanced, except for std. with interaction.
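The non-collapsibility effect noted under S4 can be illustrated directly. The following Python snippet is a purely illustrative sketch (it is not part of the simulation code behind Tables 3A-4B, and the intercept and coefficient values are arbitrary): it marginalizes a logistic outcome model with a fixed conditional log-OR over a purely prognostic covariate \(L\) and shows that the implied marginal log-OR is attenuated, with the gap widening as the variance of \(L\) grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def marginal_log_or(cond_log_or, beta_prog, sd_prog, n=1_000_000):
    """Marginal log-OR implied by logit P(Y=1 | X, L) = b0 + cond_log_or*X + beta_prog*L,
    where L ~ N(0, sd_prog^2) is purely prognostic (no interaction, no confounding)."""
    L = rng.normal(0.0, sd_prog, n)
    b0 = -0.5                                                       # arbitrary intercept
    p1 = 1.0 / (1.0 + np.exp(-(b0 + cond_log_or + beta_prog * L)))  # risk under treatment
    p0 = 1.0 / (1.0 + np.exp(-(b0 + beta_prog * L)))                # risk under control
    r1, r0 = p1.mean(), p0.mean()                                   # standardized (marginal) risks
    return np.log(r1 / (1.0 - r1)) - np.log(r0 / (1.0 - r0))

for sd in (0.5, 1.0, 2.0):
    print(f"sd(L) = {sd}: marginal log-OR = {marginal_log_or(1.0, beta_prog=1.0, sd_prog=sd):.3f}")
# The marginal log-OR drifts further below the conditional value of 1.0 as sd(L) increases,
# even though L is neither a confounder nor an effect modifier.
```

The same mechanism operates for the marginal HR, which is likewise non-collapsible.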
Table 4A - Results from Simulation Study 2 for TTE outcome data; for the marginal log-HR.
modifier; an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier; an effect modifier an effect modifier **S6:** In the presence of the nonlinear effect modifier, all methods are biased when the variance of the effect modifier is not equal across studies (C2). That being said, the MAIC with 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) moments method is the least biased of the five approaches. Furthermore, when the effect modifier has equal variances but unequal means across studies (C1), both STC approaches are biased whereas both MAIC approaches are unbiased. Finally, all approaches are unbiased when the measured nonlinear effect modifier is perfectly balanced across studies (C3). \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Scenario** & **Covariate distributions** & **Performance measure** & **univariate regression** & **MAIC (1\({}^{\text{st}}\))** & **MAIC (1\({}^{\text{st}}\) and std. & **STC with std. _with without_** & **STC with std. 
_with_** \\ **s**enarios: & **in Study 1 vs.** & **in Study 2** & & & **2\({}^{\text{nd}}\))** & **1\({}^{\text{st}}\) and interaction** & **interactio** \\ \hline S4- \(U\) is an unmeasure & \(\beta_{1}=0\) & C1-Unequal & Mean (MC error) & 0.299 (0.00340) & 0.282 (0.00323) & -0.007 (0.0094) & -0.007 (0.00097) \\ d & 0 & Variances (MC error) & Empirical SE (0.340) & 0.323 (0.00228) & 0.323 (0.00229) & 0.094 (0.00066) & 0.00068 \\ \(L\) is entirely unrelated & \(\beta_{2,BC}=0\) & C2-Equal Means/Unequa (MC error) & Mean (0.00225) & 0.357 (0.00225) & 0.490 (0.00373) & 0.083 (0.00122) & 0.082 (0.00122) \\ \(V_{1}=1\) & C3-Equal Means/Equal Means/Equal Measures/Equal Measures/Unequa (MC error) & Mean (0.00159) & 0.225 (0.00159) & 0.373 (0.00263) & 0.122 (0.00086) & 0.122 (0.00087) \\ \(V_{2,BC}=0\) & C3-Equal Means/Equal variances & Mean (MC error) & 0.294 (0.00319) & 0.294 (0.00319) & 0.00029 (0.00092) & 0.00093 (0.00093) \\ \(V_{2,BC}=0\) & C3-Equal Means/Equal variances & Empirical SE (0.319) & 0.319 (0.00226) & 0.00226) & 0.00065 (0.00066) & 0.00066 \\ \hline S5- \(U\) is an unmeasure & \(\beta_{1}=0\) & C1-Unequal Means/Equal Measures/Equal Measures/Equal Measures/Equal Measures/Equal Measures/Unequa (MC error) & Mean (0.00216) & 0.410 (0.00264) & 0.411 (0.00265) & 0.090 (0.00107) & 0.090 (0.00128) \\ \(V_{2,BC}=0\) & C3-Equal Means/Equal Measures/Unequa (MC error) & Mean (0.00153) & 0.00187) & 0.00187) & 0.00076) & 0.128 (0.00090) \\ \(V_{2,BC}=0\) & C2-Equal Means/Unequa (MC error) & Mean (0.00262) & 0.402 (0.00262) & 0.528 (0.00404) & 0.183 (0.00091) & 0.086 (0.00091) & 0.08122 (0.00122) \\ \(V_{1}=1\) & C3-Equal Means/Equal Measures/Equal variances & Empirical SE (MC error) & 0.00185) & 0.00185) & 0.00286) & 0.00064) & 0.00086) \\ \(V_{2,BC}=1\) & C3-Equal Means/Equal Measures/Equal variances & Mean (MC error) & 0.334 (0.00360) & 0.334 (0.00360) & 0.00145) & 0.00090) \\ \(V_{2,BC}=1\) & C3-Equal Measures/Equal variances & Empirical SE (MC error) & 0.360 (0.00255) & 0.00254) & 0.00255) & 0.00103) & 0.00064) \\ \hline S6- \(L\) is prognostic, and a nonlinear effect modifier; & \(\beta_{2,BC}=0\) & C1-Unequal Means/Equal Measures/Equal Measures/Unequa (MC error) & Mean (0.00070) & 0.259 (0.00099) & 0.001 (0.00097) & -0.0045 (0.00097) & -0.0031 (0.00090) \\ \(V_{2,BC}=0\) & C3-Equal Means/Equal Measures/Equal Measures/Unequa (MC error) & Mean (0.00070) & 0.00069) & 0.00069) & 0.00065) & 0.00065) & 0.00063) \\ \hline \end{tabular} \end{table} Table 4B Results from Simulation Study 2 for TTE outcome data; for the marginal log-HR. Numbers in black correspond to a method and scenario for which the absolute Mean obtained is less than 0.01 (“likely unbiased”); numbers in orange correspond to absolute Mean more than 0.01 and less than 0.1 (“possibly/slightly biased”); numbers in red correspond to absolute Mean more than 0.1 (“likely/highly biased”). ## 4 Discussion ### Marginalization in the context of a single comparative study The objective of Simulation Study 1 was to determine how IPTW and standardization perform relative to one another in the absence/presence of confounders and effect modifiers. On this point, we have three main observations. First, we observed that the covariate-adjusted marginal estimates (obtained using either IPTW or standardization) were always more efficient than the unadjusted marginal estimate whenever the covariate in question was predictive of the outcome. While this observation has been established previously, it certainly bears repeating. 
It has long been understood that "accounting for chance imbalances" will result in a more efficient use of data (e.g., Gail et al. (1984), Senn (1989), Wang et al. (2019)). However, a complete understanding of this is still lacking; see Wilkinson et al. (2022). For instance, Hauck et al. (1998) conclude with the recommendation that, in a logistic or Cox regression model, one should "adjust for important prognostic covariates in order to come as close as possible to the clinically most relevant subject-specific measure of treatment effect [i.e., the conditional effect]" and also note that: "Additional benefits would be an increase in efficiency of tests for no treatment effect and improved external validity." A more complete reasoning would be that adjusting for prognostic covariates will allow one to come as close as possible to _either_ the marginal _or_ the conditional effect, and that an additional benefit is an increase in efficiency of _any_ test, be it for no treatment effect or otherwise (e.g., a non-inferiority or equivalence test). Furthermore, adjusting for prognostic covariates likely has relatively little cost, since important prognostic covariates are typically recorded in a study anyway and the loss in efficiency resulting from mistakenly assuming that an entirely unrelated variable is prognostic may be minimal. As we observed in Simulation Study 1, adjusting for an entirely unrelated covariate will not necessarily lead to a substantial loss of statistical power.

Second, the results of Simulation Study 1 indicate that, for estimating the marginal HR from TTE outcome data, standardization (when the outcome model is correctly specified) may be more efficient than IPTW, even in the absence of any effect modifiers or confounders. This result shows that the findings proved by Williamson, Forbes, and White (2014) for linear regression may not hold for Cox PH regression. Notably, this is contrary to what was observed by Daniel et al. (2021), who conclude that, in such scenarios, "the efficiency [of standardization] looks identical to that achieved by IPTW." We attribute the difference in the findings to the different values used for \(m\), the number of simulations used for numerical integration in the novel standardization method. Daniel et al. (2021) implemented the standardization method in their simulation study with three different values (_m_=500, _m_=2500, and _m_=5000) and observed that efficiency increased with increasing values of \(m\). We implemented the standardization method in our simulation studies with a minimum value of _m_=50000 and, perhaps predictably, obtained greater efficiency. Note that the computational time required for running Daniel et al. (2021)'s standardization method increases linearly with \(m\). Since researchers will likely want to run the standardization method with their data several thousand times in order to calculate reliable bootstrap confidence intervals, the computational cost involved may become rather onerous. Daniel et al. (2021) report requiring more than 70 minutes to obtain a bootstrap confidence interval of the adjusted marginal log-HR for a chronic liver disease dataset with 1,000 bootstrap samples and _m_=50,000; our experience with the method was consistent with this. Further research on how best to select an appropriate value of \(m\) and how to reduce the computational running time of bootstrapping the algorithm is needed (Kosko et al., 2023).
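To make the role of the \(m\) simulation draws more tangible, the following Python fragment sketches the general logic of this kind of simulation-based standardization for a marginal log-HR. It is our own schematic, not the implementation of Daniel et al. (2021): it assumes an exponential conditional hazard, ignores censoring in the pseudo-data, treats the conditional coefficients as already estimated, and uses the (assumed available) lifelines package for the marginal Cox fit; the coefficient values in the example call are made up.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumed available for the marginal Cox fit

rng = np.random.default_rng(1)

def standardized_marginal_loghr(ipd_covariate, coefs, m=50_000):
    """Schematic standardization for a marginal log-HR.

    `coefs = (lam0, bx, bl, bxl)` come from an already-fitted *conditional*
    exponential proportional-hazards outcome model
        h(t | X, L) = lam0 * exp(bx*X + bl*L + bxl*X*L).
    We resample m covariate values (numerical integration over L), simulate one
    event time per draw under X=1 and under X=0, and fit a marginal Cox model
    to the pooled pseudo-data.
    """
    lam0, bx, bl, bxl = coefs
    L = rng.choice(ipd_covariate, size=m, replace=True)
    t_treated = rng.exponential(1.0 / (lam0 * np.exp(bx + (bl + bxl) * L)))
    t_control = rng.exponential(1.0 / (lam0 * np.exp(bl * L)))
    pseudo = pd.DataFrame({
        "time": np.concatenate([t_treated, t_control]),
        "event": 1,                      # no censoring in the pseudo-data
        "treat": np.repeat([1, 0], m),
    })
    return CoxPHFitter().fit(pseudo, "time", "event").params_["treat"]

# Example call with made-up conditional coefficients and covariate data:
L_obs = rng.normal(0.0, 1.0, 1000)
print(standardized_marginal_loghr(L_obs, coefs=(0.1, -0.5, 0.8, -0.3)))
```

Because the marginal model is fitted to simulated pseudo-data, the Monte Carlo noise of the estimate, and hence the apparent efficiency, depends directly on \(m\), which is why the choice of \(m\) matters in the comparison above.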
Third, we found that, when a measured covariate is an effect modifier, the IPTW approach appears unbiased and, for binary outcome data, can be just as efficient as standardization. This appears to contradict Morris et al. (2022)'s statement that IPTW "does not handle" treatment-covariate interactions. For time-to-event outcomes, we observed that the standardization with interaction approach is more efficient than the IPTW approach in the presence of effect modifiers, although this comes at the price of having to correctly specify the outcome model. Unless the outcome model is correctly specified, the standardization approach may be biased (for both binary and time-to-event outcomes). The clear advantage of the IPTW approach is that one need not specify the exact functional form of any potential treatment-covariate interactions.

Finally, on a minor note, while some of Simulation Study 1 is simply a replication of what is done in Daniel et al. (2021), we stress that there is merit in replicating simulation studies; see Lohmann et al. (2022). Indeed, in our replication, we discovered a coding error impacting the simulation of censoring times in Daniel et al. (2021)'s original work (personal communication, 2022).

### Marginalization in the context of an indirect treatment comparison

The results of Simulation Study 2 suggest that using the standardization method of Daniel et al. (2021) as an extension to conventional STC for the estimation of marginal HRs in an anchored ITC of RCTs is unbiased (when the outcome model is correctly specified) and potentially more efficient than using MAIC techniques. The results of Simulation Study 2 also suggest that, unless one can be certain that the distributions of all measured prognostic variables and measured effect modifiers do not differ across studies, it is necessary to match on both the first and second moments for MAIC. This goes contrary to certain current recommendations (e.g., Hatswell et al., 2020; Weber et al., 2020).

Most importantly perhaps, the results of Simulation Study 2 suggest that to obtain unbiased estimates of non-collapsible parameters using STC and MAIC techniques, one must adjust for _all_ unbalanced prognostic variables (in addition to _all_ unbalanced effect modifiers). This is contrary to the recommendations for using MAIC of Vo (2023) (purely prognostic variables can be "safely excluded") and of Phillippo et al. (2016) ("To avoid loss of precision due to over-matching, no prognostic variables which are not also effect modifiers should be adjusted for, as variables which are purely prognostic do not affect the estimated relative treatment effect."). Whether or not adjusting for _all_ unbalanced prognostic variables and _all_ effect modifiers (and correctly specifying the functional form of the various covariate-treatment interactions) is realistic will no doubt depend on the specific context. Until recently, researchers may have been under the impression that such a strong assumption was only necessary for _unanchored_ ITCs (Phillippo et al. (2016): "in all cases where unanchored indirect comparisons are performed, a strong assumption is made that all prognostic variables and all effect modifiers are accounted for and correctly specified - an assumption largely considered to be implausibly strong."). Our results suggest that this "strong assumption" may also be needed for anchored ITCs.
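For readers less familiar with what "matching on first and second moments" means operationally, the following Python sketch shows the usual method-of-moments route to MAIC weights. It is an illustration under our own variable names, not a validated implementation: the IPD covariates (and, optionally, their squares) are centred at the aggregate-data targets, and the weights \(w_i=\exp(z_i^{\top}a)\) are obtained by minimizing the convex objective \(\sum_i \exp(z_i^{\top}a)\).

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(X_ipd, target_means, target_sq_means=None):
    """Method-of-moments MAIC weights.

    X_ipd: (n, p) IPD covariate matrix; target_means: length-p aggregate means.
    If target_sq_means is given, second moments (squares) are matched as well.
    """
    Z = X_ipd - target_means                              # centre at target first moments
    if target_sq_means is not None:
        Z = np.hstack([Z, X_ipd**2 - target_sq_means])    # add centred second moments
    obj = lambda a: np.sum(np.exp(Z @ a))                 # convex in a
    grad = lambda a: Z.T @ np.exp(Z @ a)
    a_hat = minimize(obj, np.zeros(Z.shape[1]), jac=grad, method="BFGS").x
    w = np.exp(Z @ a_hat)
    return w / w.mean()

# Toy usage: reweight IPD so that L has mean 0.3 and SD 0.8 in the weighted sample.
rng = np.random.default_rng(2)
L = rng.normal(0.0, 1.0, (500, 1))
w = maic_weights(L, target_means=np.array([0.3]),
                 target_sq_means=np.array([0.3**2 + 0.8**2]))
wm = np.average(L[:, 0], weights=w)
print(wm, np.sqrt(np.average((L[:, 0] - wm) ** 2, weights=w)))   # approx. 0.3 and 0.8
```

Matching the second moments typically makes the weights more variable, which is consistent with the reduced efficiency of the MAIC (1st and 2nd moments) approach observed in Simulation Study 2.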
Curiously, in two recent simulation studies (Remiro-Azocar, 2022; Remiro-Azocar et al., 2021), MAIC appears to be unbiased "despite not accounting for imbalances in purely prognostic variables." This is likely due to the fact that in both of these simulation studies, the purely prognostic variables are imbalanced in terms of their mean but are balanced in terms of their variance (i.e., have unequal means, but equal variances across studies). Indeed, we see a similar result in the C1 setup ("Unequal Means/Equal Variances") in the S4 scenario of Simulation Study 2, where the bias obtained with the MAIC approaches (and all methods, for that matter) is relatively minor. In contrast, in the C2 setup ("Equal Means/Unequal Variances") in scenario S4 of Simulation Study 2, the bias obtained was more substantial. This suggests that differences in the means of a prognostic variable across studies are less consequential than differences in its variance; Neuhaus and Jewell (1993) provide further insights on this.

Finally, the results of Simulation Study 2 suggest that neither the MAIC nor the STC approaches are entirely robust to misspecification of the nature of treatment-covariate interactions. When a nonlinear effect modifier is present (as in Scenario S6) and not correctly specified in the weight/outcome model, the MAIC/STC approaches obtained biased estimates. Notably, we did not consider STC with an outcome model that included a nonlinear \(XL^{2}\) interaction term, nor did we consider MAIC with matching on 1st, 2nd _and 3rd_ moments. We suspect that these two approaches would be unbiased for the data simulated in Scenario S6. Future research should determine if this is indeed the case, and investigate the cost, in terms of efficiency, of implementing such approaches.

### Limitations and recommendations

Our simulation studies have a few notable limitations. First, certain design choices in the simulation study are perhaps unrealistic. Specifically, the follow-up times and study sample sizes, which were chosen based on those selected in the simulation study of Daniel et al. (2021), may be overly optimistic; see Tai et al. (2021). With regard to the sample size, Daniel et al. (2021) note that, while perhaps larger than what is commonly observed in practice, a sample size of \(N\)=1000 per two-arm study ensures that finite-sample (or sparse-data) bias (Greenland, Mansournia, & Altman, 2016) is negligible. Our simulation setup was also rather unrealistic in that we considered only a single covariate. This was a deliberate choice to make the results as straightforward as possible. We note that Chatton et al. (2020), who compared different methods for estimating the marginal OR in a simulation study with nine covariates (but without any effect modifiers), arrived at a similar conclusion: standardization (with a correctly specified outcome model) is the method with lowest bias and lowest variance. More recently, Chatton et al. (2022) considered data with time-to-event outcomes. Based on a simulation study with six covariates (but without any effect modifiers), they concluded that standardization (with a correctly specified outcome model) is the most efficient approach for estimating the restricted mean survival time (RMST) and the so-called "average hazard ratio" (AHR) (two alternatives to the marginal HR).

A second limitation of our simulation studies is that, while we considered the bias and efficiency of point estimates, we did not measure the coverage and width of confidence intervals.
The non-parametric bootstrap can be used to calculate confidence intervals for both the IPTW-based approaches (Austin, 2016; Remiro-Azocar et al., 2021) and standardization-based approaches (Daniel et al., 2021; Remiro-Azocar et al., 2022). While there are other, non-bootstrap approaches for calculating confidence intervals when using IPTW-based approaches (e.g., using a model-based variance estimator, or using a robust sandwich-type variance estimator), these alternative approaches may lead to incorrect coverage rates (Austin, 2016; Remiro-Azocar et al., 2021). We also note that, when bootstrapping for STC, researchers must be careful to adequately take into account the uncertainty involved in the pseudo-data generation. A simulation study to evaluate the performance of bootstrap-based confidence intervals in the different settings we considered is an objective for future research (and will notably require substantial computational resources).

Finally, we only considered the scenario of an anchored ITC and did not investigate what might occur with an unanchored ITC when there is no common comparator arm, or when an external control arm is used to compare against a single-arm trial (Ren et al., 2023). That being said, an unanchored ITC is not unlike a two-arm observational study (which we did consider in Simulation Study 1) where the probability of receiving a specific treatment (or of being in a specific study) may be confounded. Further research should investigate to what extent our findings can be generalized more broadly, for instance to large NMAs, where there are multiple studies of which some provide IPD; see Phillippo et al. (2020).

IPTW and MAIC approaches are becoming increasingly popular. Webster-Clark et al. (2021) report an approximately 28-fold increase in the number of papers applying IPTW-based methods between 2004 and 2018. By contrast, using standardization-based approaches is relatively rare for single comparator studies (Snowden et al., 2011), and the concept of implementing STC along with standardization in order to obtain a marginal effect estimate has only recently been proposed (Remiro-Azocar et al., 2022). Indeed, Daniel et al. (2021) describe standardization as a "largely ignored procedure." This strikes us as somewhat unfortunate, given the results of our simulation studies. One obvious advantage of IPTW over standardization in a single comparator study is that one need not correctly specify the outcome model to obtain unbiased estimates. While this is by no means a new finding with regard to binary or continuous outcomes (simulations by Drake (1993) "indicate that the value of the propensity score [i.e., IPTW] lies primarily in guarding against model misspecification"), we showed that this advantage of IPTW over standardization also holds for time-to-event outcomes. With ITCs, the advantage of MAIC over STC with respect to potential model misspecification is perhaps less appreciable.

Our overall conclusion is that estimating marginal treatment effects when the parameter of interest is non-collapsible can be rather challenging, even in the absence of any unmeasured confounders. This is particularly true in observational studies and in ITCs (which are "essentially observational findings across trials"; Higgins and Green, 2011). While certain statistical methods can be used to obtain unbiased estimates in certain scenarios, researchers must properly understand their many limitations.

**Conflict of Interest** _The authors have declared no conflict of interest_.
2306.02793
Verification of ultrafast spin transfer effects in FeNi alloys
The optical intersite spin transfer (OISTR) effect was recently verified in Fe$_{50}$Ni$_{50}$ using magneto-optical Kerr measurements in the extreme ultraviolet range. However, one of the main experimental signatures analyzed in this work, namely a magnetic moment increase at a specific energy in Ni, was subsequently found also in pure Ni, where no transfer from one element to another is possible. Hence, it is a much-discussed issue whether OISTR in FeNi alloys is real and whether it can be verified experimentally or not. Here, we present a comparative study of spin transfer in Fe$_{50}$Ni$_{50}$, Fe$_{19}$Ni$_{81}$ and pure Ni. We conclusively show that an increase in the magneto-optical signal is indeed insufficient to verify OISTR. However, we also show how an extended data analysis overcomes this problem and allows to unambiguously identify spin transfer effects. Concomitantly, our work solves the long-standing riddle about the origin of delayed demagnetization behavior of Ni in FeNi alloys.
Christina Möller, Henrike Probst, G. S. Matthijs Jansen, Maren Schumacher, Mariana Brede, John Kay Dewhurst, Marcel Reutzel, Daniel Steil, Sangeeta Sharma, Stefan Mathias
2023-06-05T11:40:20Z
http://arxiv.org/abs/2306.02793v2
# Verification of ultrafast spin transfer effects in FeNi alloys

###### Abstract

The optical intersite spin transfer (OISTR) effect was recently verified in Fe\({}_{50}\)Ni\({}_{50}\) using magneto-optical Kerr measurements in the extreme ultraviolet range. However, one of the main experimental signatures analyzed in this work, namely a magnetic moment increase at a specific energy in Ni, was subsequently found also in pure Ni, where no transfer from one element to another is possible. Hence, it is a much-discussed issue whether OISTR in FeNi alloys is real and whether it can be verified experimentally or not. Here, we present a comparative study of spin transfer in Fe\({}_{50}\)Ni\({}_{50}\), Fe\({}_{19}\)Ni\({}_{81}\) and pure Ni. We conclusively show that an increase in the magneto-optical signal is indeed insufficient to verify OISTR. However, we also show how an extended data analysis overcomes this problem and allows us to unambiguously identify spin transfer effects. Concomitantly, our work solves the long-standing riddle about the origin of the delayed demagnetization behavior of Ni in FeNi alloys.

The ability to drive spin dynamics by ultrashort laser pulses offers a unique opportunity to bring the field of spintronics into the femtosecond regime, as demonstrated for example by the successful application of superdiffusive spin currents [1; 2; 3; 4] in spintronic emitters [5] and the possibility of direct magnetic phase switching by femtosecond laser pulses [6; 7; 8]. Recently, a fascinating discovery was made in this context: given a suitable material, it is possible to drive spin transfer between sublattices directly by a strong optical field [9; 10; 11; 12; 13; 14]. This process, called optical intersite spin transfer (OISTR), even precedes the ultrafast demagnetization process and thus provides a path to even faster spintronic applications [15]. OISTR was first proposed theoretically by the Sharma group [16] and has meanwhile been experimentally verified in a number of experiments.

In the OISTR process, an ultrafast optical excitation drives spin-preserving electronic transitions from below the Fermi level to above the Fermi level. Due to the exchange-split bands in ferromagnets, different numbers of spin-up and spin-down electrons are excited by the laser pulse, which leads to an ultrafast spectral redistribution of the electron density as well as of the spins, and thus to ultrafast dynamics in the spin polarization. If the initial states for such an excitation are predominantly found in one elemental subsystem, e.g. of an alloy, and the final states are predominantly found in another elemental subsystem, an intersite spin transfer occurs, which offers the potential for an advanced and extremely fast control of magnetic behaviour.

Figure 1: Simplified illustration of the 3d states in Ni, Fe\({}_{19}\)Ni\({}_{81}\) and Fe\({}_{50}\)Ni\({}_{50}\). The addition of Fe in the alloys leads to additional unoccupied states in the minority channel that can be populated through OISTR. TDDFT calculations predict a stronger OISTR effect in Fe\({}_{50}\)Ni\({}_{50}\) compared to Fe\({}_{19}\)Ni\({}_{81}\) due to the larger amount of available Fe states above the Fermi level [17]. On the other hand, in elemental Ni, the OISTR process is not possible.

One of the first experiments to verify the OISTR effect was an experiment on a Fe\({}_{50}\)Ni\({}_{50}\) alloy using the ultrafast transverse magneto-optical effect (T-MOKE) in the extreme ultraviolet (EUV) region [9].
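For orientation, the magnetic asymmetry analyzed in such T-MOKE measurements is conventionally defined (the definition is not restated in this article) from the reflected intensities for the two opposite transverse magnetization directions,

\[A(E,\theta)=\frac{I_{+M}(E,\theta)-I_{-M}(E,\theta)}{I_{+M}(E,\theta)+I_{-M}(E,\theta)},\]

so that its magnitude and sign depend both on the magneto-optical response, i.e. the off-diagonal dielectric tensor element \(\varepsilon_{xy}\), and on the purely optical reflectivity at the photon energy \(E\) and incidence angle \(\theta\).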
The T-MOKE experiment showed a spectrally-dependent increase in magnetic asymmetry in Ni and a concomitant decrease in Fe, and the corresponding transient dynamics of majority and minority spin occupation in the Fe\({}_{50}\)Ni\({}_{50}\) alloy were calculated by TDDFT. However, a similar ultrafast increase in Ni has recently also been observed in pure Ni material [18; 19], where spin transfer to another subsystem is not possible. On the one hand, such a signature is not unexpected, since ultrafast excitation also drives occupation changes in pure materials. On the other hand, the question arises to what extent the observed increase in the magnetic asymmetry is a valid signature to verify spin transfer from Ni to Fe in Fe\({}_{50}\)Ni\({}_{50}\). The situation is further complicated by the fact that identical magnetization dynamics in EUV T-MOKE can appear drastically different depending on the precise experimental geometry [19]. The critical question is therefore whether OISTR can experimentally be verified in Fe\({}_{50}\)Ni\({}_{50}\).

In this work, we present new data on this topic in a comparative experimental and theoretical study of laser-driven spin transfer processes in Fe\({}_{50}\)Ni\({}_{50}\), Fe\({}_{19}\)Ni\({}_{81}\) (permalloy), and pure Ni (see Fig. 1). We revisit the previously observed signatures of the OISTR effect and, following the approach that we developed in Ref. [19], perform a full time-resolved reconstruction of the dielectric tensor. This allows us to directly compare the same quantity obtained from experiment and theory: namely, the transient dynamics in the dielectric tensor. We find that in all three materials signatures of the ultrafast laser-driven spectral redistribution of spins can be clearly identified. Furthermore, we find that the amplitude of these dynamics at the Ni site increases with increasing Fe content in the alloy, supporting the OISTR description. In the case of Fe\({}_{50}\)Ni\({}_{50}\), we experimentally verify the transfer of minority spins from the Ni subsystem to the Fe subsystem, i.e. the OISTR effect. Finally, we are also able to explain the origin of the much-discussed delayed demagnetization behavior of Fe and Ni in these alloys [20; 21; 22; 23; 24; 25].

Figure 2: Apparently contradictory EUV T-MOKE measurements of OISTR in Fe\({}_{19}\)Ni\({}_{81}\), recorded at different EUV incidence angles and otherwise identical conditions (the data of Fe\({}_{50}\)Ni\({}_{50}\) and Ni can be found in the Supplementary Figure S1). (a) Static T-MOKE asymmetry for two incidence angles. Note that the magnetic asymmetry changes sign around 52 eV and 64 eV for the two different incidence angles. (b, c) Time-resolved magnetic asymmetry traces for 43.4\({}^{\circ}\) and 45.1\({}^{\circ}\) incidence angle, respectively. The analyzed photon energies around the M absorption edges of Fe and Ni are marked as colored bars in (a). At 43.4\({}^{\circ}\), the typical transient increase at 63.9 eV that was previously associated with OISTR is seen [9], while no such increase is visible at 45.1\({}^{\circ}\).

## Results

We begin with a measurement of Fe\({}_{19}\)Ni\({}_{81}\) that very clearly illustrates that an increase in T-MOKE asymmetry in a specific spectral region may not be sufficient to verify spin transfer effects without further analysis. Fig. 2a shows energy-resolved magnetic asymmetries measured with EUV T-MOKE of the Fe\({}_{19}\)Ni\({}_{81}\) sample for two different incidence angles (\(\theta\)) of the EUV light.
While the asymmetries look quite similar, the zero crossings are slightly shifted so that there are spectral regions where the asymmetry is negative for one angle of incidence and positive for the other, most notably at 52 and 64 eV. Most curiously, we find apparently contradictory ultrafast dynamics of the T-MOKE asymmetry in the time-resolved experiment for these spectral regions: Fig. 2b shows the time-resolved change of the asymmetry as a function of EUV energy at \(\theta=43.4^{\circ}\). Here, the signal at 63.9 eV (dark blue) shows a time-resolved increase in the T-MOKE asymmetry, which is exactly the signature used to verify the OISTR effect in our previous work. However, in Fig. 2c, for a slightly different angle of incidence (\(\theta=45.1^{\circ}\)), the same spectral region shows the exact opposite behavior, namely an ultrafast decrease. Note that we observe this behavior for all three samples Ni, Fe\({}_{19}\)Ni\({}_{81}\), and Fe\({}_{50}\)Ni\({}_{50}\) (cf. Supplementary Figure S1). Clearly, no conclusion should be drawn from these data without further knowledge of the OISTR effect and its signatures in the T-MOKE signal. To overcome this problem, we apply an extended analysis, based on the experimental determination of the transient dynamics of the dielectric tensor that we developed in Ref. [19]. This analysis shows that the peculiar behavior of the asymmetry increase and decrease for two different incidence angles is due to a transient rotation of the off-diagonal element of the dielectric tensor (\(\varepsilon_{xy}\)) in the complex plane. If this rotation of \(\varepsilon_{xy}\) happens at photon energies near the zero-crossings of the T-MOKE asymmetry, it can lead to opposite dynamics in the magnetic asymmetry due to the projection of \(\varepsilon_{xy}\) onto an angle-dependent probe vector (see Ref. [19] for the description of the analysis and the influence of the rotation of \(\varepsilon_{xy}\) on the T-MOKE asymmetry spectra; see SI for the additional data needed to perform this analysis). Instead, extracting \(\varepsilon_{xy}\) from the angle-dependent T-MOKE asymmetry data has several advantages: (i) the real part of \(\varepsilon_{xy}\) is related to the spin-polarization of the unoccupied states and can also be measured by other techniques, such as X-ray magnetic circular dichroism (XMCD); (ii) the quantity \(\varepsilon_{xy}\) is now independent of the measurement technique or geometry used; (iii) the transient changes in \(\varepsilon_{xy}\) allow a direct comparison with TDDFT calculations. Fig. 3a shows TDDFT calculations [26; 27] of the transient dynamics of the spin-resolved occupation in Fe and Ni in the Fe\({}_{50}\)Ni\({}_{50}\) sample. A calculation of Fe\({}_{19}\)Ni\({}_{81}\) is computationally very costly due to the large required supercell size and was therefore not performed for the present study. The calculations for Fe\({}_{50}\)Ni\({}_{50}\) are similar to the results presented in Hofherr _et al._[9], but adapted for the pump pulse energy (1.2 eV) and pulse duration (47 fs) of the present experiment [28]. While the Fe and Ni electrons share a common (metallic) band structure in the alloy, the character of the different states can be projected onto bands originating dominantly from the Ni or Fe subsystem. The OISTR effect then manifests itself in transitions from minority states below the Fermi level with predominantly Ni character to minority states above the Fermi level with predominantly Fe character. 
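To make the inversion behind the extraction of \(\varepsilon_{xy}\) described above more concrete: at a fixed photon energy, the measured asymmetry can, to a good approximation, be written as the projection of the complex \(\varepsilon_{xy}\) onto an angle-dependent complex probe vector, \(A(\theta)\approx\mathrm{Re}[w(\theta)\,\varepsilon_{xy}]\). The short Python fragment below is only a simplified sketch of this idea from Ref. [19], with the probe vectors \(w(\theta)\) treated as known inputs (in practice they follow from a magneto-optical Fresnel calculation with tabulated optical constants); it is not the analysis code used in this work.

```python
import numpy as np

def epsilon_xy_from_asymmetries(A_theta1, A_theta2, w1, w2):
    """Invert A(theta) ~= Re[w(theta) * eps_xy] measured at two incidence angles.

    A_theta1, A_theta2 : asymmetry traces vs. pump-probe delay (one photon energy).
    w1, w2             : complex probe vectors for the two angles (assumed known,
                         e.g. from a Fresnel calculation with tabulated optical constants).
    Returns the complex eps_xy(t).
    """
    # Re[w * eps] = Re(w)*Re(eps) - Im(w)*Im(eps)  ->  2x2 real linear system per delay
    M = np.array([[w1.real, -w1.imag],
                  [w2.real, -w2.imag]])
    rhs = np.vstack([A_theta1, A_theta2])          # shape (2, n_delays)
    re_eps, im_eps = np.linalg.solve(M, rhs)
    return re_eps + 1j * im_eps

# Toy usage with made-up probe vectors and a fake transient rotation/quenching of eps_xy:
w1, w2 = 0.8 + 0.3j, 0.5 - 0.6j
t = np.linspace(-50, 300, 100)
eps_true = (1.0 - 0.2 * (t > 0)) * np.exp(1j * 0.1 * (t > 0))
eps_rec = epsilon_xy_from_asymmetries((w1 * eps_true).real, (w2 * eps_true).real, w1, w2)
print(np.allclose(eps_rec, eps_true))   # True: the two projections determine eps_xy uniquely
```

The example also illustrates why a single incidence angle is ambiguous: one projection alone cannot distinguish a quenching of \(\varepsilon_{xy}\) from a rotation in the complex plane.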
Of course, other transitions within the subsystems also contribute to the optical response, but they do not transfer spin from one subsystem to the other and are therefore not discussed further here. Transitions between the subsystems within the majority channel are also possible, but these lead to only minor changes compared to the strong changes in the minority channel (cf. Supplementary Figure S4). As can be seen from these calculations, there is a spectral region below the Fermi level that shows a depletion of minority spins for states with a predominant Ni character, leading to an increase in the energy-resolved magnetic moment at the Ni site. Conversely, there are spectral regions above the Fermi level of the Fe states where the increase in minority spins leads to a rapid quenching of the energy-resolved magnetic moment. Hofherr _et al._[9] found two similar features in the time-resolved asymmetry and interpreted them as indicators of the OISTR effect. However, for the reasons given above, we now go one step further and calculate from TDDFT the signature of OISTR in the transient dynamics of the dielectric tensor [29; 30]. Fig. 3b shows the real part of \(\varepsilon_{xy}\) from theory (lines) and experiment (points) for two exemplary pump-probe delays (before time zero and at 55 fs). The first important observation from the calculated time- and spectrally-resolved Re(\(\varepsilon_{xy}\)) data is that the spectrally very distinct dynamics in the spin-resolved DOS (Fig. 3a) are strongly broadened. This is due to the intrinsic linewidth of the 3p core levels and their partial overlap. Nevertheless, the second important observation is that it is still possible to pinpoint the OISTR effect from the theory data: for photon energies between 63.0 eV and 64.4 eV,
Second, however, we do not have sufficient EUV intensity in the important spectral region just above the Fe M-edge, which is expected to show the strong and rapid decrease due to the increase of minority spins above the Fermi level (orange dotted line in theory, Fig. 3c, absent in experiment, Fig. 3d). The next available EUV harmonic with sufficient intensity in our experimental data is at 0.2 eV below Fe M-edge, and we find good qualitative agreement with theory (red lines in Fig. 3c,d). However, there is also a qualitative discrepancy with respect to timescales larger than 80 fs, where the theoretical magnetization remains constant while the experimental magnetization decreases. This can be understood by considering the limitations of the TDDFT calculations: TDDFT is well suited to describe the very early magnetization dynamics induced by the pump pulse and in particular the excitation. On longer timescales, however, the well-known demagnetization processes evolve, not all of which are included in TDDFT. In particular, Elliott-Yafet spin-flip electron-phonon scattering Elliott and Yafet (1966) is not included in TDDFT, which explains that TDDFT does not capture the full magnetization decrease for timescales >80 fs. In summary, we can clearly verify the loss of minority spins in the Ni subsystem from these experimental data, but we have no experimental evidence that these spins are transferred to the Fe subsystem. For the present work, we now aim to compare the spectrally-resolved T-MOKE data of pure Ni [32], where only an inter-energy spin-transfer is involved, with data from Fe\({}_{19}\)Ni\({}_{81}\) and Fe\({}_{50}\)Ni\({}_{50}\), where an intersite spin transfer between Ni and Fe becomes possible and is predicted by theory (for the Fe\({}_{50}\)Ni\({}_{50}\) alloy). Fig. 4a shows the experimentally analyzed transient change of \(\mathrm{Re}(\varepsilon_{xy})\) for the spectral region where the OISTR-induced increase was expected and verified in the case of Fe\({}_{50}\)Ni\({}_{50}\) (dashed). Compared to Fe\({}_{50}\)Ni\({}_{50}\), the increase in Fe\({}_{19}\)Ni\({}_{81}\) (dotted) is much less pro nounced and mostly seen as a delay in the case of pure Ni material (line). From these data, we can directly conclude that the OISTR-relevant transition is most efficiently excited in the Fe\({}_{50}\)Ni\({}_{50}\), less efficiently in Fe\({}_{19}\)Ni\({}_{81}\), and even less in Ni. In comparison with theory (Fig. 4b), we again find good qualitative agreement for Fe\({}_{50}\)Ni\({}_{50}\) as discussed above, but also for Ni. Note that the absorbed fluence and the quenching of the transient asymmetry are not the same for all three measurements (cf. SI). To shed light on the origin of the observed differences in excitation efficiencies in the minority channels, we take a closer look at the band-structure of these materials, and possible excitation pathways. Fig. 5a shows the spin-resolved density of states for pure Ni material. In Ni, more (spin-conserving optical) excitations are possible in the minority channel than in the majority channel. These transitions are captured in theory and experiment by a loss of minority spins below the Fermi level, and lead to the observed small increase for Re(\(\varepsilon_{xy}\)) at \(\approx\)2.1 eV below the Ni edge. In the case of Fe\({}_{50}\)Ni\({}_{50}\), as shown in Fig. 5b, the situation is different. Here, the addition of Fe adds additional final states above the Fermi level in the minority channel. 
Thus, a first glance at the band structure suggests that additional transitions from the Ni minority states below the Fermi Figure 4: Comparison of the transient off-diagonal tensor element Re(\(\varepsilon_{xy}\)) for 63.9 eV for Fe\({}_{50}\)Ni\({}_{50}\), Fe\({}_{19}\)Ni\({}_{81}\) and Ni, (a) extracted from measured EUV T-MOKE data, and (b) calculated by TDDFT. The spin transfer between Ni and Fe is visible by an increase of Re(\(\varepsilon_{xy}\)) at 63.9 eV, which probes Ni minority states below the Fermi level. In experiment, the spin transfer is found to be more efficient in Fe\({}_{50}\)Ni\({}_{50}\) than in Fe\({}_{19}\)Ni\({}_{81}\). In Ni, minority electrons are excited from below to above the Fermi level, which results in a short and comparatively small increase of Re(\(\varepsilon_{xy}\)). level to the Fe minority states above the Fermi level become possible. According to theory, this is indeed the case and leads to the strong relative increase of Re(\(\epsilon_{xy}\)) at 2.1 eV below the Ni edge and, in addition, to a rapid relative decrease of Re(\(\epsilon_{xy}\)) just above the Fe edge. While we cannot probe the rapid decrease with the given EUV light source in our experiment, we very well reproduce the modified efficiency of the OISTR transition and therefore conclude that the experiment also shows that in Fe\({}_{50}\)Ni\({}_{50}\) OISTR is operative, i.e. minority spins from Ni are indeed pumped into minority states of Fe by the ultrafast laser excitation. Next, we see that the Fe\({}_{19}\)Ni\({}_{81}\) alloy, according to the experiment (see Fig. 4a), lies between the situation of Ni, where no intersite spin transfer is possible, and Fe\({}_{50}\)Ni\({}_{50}\), where OISTR is apparently strong. Using the same reasoning as above, one would expect some amount of Fe DOS above the Fermi level, but less than in the case of Fe\({}_{50}\)Ni\({}_{50}\). Indeed, such a series of spin-resolved DOS has already been calculated in Ref. [17] and is in full agreement with the expectations and observations developed above. Finally, we would like to discuss our new results in relation to the results on permalloy obtained in the last decade with EUV T-MOKE (Mathias _et al._[20], Gunther _et al._[21] suppl., Jana _et al._[23]). In these works, Fe and Ni showed a delayed demagnetization behavior with identical demagnetization time constants, which was recently verified in XMCD experiments at the L-edges of Fe and Ni [25]. The identical demagnetization time constants for Fe and Ni aren't surprising: Fe and Ni are strongly exchange-coupled in these alloys. However, the important question has always been what causes the delay between the two subsystems. The interpretation of the OISTR process induced by the pump pulse now makes this clear. OISTR initiates a non-equilibrium between Figure 5: Occupied minority and majority states before (grey) and after the optical excitation (t=55 fs, blue and red) calculated with TDDFT for Ni (a) and Fe\({}_{50}\)Ni\({}_{50}\) (b). the Ni and Fe subsystems via the transfer of minority spins on the very short timescale of the pump excitation. Subsequently, demagnetization occurs via the well-known spin flip and exchange scattering processes, and both subsystems start to demagnetize at the same rate. At the time of the first experiment [20], the small increase due to OISTR in permalloy was not resolved, but the subsequent delayed behaviour in the demagnetization of the Fe and Ni subsystems was clearly identified. 
However, for the reasons given above, we also revisit the delayed demagnetization of Ni and Fe and perform an energy-integrated analysis of \(\mathrm{Re}(\epsilon_{xy})\). From this extended analysis, we still find a demagnetization delay for Ni in \(\mathrm{Fe}_{19}\mathrm{Ni}_{81}\) of \(12\pm 3\) fs, while \(\mathrm{Fe}_{50}\mathrm{Ni}_{50}\) actually shows both a transient increase in Ni and a relative delay of \(95\pm 7\) fs (see SI for the transient dynamics of \(\mathrm{Re}(\epsilon_{xy})\)). We emphasize that it is really the stronger OISTR effect that enhances the delayed behavior, and not a modification of the exchange coupling, which also influences the delayed behavior and was previously observed for permalloy alloyed with Cu [20].

In conclusion, we have carried out a combined experimental-theoretical analysis of optically-driven spin-transfer in Ni, \(\mathrm{Fe}_{19}\mathrm{Ni}_{81}\), and \(\mathrm{Fe}_{50}\mathrm{Ni}_{50}\). We have paid special attention to the observed increase and delay of the magnetic asymmetry in T-MOKE, which was found in the alloys, but also in pure Ni material. Through an extended analysis (see Ref. [19]), we are able to directly compare transient dynamics in the real part of the off-diagonal element of the dielectric tensor \(\mathrm{Re}(\epsilon_{xy})\). Crucially, this procedure allows us to make a direct comparison with TDDFT calculations, which helps us to identify and explain all the observed signals. In summary, we verify OISTR in \(\mathrm{Fe}_{50}\mathrm{Ni}_{50}\) and elucidate the origin of the previously found delays in these material systems.

## IV Methods

### EUV T-MOKE data for FeNi alloys and Ni

We measured the transient T-MOKE asymmetry for multiple incidence angles in order to be able to extract the transient off-diagonal tensor element \(\mathrm{Re}(\epsilon_{xy})\). The reflected 100 kHz EUV probe beam spans energies between 30 and 72 eV. We estimate the resolution of the spectrometer to be better than 0.2 eV, while the photon energy calibration is accurate to \(<2\%\). Details on the angle-resolved measurement and analysis are given in Ref. [2], while general information on the experimental setup can be found in Ref. [24]. The experimental T-MOKE asymmetries and their dynamics for \(\mathrm{Fe}_{50}\mathrm{Ni}_{50}\), \(\mathrm{Fe}_{19}\mathrm{Ni}_{81}\) and Ni are shown in Supplementary Figure 1. Here, we pumped the samples with \(47\pm 5\) fs pulses (Gaussian FWHM) with a photon energy of 1.2 eV. The absorbed fluence is slightly different for each measurement: \(0.8\pm 0.2\ \nicefrac{{\mathrm{mJ}}}{{\mathrm{cm}^{2}}}\) for Ni, \(1.1\pm 0.2\ \nicefrac{{\mathrm{mJ}}}{{\mathrm{cm}^{2}}}\) for Fe\({}_{50}\)Ni\({}_{50}\) and \(0.8\pm 0.2\ \nicefrac{{\mathrm{mJ}}}{{\mathrm{cm}^{2}}}\) for Fe\({}_{19}\)Ni\({}_{81}\).

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## References

* Malinowski _et al._ (2008)G. Malinowski, F. D. Longa, J. H. H. Rietjens, P. V. Paluskar, R. Huijink, H. J. M. Swagten, and B. Koopmans, Nature Physics **4**, 855 (2008). * Rudolf _et al._ (2012)D. Rudolf, C. La-O-Vorakiat, M. Battiato, R. Adam, J. M. Shaw, E. Turgut, P. Maldonado, S. Mathias, P. Grychtol, H. T. Nembach, T. J. Silva, M. Aeschlimann, H. C. Kapteyn, M. M. Murnane, C. M. Schneider, and P. M. Oppeneer, Nat Commun **3**, 1037 (2012). * Battiato _et al._ (2010)M. Battiato, K. Carva, and P. M. Oppeneer, Physical Review Letters **105**, 027203 (2010).
* Melnikov _et al._ (2011)A. Melnikov, I. Razdolski, T. O. Wehling, E. T. Papaioannou, V. Roddatis, P. Fumagalli, O. Aktsipetrov, A. I. Lichtenstein, and U. Bovensiepen, Physical Review Letters **107**, 076601 (2011). * Seifert _et al._ (2016)T. Seifert, S. Jaiswal, U. Martens, J. Hannegan, L. Braun, P. Maldonado, F. Freimuth, A. Kronenberg, J. Henrizi, I. Radu, E. Beaurepaire, Y. Mokrousov, P. M. Oppeneer, M. Jourdan, G. Jakob, D. Turchinovich, L. M. Hayden, M. Wolf, M. Munzenberg, M. Klaui, and T. Kampfrath, Nature Photon **10**, 483 (2016). * Stanciu _et al._ (2007)C. D. Stanciu, F. Hansteen, A. V. Kimel, A. Kirilyuk, A. Tsukamoto, A. Itoh, and T. Rasing, Phys. Rev. Lett. **99**, 047601 (2007). * Lambert _et al._ (2014)C.-H. Lambert, S. Mangin, C. B. Varaprasad, Y. Takahashi, M. Hehn, M. Cinchetti, G. Malinowski, K. Hono, Y. Fainman, M. Aeschlimann, and E. E. Fullerton, Science **345**, 1337 (2014), 1403.0784. * Schlauderer _et al._ (2019)S. Schlauderer, C. Lange, S. Baierl, T. Ebnet, C. P. Schmid, D. C. Valovcin, A. K. Zvezdin, A. V. Kimel, R. V. Mikhaylovskiy, and R. Huber, Nature **569**, 383 (2019). * Hofherr _et al._ (2020)M. Hofherr, S. Hauser, J. K. Dewhurst, P. Tengdin, S. Sakshath, H. T. Nembach, S. T. Weber, J. M. Shaw, T. J. Silva, H. C. Kapteyn, M. Cinchetti, B. Rethfeld, M. M. Murnane, D. Steil, B. Stadtmuller, S. Sharma, M. Aeschlimann, and S. Mathias, Science Advances **6**, eaay8717 (2020). * Tengdin _et al._ (2020)P. Tengdin, C. Gentry, A. Blonsky, D. Zusin, M. Gerrity, L. Hellbruck, M. Hofherr, J. Shaw, Y. Kvashnin, E. K. Delczeg-Czirjak, M. Arora, H. Nembach, T. J. Silva, S. Mathias, M. Aeschlimann, H. C. Kapteyn, D. Thonig, K. Koumpouras, O. Eriksson, and M. M. Murnane, Science Advances **6**, eaaz1100 (2020). * Willems _et al._ (2020)F. Willems, C. von Korff Schmising, C. Struber, D. Schick, D. W. Engel, J. K. Dewhurst, P. Elliott, S. Sharma, and S. Eisebitt, Nat Commun **11**, 871 (2020). * Siegrist _et al._ (2019)F. Siegrist, J. A. Gessner, M. Ossiander, C. Denker, Y.-P. Chang, M. C. Schroder, A. Guggenmos, Y. Cui, J. Walowski, U. Martens, J. K. Dewhurst, U. Kleineberg, M. Munzenberg, S. Sharma, and M. Schultze, Nature **571**, 240 (2019). * Steil _et al._ (2020)D. Steil, J. Walowski, F. Gerhard, T. Kiessling, D. Ebke, A. Thomas, T. Kubota, M. Oogane, Y. Ando, J. Otto, A. Mann, M. Hofherr, P. Elliott, J. K. Dewhurst, G. Reiss, L. Molenkamp, M. Aeschlimann, M. Cinchetti, M. Munzenberg, S. Sharma, and S. Mathias, Phys. Rev. Research **2**, 023199 (2020). * Ryan _et al._ (2023)S. A. Ryan, P. C. Johnsen, M. F. Elhanoty, A. Grafov, N. Li, A. Delin, A. Markou, E. Lesne, C. Felser, O. Eriksson, H. C. Kapteyn, O. Granas, and M. M. Murnane, arXiv (2023), 2305.16455. * El-Ghazaly _et al._ (2020)A. El-Ghazaly, J. Gorchon, R. B. Wilson, A. Pattabi, and J. Bokor, Journal of Magnetism and Magnetic Materials **502**, 166478 (2020). * Dewhurst _et al._ (2018)J. K. Dewhurst, P. Elliott, S. Shallcross, E. K. U. Gross, and S. Sharma, Nano Letters **18**, 1842 (2018). * Minar _et al._ (2014)J. Minar, S. Mankovsky, O. Sipr, D. Benea, and H. Ebert, Journal of Physics: Condensed Matter **26**, 274206 (2014). * Hennes _et al._ (2021)M. Hennes, B. Rosner, V. Chardonnet, G. S. Chiuzbaian, R. Delaunay, F. Doring, V. A. Guzenko, M. Hehn, R. Jarrier, A. Kleibert, M. Lebugle, J. Luning, G. Malinowski, A. Merhe, D. Naumenko, I. P. Nikolov, I. Lopez-Quintas, E. Pedersoli, T. Savchenko, B. Watts, M. Zangrando, C. David, F. Capotondi, B. Vodungbo, and E. Jal, Applied Sciences **11**, 325 (2021). 
* Probst _et al._ (2023)H. Probst, C. Moller, M. Schumacher, T. Brede, J. K. Dewhurst, M. Reutzel, D. Steil, S. Sharma, G. S. M. Jansen, and S. Mathias, "Unraveling Femtosecond Spin and Charge Dynamics with EUV T-MOKE Spectroscopy," (2023), arXiv:2306.02783 [cond-mat]. * Mathias _et al._ (2012)S. Mathias, C. La-O-Vorakiat, P. Grychtol, P. Granitzka, E. Turgut, J. M. Shaw, R. Adam, H. T. Nembach, M. E. Siemens, S. Eich, C. M. Schneider, T. J. Silva, M. Aeschlimann, M. M. Murnane, and H. C. Kapteyn, Proceedings of the National Academy of Sciences **109**, 4792 (2012). * Gunther _et al._ (2014)S. Gunther, C. Spezzani, R. Ciprian, C. Grazioli, B. Ressel, M. Coreno, L. Poletto, P. Miotti, M. Sacchi, G. Panaccione, V. c. v. Uhlir, E. E. Fullerton, G. De Ninno, and C. H. Back, Phys. Rev. B **90**, 180407 (2014). * Yao _et al._ (2020)K. Yao, F. Willems, C. von Korff Schmising, I. Radu, C. Struber, D. Schick, D. Engel, A. Tsukamoto, J. Dewhurst, S. Sharma, and others, Physical Review B **102**, 100405 (2020). * Jana _et al._ (2017)S. Jana, J. A. Terschlusen, R. Stefanuik, S. Plogmaker, S. Troisi, R. S. Malik, M. Svanqvist, R. Knut, J. Soderstrom, and O. Karis, Review of Scientific Instruments **88**, 033113 (2017). * Moller _et al._ (2021)C. Moller, H. Probst, J. Otto, K. Stroh, C. Mahn, S. Steil, V. Moshnyaga, G. M. Jansen, D. Steil, and S. Mathias, Review of Scientific Instruments **92**, 065107 (2021). * Jana _et al._ (2022)S. Jana, R. Knut, S. Muralidhar, R. S. Malik, R. Stefanuik, J. Akerman, O. Karis, C. Schussler-Langeheine, and N. Pontius, Applied Physics Letters **120**, 102404 (2022). * Dewhurst _et al._ (2016)J. Dewhurst, K. Krieger, S. Sharma, and E. Gross, Computer Physics Communications **209**, 92 (2016). * Krieger _et al._ (2015)K. Krieger, J. Dewhurst, P. Elliott, S. Sharma, and E. Gross, Journal of chemical theory and computation **11**, 4870 (2015). * Bierbrauer _et al._ (2017)U. Bierbrauer, S. T. Weber, D. Schummer, M. Barkowski, A.-K. Mahro, S. Mathias, H. C. Schneider, B. Stadtmuller, M. Aeschlimann, and B. Rethfeld, Journal of Physics: Condensed Matter **29**, 244002 (2017). * Dewhurst _et al._ (2020)J. K. Dewhurst, F. Willems, P. Elliott, Q. Z. Li, C. v. K. Schmising, C. Struber, D. W. Engel, S. Eisebitt, and S. Sharma, Physical Review Letters **124**, 077203 (2020), 1909.00199. * Dewhurst _et al._ (2018)J. K. Dewhurst, S. Sharma, and et. al., elk.sourceforge.net. * Koopmans _et al._ (2010)B. Koopmans, G. Malinowski, F. D. Longa, D. Steiauf, M. Fahnle, T. Roth, M. Cinchetti, and M. Aeschlimann, Nature Materials **9**, 259 (2010). * (32)This data is reproduced from Ref. [19], where we describe the \(\epsilon_{xy}\) analysis procedure in detail. ## Acknowledgements We thank Mario Fix and Manfred Albrecht from the University of Augsburg for the fabrication of the FeNi samples. We also thank Thomas Brede from the Institut fur Materialphysik Gottingen for the fabrication of the Ni sample. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project IDs 399572199 and 432680300/SFB 1456. G.S.M.J. acknowledges financial support by the Alexander von Humboldt Foundation. S.S. and J.K.D. would like to thank the DFG for funding through project-ID 328545488 TRR227 (project A04). ## Author contributions G.S.M.J., M.R., D.S., S.S. and S.M. conceived the research. C.M., H.P., M.S. and M.B. carried out the measurements. C.M., H.P. and G.S.M.J. performed the data analysis. J.K.D. and S.S. performed the TDDFT calculations. 
All authors discussed the results. G.S.M.J., D.S. and S.M. were responsible for the overall project direction. C.M., H.P., G.S.M.J. and S.M. wrote the manuscript with contributions from all co-authors.

## Supplementary information

### EUV T-MOKE data for FeNi alloys and Ni

Supplementary Figure S1 shows the complete data sets that were used to fit \(\mathrm{Re}(\epsilon_{xy})\).

Supplementary Figure S1: All experimental results for Ni, Fe\({}_{19}\)Ni\({}_{81}\) and Fe\({}_{50}\)Ni\({}_{50}\). We measured the transient T-MOKE asymmetry for three different incidence angles for Ni (a-d), and for two different incidence angles for the two FeNi alloys (e-j). The asymmetry around 63.9 eV photon energy shows dramatically different dynamics for different incidence angles.

### Delayed dynamics in FeNi alloys

In order to extract the relative demagnetization delay, we evaluate the FeNi asymmetries for large energy regions around the M edge, as has been done in earlier T-MOKE studies [20; 21; 23; 24]. In contrast to the spectrally-resolved analysis in the main paper, we would call this an element-specific analysis of the data. Supplementary Figure S2 shows the experimental transient evolution of the T-MOKE asymmetry in the marked energy regions. We note that the dynamics strongly differ for different incidence angles \(\theta\). Integrating over extended energy regions results in a "mixing" of the spectral features: at the smaller incidence angle, the integral of the ultrafast increase around 64 eV and the decrease around 66 eV is measured. This leads to the previously reported delay of the ultrafast dynamics of Ni compared to Fe. In contrast, the delay in the energy-integrated signal vanishes at larger incidence angles, because the increase at \(h\nu=64\) eV is not present in the transient asymmetry (cf. Fig. 2 in the main text). Therefore, the transient evolution of the T-MOKE asymmetry is not always a good indication of the magnetic moment and depends strongly on the incidence angle, as described in the previous section and in detail in Ref. [10]. In consequence, we performed an element-specific evaluation of the off-diagonal tensor element \(\mathrm{Re}(\epsilon_{xy})\), which is shown in Supplementary Figure S3. We fitted the transient evolution of \(\mathrm{Re}(\epsilon_{xy})\) with a simple exponential decay fit given by \[\frac{A(t)}{A(t<t_{0})}=G(t)\otimes\left[1-\Theta(t-t_{0})\cdot\Delta A_{m}\cdot\left(1-e^{-\frac{t-t_{0}}{\tau_{m}}}\right)\right]. \tag{1}\] Here, \(t_{0}\) defines the onset of the dynamics, \(\tau_{m}\) is the demagnetization constant, and \(\Delta A_{m}\) is proportional to the maximum demagnetization. The fit results are shown in Table 1. We observe a delayed onset of the dynamics of \(\mathrm{Re}(\epsilon_{xy})\) for Ni compared to Fe that amounts to \(12\pm 3\) fs for \(\mathrm{Fe}_{19}\mathrm{Ni}_{81}\) and to \(95\pm 7\) fs for \(\mathrm{Fe}_{50}\mathrm{Ni}_{50}\).

\begin{table} \begin{tabular}{l c c|c c} & \multicolumn{2}{c|}{\(\mathrm{Fe}_{50}\mathrm{Ni}_{50}\)} & \multicolumn{2}{c}{\(\mathrm{Fe}_{19}\mathrm{Ni}_{81}\)} \\ & Fe & Ni & Fe & Ni \\ \hline \(t_{0}\) (fs) & \(0\pm 2\) & \(95\pm 6\) & \(0\pm 2\) & \(12\pm 2\) \\ \(\Delta A_{m}\) & \(0.286\pm 0.003\) & \(0.234\pm 0.008\) & \(0.525\pm 0.004\) & \(0.498\pm 0.004\) \\ \(\tau_{m}\) (fs) & \(215\pm 6\) & \(184\pm 19\) & \(147\pm 4\) & \(153\pm 3\) \\ \hline \end{tabular} \end{table} Table 1: Fit results of the exponential fits to the transient evolution of \(\mathrm{Re}(\epsilon_{xy})\) (Supplementary Figure S3) for Fe and Ni in both FeNi alloys.
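As an illustration of how the fit model in Eq. (1) can be evaluated numerically, the following sketch implements the step-exponential decay convolved with a Gaussian pump profile \(G(t)\) and fits it with SciPy. It is only a sketch under our own assumptions: the 47 fs default pulse duration, the array names and the initial guesses are illustrative and not taken from the analysis code used for this work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def demag_model(t, t0, dA, tau, fwhm=47.0):
    """Eq. (1): G(t) convolved with 1 - Theta(t - t0) * dA * (1 - exp(-(t - t0)/tau)).

    t is a uniformly spaced time axis in fs; fwhm is the assumed Gaussian pump duration.
    """
    dt = np.clip(t - t0, 0.0, None)                 # Theta(t - t0) * (t - t0)
    signal = 1.0 - dA * (1.0 - np.exp(-dt / tau))
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / (t[1] - t[0])
    return gaussian_filter1d(signal, sigma)         # discrete stand-in for the convolution with G(t)

# Hypothetical usage with a normalized Re(eps_xy)(t) trace on the time axis t_fs:
# popt, _ = curve_fit(demag_model, t_fs, trace, p0=(0.0, 0.3, 200.0))
# t0_fit, dA_fit, tau_fit = popt
```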
### TDDFT: OISTR and spin flips in FeNi

Fig. S4 shows the transient changes in the occupation of the minority and majority channels in Fe\({}_{50}\)Ni\({}_{50}\) calculated by TDDFT. As expected, optical excitation with a 1.2 eV pump pulse will induce a decrease of carriers below the Fermi level and an increase above the Fermi level. The incident fluence is 18 \(\nicefrac{{\mathrm{mJ}}}{{\mathrm{cm}^{2}}}\). The optical intersite spin transfer (OISTR) can be seen by the strong loss of minority Ni carriers around 1 eV below E\({}_{F}\) and by a strong increase of Fe minority carriers \(\approx\) 1 eV above E\({}_{F}\). This process happens directly at the onset of the optical laser pulse (47 fs FWHM). At a time of t=120 fs we find that scattering processes already dominate the dynamics. Spin-orbit mediated spin flips below the Fermi level lead to the strong losses in the majority channel both for Ni and Fe.
2303.07035
FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with Benchmarks Using Supervised and Self-supervised Learning
In recent decades, wildfires, as widespread and extremely destructive natural disasters, have caused tremendous property losses and fatalities, as well as extensive damage to forest ecosystems. Many fire risk assessment projects have been proposed to prevent wildfires, but GIS-based methods are inherently challenging to scale to different geographic areas due to variations in data collection and local conditions. Inspired by the abundance of publicly available remote sensing projects and the burgeoning development of deep learning in computer vision, our research focuses on assessing fire risk using remote sensing imagery. In this work, we propose a novel remote sensing dataset, FireRisk, consisting of 7 fire risk classes with a total of 91872 labelled images for fire risk assessment. This remote sensing dataset is labelled with the fire risk classes supplied by the Wildfire Hazard Potential (WHP) raster dataset, and remote sensing images are collected using the National Agriculture Imagery Program (NAIP), a high-resolution remote sensing imagery program. On FireRisk, we present benchmark performance for supervised and self-supervised representations, with Masked Autoencoders (MAE) pre-trained on ImageNet1k achieving the highest classification accuracy, 65.29%. This remote sensing dataset, FireRisk, provides a new direction for fire risk assessment, and we make it publicly available on https://github.com/CharmonyShen/FireRisk.
Shuchang Shen, Sachith Seneviratne, Xinye Wanyan, Michael Kirley
2023-03-13T11:54:16Z
http://arxiv.org/abs/2303.07035v2
FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with Benchmarks Using Supervised and Self-supervised Learning

###### Abstract

In recent decades, wildfires, as widespread and extremely destructive natural disasters, have caused tremendous property losses and fatalities, as well as extensive damage to forest ecosystems. Many fire risk assessment projects have been proposed to prevent wildfires, but GIS-based methods are inherently challenging to scale to different geographic areas due to variations in data collection and local conditions. Inspired by the abundance of publicly available remote sensing projects and the burgeoning development of deep learning in computer vision, our research focuses on assessing fire risk using remote sensing imagery. In this work, we propose a novel remote sensing dataset, FireRisk, consisting of 7 fire risk classes with a total of 91 872 labelled images for fire risk assessment. This remote sensing dataset is labelled with the fire risk classes supplied by the Wildfire Hazard Potential (WHP) raster dataset [9], and remote sensing images are collected using the National Agriculture Imagery Program (NAIP) [27], a high-resolution remote sensing imagery program. On FireRisk, we present benchmark performance for supervised and self-supervised representations, with Masked Autoencoders (MAE) [16] pre-trained on ImageNet1k achieving the highest classification accuracy, 65.29%. This remote sensing dataset, FireRisk, provides a new direction for fire risk assessment, and we make it publicly available on [https://github.com/CharmonyShen/FireRisk](https://github.com/CharmonyShen/FireRisk).

## 1 Introduction

Forests have always been a crucial part of ecosystems because of their capacity to filter air, preserve the quality of the soil, and hold onto precipitation [28]. At the same time, woodlands serve as sources of raw materials for human industry, possible areas for human employment, and homes for a broad diversity of animal species. Therefore, if a wildfire extensively burns a forest, it causes irreparable economic losses as well as harm to the ecosystem. For instance, Australia was ravaged by bushfires for more than six months starting in September 2019. Property loss from these fires is estimated to be worth over $100 billion. Even more serious were the deterioration of soil and air quality and the loss of numerous animals as a result of these bushfires [15]. Many studies have been conducted to address the harmful effects of this natural hazard using a multitude of techniques. Most existing fire risk modelling is founded on geoscientific knowledge. A study conducted by [20] found a strong connection between fire risk models based on geospatial data and actual fire incidences. Based on this theoretical investigation, many traditional fire risk models are developed from fire-related parameters utilizing various data analysis techniques. For example, [1] generated forest fire risk maps using a multiple-criteria decision analysis approach. Combining a geographic information system (GIS) with an analytical hierarchy process (AHP), [29] evaluated seven fire risk elements, including climate, topography, and human influence. Due to the proliferation of satellite and aerial remote sensing projects that are available to the public, enormous numbers of remote sensing images are now readily accessible.
Moreover, as a result of the development of optical sensors, the resolution of remote sensing photographs has grown substantially, allowing surface features to be differentiated more clearly. In the past few years, remote sensing images have been widely used in many practical tasks [6, 11, 24]. Therefore, several studies have examined incorporating remote sensing images into fire risk assessments. In an earlier study by [19], seasonal remote sensing data from MODIS satellite imagery, climate data and fuel type were combined to assess the seasonal fire potential in different regions using the fire potential index (FPI). Some recent studies utilize machine learning methods for remote sensing imagery and related geographic variables. Using geographical information given by Landsat 8 satellite images, such as land surface temperature (LST), the normalized difference moisture index (NDMI), and land use and land cover (LULC), [22] predicted the vulnerability to forest fires with three basic machine learning models. In addition, [31] suggested a topography-, weather-, and fuel-based fire assessment approach in which the fuel variables were derived from the MODIS remote sensing project and random forests were used to investigate the association between variables and wildfire in order to build a dataset of wildfire potential. However, these solutions still rely to some extent on other geoscientific features, which require specialised knowledge. Although the use of geospatial data can improve the accuracy of fire risk assessment, these models lack adequate generality due to inconsistencies in fire risk features between models. Thus, we question whether a simpler method exists for linking solely remote sensing images to fire risk while still achieving satisfactory results. Motivated by these considerations, we assume that remote sensing images contain geographic information that can reflect the degree of fire risk; e.g., the tree species in a forest and the proximity to areas of human activity, both identifiable from remote sensing images, can indirectly mirror how easily a fire can occur. The objective of this work is to develop a simple scheme to describe a mapping between remote sensing imagery and fire risk on the Earth's surface. Using data provided by the Wildfire Hazard Potential (WHP) [9] and the National Agriculture Imagery Program (NAIP) [27], we construct a remote sensing dataset, FireRisk, for fire risk assessment. This remote sensing dataset for fire risk assessment consists of 91 872 labelled images, where each remote sensing image corresponds to a fire risk class, with a total of seven fire risk classes. Figure 1 depicts an example image for each of these classes, from which it is intuitively evident that forest cover and fire danger are correlated. In addition, we collect an unlabelled dataset from NAIP that is used to pre-train the self-supervised benchmark models described below. The labelled fire risk dataset, FireRisk, and the unlabelled dataset for pre-training, UnlabelledNAIP, are publicly available on https://github.com/CharmonyShen/FireRisk. Further, we provide benchmark evaluations of supervised and self-supervised learning on FireRisk. Using transfer learning, we fine-tune ResNet-50 [17] and ViT-B/16 [10], as well as DINO [5] and MAE [16] with ViT-B/16 as the backbone, all of which were pre-trained on ImageNet [8], on our FireRisk to evaluate the classification accuracy and F1-score of the different benchmarks.
For the self-supervised learning models, DINO [5] and MAE [16], we additionally measure the end-to-end performance of benchmark models pre-trained on our UnlabelledNAIP. Our main contributions in this work are:

* We propose FireRisk, a remote sensing dataset for fire risk assessment, and offer a novel method for constructing a mapping between 7 fire risk classes and remote sensing images.
* To investigate the performance of supervised and self-supervised learning on our FireRisk, we employ ResNet [17], ViT [10], DINO [5], and MAE [16] as benchmark models. With the use of transfer learning, we obtain the results of these models pre-trained on ImageNet [8] and then fine-tuned on our FireRisk.
* Using the performance of our benchmarks on 20% and 50% of the training data from the original FireRisk, we illustrate the efficiency of data labelling in FireRisk as well as the sensitivity of various benchmark models to the amount of labelled data.
* We gather an unlabelled dataset, UnlabelledNAIP, from the NAIP remote sensing project and utilize it to pre-train novel latent representations of DINO [5] and MAE [16]. The results of fine-tuning on FireRisk using these two representations demonstrate the potential of different self-supervised benchmarks for enhancement in fire risk assessment.

## 2 Related Work

Our work focuses on proposing a remote sensing image classification dataset for fire risk assessment. In this context, we first review the related remote sensing datasets. In addition, 4 advanced supervised and self-supervised learning benchmarks are implemented to evaluate our FireRisk; hence, related computer vision approaches are also presented in this section.

Figure 1: This overview shows sample images of all 7 labels in our FireRisk. The images measure \(270\times 270\) pixels, with a total of 91,872 images.

### Remote Sensing Classification Datasets

Using satellites or aeroplanes, remote sensing is the technique of detecting and monitoring the physical features of a region by measuring its optical radiation from a distance. These remote sensing images, collected using special cameras, can help researchers 'sense' what is occurring on the earth [3]. Since remote sensing images contain some implicit geographical features, they are frequently utilized to address practical classification tasks, such as land-use classification [6], climatic zone classification [24], and tree species classification [11]. Thus, several labelled remote sensing datasets, including BigEarthNet [33], EuroSAT [18], AID [38], So2Sat [40], and UC Merced Land Use [39], have emerged to train deep learning models in computer vision to solve various classification tasks of remote sensing images. The majority of the existing remote sensing image datasets for wildfires focus on identifying fires that have occurred from remote sensing images; for example, [7] created a remote sensing dataset for fires based on Landsat-8 images and used the Convolutional Neural Network (CNN) based U-Net to detect active fires on the surface. Regarding fire risk assessment tasks, most of the explored approaches [19, 22, 31] emphasise the combination of remote sensing images and geographic information; there are no publicly available datasets for fire risk classification using only remote sensing images.
### Computer Vision Models

Based on whether labelled training data is required, deep learning models in computer vision can be broadly classified into supervised and unsupervised models, where self-supervised learning is also a sort of unsupervised learning.

**Supervised Learning.** The key to supervised learning is the extraction of features from image information, which can be divided into two main schools of thought as follows:

**a) CNNs Architecture.** Convolutional structures have been introduced into computer vision for image classification since they use dimensionality reduction to lower the number of parameters while preserving the relative locations of pixels. [23] first introduced AlexNet, a deep CNN architecture, which is more effective on ImageNet than traditional handcrafted feature extraction. Theoretically, boosting the CNN's depth can improve its performance. This is because deep networks incorporate features and classifiers of multiple dimensions in a multi-layered end-to-end manner, and the deeper the network structure, the richer the level of features. Nevertheless, increasing the network's depth may lead to problems such as vanishing gradients, exploding gradients and network degradation. To address these problems, ResNet was proposed by [17] to enable the training of deeper networks by introducing residual blocks.

**b) Transformers Architecture.** In recent years, as self-attention-based mechanisms have become more prevalent in natural language processing [35], numerous Transformer-based systems have created waves in computer vision because of the efficiency and scalability of Transformers. Some studies, such as Detection Transformer (DETR) [4], attempt to merge CNN with Transformer, while others have completely replaced the CNN framework with Transformer, such as Vision Transformer (ViT) [10], which utilizes the encoder structure composed of multi-headed self-attention blocks in the Transformer to extract features and then employs an MLP for image classification.

**Self-supervised Learning.** As the complexity of deep learning models increases, data hunger has been a hurdle for supervised models to overcome, while self-supervised learning methods can automatically learn latent representations from unlabelled data. In computer vision, it can be categorized into predictive, generative and contrastive methods, the latter two of which are explained in detail below:

**a) Generative Methods.** Generative methods learn latent representations for self-supervised learning by reconstructing or generating input data [37]. [2] initially presented the notion of the autoencoder, which employs an encoder to map the input to a latent vector and then a decoder to reconstruct the input from the vector. To improve the robustness to noise in the data, [36] suggested a denoising autoencoder. Inspired by the outstanding performance of ViT [10] for feature extraction, MAE [16] reconstructs randomly masked patches using the structure of a denoising autoencoder.

**b) Contrastive Methods.** Contrastive methods train the model by comparing inputs that are semantically equivalent, such as two augmented views of the same image. Yet, overemphasis on the similarity between input pairs may result in model collapse. The most intuitive approach is to introduce negative samples [30, 34], while other approaches use teacher-student networks to transfer knowledge between the two networks without negative samples.
[14] proposed BYOL, the first method for self-supervised learning based on knowledge distillation. DINO [5] further explored the introduction of a ViT backbone into knowledge distillation: the authors built a teacher network with a dynamic encoder and avoided model collapse by centering the output of the teacher network. Although many advanced supervised and self-supervised deep learning models achieve strong performance, they are difficult to train on a relatively small amount of training data due to their large number of parameters. A common solution is to use transfer learning, which pre-trains a model on a generic or well-studied dataset and then transfers its acquired knowledge to a new downstream task. In the field of remote sensing imagery, transfer learning is applicable to both supervised and self-supervised models, since models pre-trained on generic, well-studied datasets can be fine-tuned on specific remote sensing datasets.

## 3 Datasets

Our work focuses on constructing a remote sensing dataset, FireRisk, for fire risk assessment based on the fire risk levels provided by the WHP project [9]. In addition, we supply an unlabelled dataset, UnlabelledNAIP, that contains remote sensing images for pre-training self-supervised benchmarks.

### Construction of Our FireRisk

**Extracting Labels From the WHP.** The construction of our FireRisk is inspired by the WHP [9], a raster dataset developed by the U.S. Department of Agriculture that provides a relatively authoritative geographic assessment of fire risk and the intensity of wildfires in the United States. Their model was developed using a series of geostatistical data, including spatial estimates of wildfire susceptibility and intensity generated by FSim [12], spatial fuels and vegetation data from LANDFIRE [32], and fire occurrence point locations from the FPA [12]. Based on their investigation, we utilize their classified 2020 version of the WHP raster dataset [9], containing 7 fire risk levels, from which we extract fire risk labels for the area represented by each raster cell. We download the data from the Missoula Fire Sciences Laboratory's official website 1 in .gdb format, a geodatabase format that divides the United States, including Hawaii, Alaska, and the continental United States, into grids with 270-meter sides and provides information on the fire risk level of the area represented by each grid. Since the coordinate information of the raster dataset exists implicitly, we use the geoprocessing tools in ArcGIS, a geospatial information processing program, to transform the raster dataset into point features that are exported as tabular data containing only the coordinates of each geographic grid and its corresponding label.

Footnote 1: [https://www.firelab.org/project/wildfire-hazard-potential](https://www.firelab.org/project/wildfire-hazard-potential)

Because the WHP [9] contains a large number of point features and geographically adjacent areas may have similar fire risk features, only a subset of the WHP, obtained by equally spaced sampling with an interval of 110 data points, is employed in this study (see the sketch below).

**Constructing Our Dataset Using the NAIP.** After obtaining the fire risk labels for each grid, sufficient remote sensing imagery is required in order to construct our FireRisk remote sensing dataset. With the large number of satellite image projects publicly available on the Google Earth Engine platform 2 [13], we can easily access remote sensing projects from different time periods.
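As a concrete illustration of the subsampling step just described, the short sketch below keeps every 110th row of the point table exported from ArcGIS. The file name and column names are hypothetical, since the export schema is not specified in the text.

```python
import pandas as pd

# Point table exported from ArcGIS: one row per 270 m WHP grid cell.
points = pd.read_csv("whp_2020_points.csv")          # assumed columns: lng, lat, fire_risk
subset = points.iloc[::110].reset_index(drop=True)   # equally spaced sampling, interval 110
subset.to_csv("whp_2020_points_sampled.csv", index=False)
```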
However, since each grid in the WHP raster dataset covers only a 270-meter-square area, the lower-resolution remote sensing images in commonly used satellite remote sensing projects, such as Landsat [25], MODIS [21], and Sentinel [26], do not provide sufficient geographic information within each grid area.

Footnote 2: [https://developers.google.com/earth-engine/datasets/catalog/USDA_NAIP_DOQO](https://developers.google.com/earth-engine/datasets/catalog/USDA_NAIP_DOQO)

By using an aerial platform to capture orthorectified remote sensing images of the Earth's surface, the NAIP project, presented by [27], can achieve a high spatial resolution compared with these satellite projects. Although the images are collected independently by each state and their spatial attributes may vary, all images in the current NAIP project are available at a minimum resolution of 1 meter. To optimize image quality and restrict shadow length, the sun must be 30 degrees above the horizon during image capture, and the cloud cover cannot exceed 10% per quarter of the remote sensing image patches. In addition, because the primary purpose of NAIP [27] is agricultural mapping, the images are collected during the plant growing season, with no snow or flood coverage allowed. Since there is a high overlap in coverage and resolution with the point features in the WHP dataset [9], our work utilizes NAIP [27] to collect remote sensing images.

First, the coordinates of the four vertices of each grid with a fire risk label are derived from the center coordinates of the grid in the WHP dataset [9] using the simple geographic coordinate transformation in Eq. (1):

\[\begin{aligned} Grid_{NW}&=(C_{Lng}-135\times\gamma_{1},\;C_{Lat}+135\times\gamma_{2})\\ Grid_{NE}&=(C_{Lng}+135\times\gamma_{1},\;C_{Lat}+135\times\gamma_{2})\\ Grid_{SW}&=(C_{Lng}-135\times\gamma_{1},\;C_{Lat}-135\times\gamma_{2})\\ Grid_{SE}&=(C_{Lng}+135\times\gamma_{1},\;C_{Lat}-135\times\gamma_{2})\end{aligned} \tag{1}\]

where \(Grid_{NW}\), \(Grid_{NE}\), \(Grid_{SW}\) and \(Grid_{SE}\) are the geographic coordinates of the four vertices of the square grid (the subscripts denote the position of each vertex), \(C_{Lng}\) and \(C_{Lat}\) are the longitude and latitude of the grid centroid, and \(\gamma_{1}\) and \(\gamma_{2}\) are the conversion factors between meters and degrees of longitude and latitude, respectively.

Subsequently, we access the remote sensing images in the NAIP [27] dataset based on the grid coordinates using Google Earth Engine [13]. In the spectral configuration, only the R, G and B channels of each remote sensing image are extracted, with each channel represented by an 8-bit unsigned integer. In the temporal configuration, since we use the 2020 version of the WHP data [9] and the NAIP project [27] inherently has a relatively long revisit period, we adjust the time span from January 1, 2019 to December 1, 2020 in order to obtain valid remote sensing images. In theory, the remote sensing images obtained from the NAIP project [27] should be square. However, in practice, the majority of remote sensing images are near-square rectangles due to slight angles between the aircraft sensors and the ground, and inconsistent longitude and latitude resolutions in some states. Finally, we crop the center of each remote sensing image to a square image of \(320\times 320\) pixels.
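The coordinate transformation of Eq. (1) and the subsequent NAIP query can be sketched as follows. This is an illustration under our own assumptions rather than the authors' released code: in particular, the Earth Engine collection ID "USDA/NAIP/DOQQ", the function names and the export and cropping details are assumed here.

```python
import ee

ee.Initialize()

def grid_corners(c_lng, c_lat, gamma1, gamma2, half=135.0):
    """Vertices of the 270 m x 270 m cell centred at (c_lng, c_lat), following Eq. (1)."""
    return [
        [c_lng - half * gamma1, c_lat + half * gamma2],  # NW
        [c_lng + half * gamma1, c_lat + half * gamma2],  # NE
        [c_lng + half * gamma1, c_lat - half * gamma2],  # SE
        [c_lng - half * gamma1, c_lat - half * gamma2],  # SW
    ]

def naip_rgb_patch(c_lng, c_lat, gamma1, gamma2):
    """NAIP mosaic (R, G, B bands, 2019-2020) clipped to one WHP grid cell."""
    region = ee.Geometry.Polygon([grid_corners(c_lng, c_lat, gamma1, gamma2)])
    collection = (ee.ImageCollection("USDA/NAIP/DOQQ")
                  .filterDate("2019-01-01", "2020-12-01")
                  .filterBounds(region)
                  .select(["R", "G", "B"]))
    # The exported patch would then be center-cropped to 320 x 320 pixels offline.
    return collection.mosaic().clip(region)
```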
Our work, as seen in Figure 2, follows this workflow to construct a remote sensing dataset for fire risk assessment. Thus, the fire risk label of a grid derived from the WHP raster dataset [9] can be linked by geographic coordinates to the remote sensing image in the NAIP remote sensing project [27], resulting in a mapping between the remote sensing image and its corresponding fire risk label. As illustrated in Figure 3, our FireRisk contains 7 classes according to the fire risk levels. In the following experiments, our FireRisk is divided into a training set and a validation set in the ratio of approximately 10:3.

### Our UnlabelledNAIP for Pre-training

In addition, in order to improve the applicability of our self-supervised learning benchmarks to our FireRisk, we gather 199 976 NAIP [27] remote sensing images from Google Earth Engine [13] for pre-training to generate latent representations. This unlabelled dataset, UnlabelledNAIP, has the same image size and time period filtering as our FireRisk to allow the self-supervised models to learn as many features as possible from the same style of remote sensing imagery.

## 4 Benchmarks

To evaluate the benchmark performance of our FireRisk for the fire risk assessment task, we validate it in two dimensions: supervised and self-supervised learning. Figure 4 describes the overall workflow for assessing fire risk in our work, where for the supervised learning benchmark we use the methods of ResNet [17] and ViT [10], while for the self-supervised learning, we select two representative self-supervised models for their performance, namely DINO [5] and MAE [16].

### Supervised Learning

Our dataset provides a remote sensing image classification task labelled with fire risk levels, which, like other classification tasks in the field of computer vision, can usually be addressed with labelled data using supervised learning. Digital images often consist of a vast number of pixels, and each pixel comprises several channels, making it difficult to generalize the original features when dealing with a multitude of parameters in an image task. One option for extracting image features is to employ a convolutional structure. Considering that ResNet [17] is widely used in deep learning due to its effectiveness in mitigating network degradation, we choose it as one of our supervised benchmarks. In addition, other studies introduce Transformer structures to overcome the CNN's lack of a holistic grasp of the input data caused by its inductive bias, making it easier to extract long-distance spatial dependencies in the global data; the most famous of these, ViT [10], represents another direction of development in computer vision. Thus, we use it as our second supervised benchmark. Due to the limited size of our FireRisk training set, it is difficult to support the training of a large number of model parameters. Influenced by the concept of transfer learning, we transfer the parameters trained on large real-world image datasets to our remote sensing dataset. As shown in the supervised learning workflow in Figure 4, we fine-tune the ResNet [17] and ViT [10], pre-trained on ImageNet [8], on our FireRisk, respectively, to produce a supervised benchmark performance for fire risk assessment.

Figure 3: Data distribution of the training and validation sets of our FireRisk on the labels of 7 fire risk levels.

Figure 2: The diagram illustrates the workflow of establishing the FireRisk. To build our FireRisk, the center coordinates of a square area of \(270m\times 270m\) and the fire risk labels are collected from the WHP raster dataset [9], followed by obtaining the remote sensing images from the NAIP project [27] based on the coordinates.
### Self-supervised Learning

In contrast, self-supervised learning, as a form of representation learning, can learn visual features from a large number of unlabelled images without the involvement of labelled data. To determine the benchmark performance of self-supervised learning on our dataset, we investigate the performance of two representative self-supervised models with the ViT architecture as the backbone, the contrastive model DINO [5] based on knowledge distillation and the generative model MAE [16] based on the autoencoder, on our FireRisk. For the self-supervised learning process in Figure 4, we adopt two processing schemes to produce the self-supervised benchmark performance:

**Pre-trained on ImageNet [8].** Using the latent representation obtained by pre-training on ImageNet [8] and then fine-tuning it on the labelled FireRisk, by which the generalized knowledge in the latent representations can be transferred to the remote sensing imagery domain;

**Pre-trained on Our UnlabelledNAIP.** First pre-training on our UnlabelledNAIP, the unlabelled remote sensing dataset we collect, to obtain a latent representation based on remote sensing imagery, and then fine-tuning it on our labelled FireRisk.

Compared to the former, the latter processing scheme has an additional step of constructing our own latent representations, which reflect features of remote sensing imagery more similar to FireRisk. In generating the latent representations, we use ViT-B/16 as the backbone architecture for MAE [16], pre-trained for 80 epochs on our UnlabelledNAIP, while we train DINO [5] for 100 epochs because of the slow convergence of DINO on our pre-training dataset.

## 5 Experiments and Evaluation

We apply the pre-trained weights of different supervised and self-supervised models on our constructed FireRisk and its subsets, and we examine the performance differences between these various benchmark models in two main ways: one is to validate the efficiency of labelling in FireRisk by evaluating their robustness to datasets of varying sizes, comparing their performance on different subsets; the other is to compare the impact of the latent representations generated on different pre-training datasets on the self-supervised models for fire risk assessment. Table 1 shows the results of this series of experiments.

**Evaluation Metrics.** As in most multiclass classification tasks, we mainly use accuracy and macro F1-score as evaluation metrics. Since the macro F1-score is strongly influenced by classes with fewer samples, it is employed as a complement to accuracy so that the contributions of _High_ and _Very High_, which are smaller in number but more significant in practice, are not ignored.

**Experimental Configurations.** In our implementation, for the supervised models, we use ViT-B/16 [10] and ResNet-50 [17] pre-trained on ImageNet1k [8], respectively, to fine-tune the models on FireRisk. For the self-supervised architectures, MAE [16] and DINO [5], we use ViT-B/16 [10] as the backbone and fine-tune on FireRisk using latent representations pre-trained on ImageNet1k [8] and our UnlabelledNAIP, respectively. We use 2 GPUs and a batch size per GPU of 16 for training, and take the results of the highest accuracy out of 100 epochs.

Figure 4: Workflow of our supervised and self-supervised benchmarks. The supervised process represented by the blue arrow trains the models on our FireRisk using pre-trained weights. The self-supervised process denoted by the red arrow is split into two alternatives: fine-tuning the models on our FireRisk using existing latent representations or using latent representations pre-trained on our UnlabelledNAIP.
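A minimal sketch of the fine-tuning setup described above is given below, assuming the timm library for the ViT-B/16 backbone. The optimizer, learning rate and checkpoint file name are illustrative assumptions; only the backbone, the seven fire risk classes and the batch size of 16 per GPU follow the configuration stated in the text.

```python
import timm
import torch

NUM_CLASSES = 7  # the seven FireRisk fire risk levels

# Scheme 1: latent representation pre-trained on ImageNet1k.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=NUM_CLASSES)

# Scheme 2: load encoder weights pre-trained on UnlabelledNAIP (e.g. by MAE or DINO).
# The checkpoint path is hypothetical; the new classification head stays randomly initialised.
# state = torch.load("unlabellednaip_vitb16_encoder.pth", map_location="cpu")
# model.load_state_dict(state, strict=False)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step; images is a (16, 3, 224, 224) batch per GPU."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```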
### Overall Analysis

Table 1 shows that the performance of the self-supervised benchmarks on FireRisk exceeds that of the supervised benchmarks in general, while the performance of MAE [16] is significantly higher than that of the other models. This is because the performance of supervised models is overly dependent on label information, while in the image domain, images contain much richer internal information than labels provide. For fire risk assessment tasks, the self-supervised learning approach is better at extracting implicit features in remote sensing images. Our optimal baseline model is the MAE [16] pre-trained on ImageNet [8], whose confusion matrix is shown in Figure 5. The confusion matrices of the remaining benchmarks have similar features. Taking the confusion matrix of MAE as an example, we can see that, in terms of per-class classification accuracy, the highest is _Water_, which reaches 87.50%, while _Non-burnable_ and _Very Low_ also have high accuracy, but _Low_, _Moderate_ and _High_ have lower accuracy. For the label _Low_, most of the misclassifications involve the neighbouring class _Very Low_, which demonstrates that our FireRisk is prone to confusion between these two labels. The same problem of similar-feature ambiguity exists for _Moderate_ and _High_. In practical fire risk assessment tasks, one usually pays more attention to the recall of high fire risk, because one needs to screen out the high-risk areas in remote sensing images as accurately as possible in order to prevent wildfires. For example, in Figure 5, of all remote sensing images labelled _Very High_ in the validation set, the MAE [16] predicts only 713 images correctly, which accounts for 49.58%. However, considering that the images labelled _High_ and _Very High_ usually have some similar features, people also focus on the areas with the fire risk level of _High_ in practice. If the misclassification of _Very High_ as _High_ is included, the 'recall' of _Very High_ can reach 62.24%. Through this analysis, our FireRisk can reflect the requirements of practical fire risk assessment to some extent.

### Label Efficiency Evaluation

In this experiment, in order to investigate label efficiency, we focus on the robustness of these benchmark models on FireRisk, constructed with our processing method, when the amount of labelled data is reduced. We sample the training set at 50% and 20% of the full FireRisk training set to obtain two subsets, 50% FireRisk and 20% FireRisk, respectively. For the supervised and self-supervised benchmarks we investigate, we apply the same model configurations to fine-tune on this series of datasets and then evaluate on the same validation set.
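The evaluation quantities used in this analysis (overall accuracy, macro F1-score, and the per-class and relaxed recalls read off the confusion matrix) can be computed as in the following sketch. The class ordering in CLASSES and the variable names are our assumptions; y_true and y_pred stand for validation labels and model predictions encoded as class indices.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

CLASSES = ["Very Low", "Low", "Moderate", "High", "Very High", "Non-burnable", "Water"]

def evaluate(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    cm = confusion_matrix(y_true, y_pred, labels=range(len(CLASSES)))
    recall = cm.diagonal() / cm.sum(axis=1)      # per-class recall (rows are true labels)

    # 'Relaxed' recall of Very High, also counting predictions of High as acceptable,
    # mirroring the confusion-matrix discussion above.
    vh, hi = CLASSES.index("Very High"), CLASSES.index("High")
    relaxed_vh = (cm[vh, vh] + cm[vh, hi]) / cm[vh].sum()
    return acc, macro_f1, recall, relaxed_vh
```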
\begin{table} \begin{tabular}{l l l l l} \hline \hline Dataset & Model & pre-trained & Acc. & F1. \\ \hline FireRisk & ResNet & ImageNet1k & 63.20 & 52.56 \\ & ViT & ImageNet1k & 63.31 & 52.18 \\ & DINO & ImageNet1k & 63.36 & 52.60 \\ & DINO & UnlabelledNAIP & 63.44 & 52.37 \\ \cline{2-5} & MAE & ImageNet1k & **65.29** & **55.49** \\ \cline{2-5} & MAE & UnlabelledNAIP & 63.54 & 52.04 \\ 50\% & ResNet & ImageNet1k & 62.09 & 50.27 \\ FireRisk & ViT & ImageNet1k & 62.22 & 50.07 \\ & DINO & ImageNet1k & 61.75 & 51.21 \\ & DINO & UnlabelledNAIP & 62.49 & 51.35 \\ & MAE & ImageNet1k & 63.70 & 50.23 \\ \cline{2-5} & MAE & UnlabelledNAIP & 62.68 & 52.05 \\ 20\% & ResNet & ImageNet1k & 61.37 & 49.53 \\ FireRisk & ViT & ImageNet1k & 61.43 & 48.80 \\ & DINO & ImageNet1k & 60.95 & 50.72 \\ & DINO & UnlabelledNAIP & 61.96 & 50.83 \\ & MAE & ImageNet1k & 62.51 & 51.13 \\ \cline{2-5} & MAE & UnlabelledNAIP & 61.80 & 50.07 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of several supervised models and self-supervised models on FireRisk with different sizes. In our implementation, our supervised benchmarks use ResNet-50 and ViT-B/16, and the self-supervised models, DINO and MAE, both use ViT-B/16 as the backbone.

Figure 5: The confusion matrix of the optimal benchmark model, the MAE pre-trained on ImageNet.

Integrating the results of all models pre-trained on ImageNet1k [8] in Table 1 according to the performance of the same model on different dataset sizes, we obtain Figure 6(a). It can be seen that the accuracy of all 4 benchmarks increases as the size of the dataset increases. The two self-supervised models, DINO [5] and MAE [16], show a stronger increasing trend and are more sensitive to the size of the training data than the supervised benchmarks. This indicates that more training data is required to fit the latent representations generated by self-supervised learning to the fire risk features of remote sensing images.

### Fine-tuning of Self-supervised Representations

For the self-supervised benchmarks, we compare the differences in the latent representations of these two benchmark models, DINO [5] and MAE [16], for feature extraction of remote sensing images in FireRisk. Compared to ImageNet [8], UnlabelledNAIP is smaller in size, but its remote sensing images are more similar to those in FireRisk. As shown in Figure 6(b), for DINO [5], the model pre-trained on UnlabelledNAIP is better than that pre-trained on ImageNet, while for MAE [16], the result is the opposite. Thus, DINO [5] has a higher pre-training efficiency to achieve better performance on our FireRisk, while MAE [16] usually requires more pre-training data to achieve its optimal results.

## 6 Conclusion

In this paper, to demonstrate the feasibility of using only remote sensing images as fire risk features, we develop a novel dataset, FireRisk, to assess fire risk. To construct this remote sensing dataset, we obtain the fire risk label and the corresponding geographic coordinates for each 270 m square area from the WHP [9] raster dataset, and then gather the remote sensing image at each geographic location using the NAIP remote sensing project. The proposed dataset contains 91 872 labelled remote sensing images, with 70 331 serving as the training set and 21 541 as the validation set.
To investigate the benchmark performance of our FireRisk for supervised and self-supervised learning, we employ the advanced CNN-based ResNet [17] and attention-mechanism-based ViT [10] as our supervised benchmarks, and DINO [5] and MAE [16] as our self-supervised benchmarks, respectively. We also explore the potential of the self-supervised benchmark model by generating new latent representations on UnlabelledNAIP, an unlabelled image dataset we gather. Furthermore, we validate the efficiency of the labels by analyzing the differences in the robustness of our benchmarks to variations in the amount of training data using sub-datasets generated by randomly sampling 20% and 50% from the training set. On our FireRisk, the maximum accuracy for the supervised benchmarks can reach 63.31%, while for the self-supervised benchmarks, the MAE [16] pre-trained on ImageNet1k [8] can achieve the optimal accuracy of all models at 65.29%. It is demonstrated that our self-supervised learning benchmarks outperform supervised learning on FireRisk, although their improvement on less training data is limited. Our new pre-trained latent representations are also complementary to the self-supervised representations, and our representation of DINO [5] has a considerable increase, which can reach 63.44% compared to 63.36% for the DINO pre-trained on ImageNet [8]. The FireRisk proposed in this work confirms the validity of using only remote sensing data for fire assessment, and has a simpler implementation process and better generalization than traditional fire assessment approaches. Figure 6: Accuracy of different benchmarks and different self-supervised latent representations on different sizes of FireRisk.. **Acknowledgements.** This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200.
2310.00662
Classification of High-Ordered Topological Nodes towards Moiré Flat Bands in Twisted Bilayers
At magic twisted angles, Dirac cones in twisted bilayer graphene (TBG) can evolve into flat bands, serving as a critical playground for the study of strongly correlated physics. When chiral symmetry is introduced, rigorous mathematical proof confirms that the flat bands are locked at zero energy in the entire Moir\'e Brillouin zone (BZ). Yet, TBG is not the sole platform that exhibits this absolute band flatness. Central to this flatness phenomenon are topological nodes and their specific locations in the BZ. In this study, considering twisted bilayer systems that preserve chiral symmetry, we classify various ordered topological nodes in base layers and all possible node locations across different BZs. Specifically, we constrain the node locations to rotational centers, such as {\Gamma} and M points, to ensure the interlayer coupling retains equal strength in all directions. Using this classification as a foundation, we systematically identify the conditions under which Moir\'e flat bands emerge. Additionally, through the extension of holomorphic functions, we provide proof that flat bands are locked at zero energy, shedding light on the origin of the band flatness. Remarkably, beyond Dirac cones, numerous twisted bilayer nodal platforms can host flat bands with a degeneracy number of more than two, such as four-fold, six-fold, and eight-fold. This multiplicity of degeneracy in flat bands might unveil more complex and enriched correlation physics.
Fan Cui, Congcong Le, Qiang Zhang, Xianxin Wu, Jiangping Hu, Ching-Kai Chiu
2023-10-01T13:18:31Z
http://arxiv.org/abs/2310.00662v2
# Classification of High-Ordered Topological Nodes towards Moire Flat Bands in Twisted Bilayers ###### Abstract At magic twisted angles, Dirac cones in twisted bilayer graphene (TBG) can evolve into flat bands, serving as a critical playground for the study of strongly correlated physics. When chiral symmetry is introduced, rigorous mathematical proof confirms that the flat bands are locked at zero energy in the entire Moire Brillouin zone (BZ). Yet, TBG is not the sole platform that exhibits this absolute band flatness. Central to this flatness phenomenon are topological nodes and their specific locations in the BZ. In this study, considering twisted bilayer systems that preserve chiral symmetry, we classify various ordered topological nodes in base layers and all possible node locations across different BZs. Specifically, we constrain the node locations to rotational centers, such as \(\Gamma\) and M points, to ensure the interlayer coupling retains equal strength in all directions. Using this classification as a foundation, we systematically identify the conditions under which Moire flat bands emerge. Additionally, through the extension of holomorphic functions, we provide proof that flat bands are locked at zero energy, shedding light on the origin of the band flatness. Remarkably, beyond Dirac cones, numerous twisted bilayer nodal platforms can host flat bands with a degeneracy number of more than two, such as four-fold, six-fold, and eight-fold. This multiplicity of degeneracy in flat bands might unveil more complex and enriched correlation physics. ## I Introduction Twisted bilayer graphene (TBG) is a well-recognized system to manifest Moire flat bands (MFBs) [1; 2; 3], where electron-electron correlations are strongly enhanced. Consequently, TBG has become a compelling platform to explore diverse strong correlation phenomena, such as Mott insulating states [1; 2; 3], unconventional high-temperature superconductivity [3; 4; 5; 6; 7; 8], and quantum anomalous Hall effects [9; 10; 11; 12]. The origin of the MFBs is closely related to the Dirac cones in graphene. Graphene hosts two Dirac cones at the corners of the Brillouin zone (BZ). In the twisted bilayer platform, the Dirac cones evolve to flat bands through the interlayer coupling at magic angles, exhibiting an intriguing characteristic of two-fold degeneracy per valley per spin. This connection has been further demonstrated in other similar systems with Dirac cones, such as twisted double bilayer graphene [13; 14; 15; 16], twisted trilayer graphene [17; 18; 19; 20; 21], monolayer graphene with specific periodic potential [22], and twisted few-layer graphite [23; 24], in which the similar type of two-fold degenerate MFBs from the Dirac cones can also be found. In contrast, literature evidence suggests that MFBs cannot originate from the twisted surfaces of 3D topological insulators possessing Dirac nodes [25; 26]. Extending beyond the Dirac cone, a quadratic node at the M point of the square lattice can evolve into two-fold degenerate MFBs within the twisted bilayer framework [27]. This paves the way to seek alternative twisted bilayer platforms exhibiting MFBs. On the other hand, the introduction of a spatially alternating magnetic field in TBG can lead to four-fold degenerate MFBs per valley per spin [28], thereby leading to the possibility of realizing higher degeneracy of MFBs. 
This manuscript aims to identify new systems that host MFBs, explore their degeneracy greater than two, and confirm the state of absolute flatness throughout the entire Moire BZ. In TBG, due to the preservation of chiral symmetry, it has been proved by holomorphic function extension [29] that the two-fold degenerate flat bands are absolutely flat at zero energy across the entire Moire BZ. Inspired by the band flatness deriving from Dirac and quadratic nodes, we naturally extrapolate topological nodes to \(n-\)ordered topological nodes protected by chiral symmetry at zero energy, such as quadratic and cubic nodes. This approach is warranted as chiral symmetry prohibits the interlayer coupling from destroying the two topological nodes. An additional requisite for TBG band flatness is the location of the Dirac cones at a \(C_{3}\) rotation center, which ensures the interlayer momentum hopping between the two Dirac cones respects the \(C_{3}\) rotation symmetry. In this context, we propose the extension of this rotation center condition to \(C_{n}\), where \(n\geq 3\); here, the topological nodes are positioned at the \(C_{n}\) rotation center, and the interlayer coupling retains identical strength under the rotation. Consequently, we can classify four possible locales for an n-ordered node -- 1. \(\Gamma\) in a square lattice 2. \(M\) in a square lattice 3. \(\Gamma\) in a hexagonal lattice 4. \(\text{K},~{}\text{K}^{\prime}\) in a hexagonal lattice. Using these topological \(n\)-ordered semimetals as a base layer, we pair two such identical layers with a minor twist to evaluate the band flatness within low-energy physics. This manuscript primarily delves into the intricacies of twisted bilayer physics for \(n-\)ordered topological nodes situated at \(\Gamma\) in both square and hexagonal lattices. Furthermore, we impose chiral symmetry to protect the stability of the topological nodes and lock MFBs at zero energy. These proposed twisted bilayer platforms can be realized in non-trivial 3D time-reversal symmetric topological superconductors in class DIII and CI [30]. The combination of particle-hole symmetry and time-reversal symmetry directly leads to chiral symmetry. With chiral symmetry, the 3D winding number \(W_{\rm 3D}\) as a topological invariant correspond to the number (order) of the topological nodes on the surface of the superconductor. An \(n\)-ordered node at \(\Gamma/{\rm M}\) on the surface BZ is one of the possible cases. Class DIII limits \(n\) to be odd, while class CI limits \(n\) to be even. To realize the twisted bilayer, we consider two topological superconductors with \(W_{\rm 3D}=n,\ -n\), and their two surfaces in contact with each other with a twist, as shown in Fig. 1. Since these two topological superconductors possess the opposite winding numbers, the two topological nodes are robust against the interlayer coupling. The remainder of this paper is organized as follows. In Sec. II, we offer a concise overview of the criteria for band flatness, omitting detailed derivations for the brevity. In Sec. III, we develop a generalized Hamiltonian for twisted bilayer platforms hosting generic topological nodes. We also introduce the inversion of the Birman-Schwinger operator, whose spectrum determines the magic angles. Subsequently, we provide explicit Hamiltonian expressions for both square and hexagonal lattices when a topological node is located at the \(\Gamma\) point. In Sec. 
IV, we calculate the spectrum of the inverted Birman-Schwinger operator to identify these magic angles, and illustrate the energy spectrum at these angles to showcase the emergence of MFBs. In Sec. V, we categorize TBSs based on the order and locations of their topological nodes, listing those that do and do not host MFBs, as well as their corresponding degeneracy numbers. In Sec. VI, we construct the wavefunctions of these flat bands using holomorphic functions, thereby proving that these bands remain flat throughout the entire Moire BZ. Lastly, we conclude with a summary in Sec. VII. Some technical details have been relegated to supplementary materials. ## II Flatness conditions at first glance Before delving into the technical derivations, we provide an overview of the main findings identifying the classified conditions for band flatness. As indicated by a 'Y' in Table 1, specific orders \(n\) and locations of topological nodes can give rise to MFBs. While the form of interlayer coupling can influence twisted bilayer physics, we simplify our model by imposing chiral symmetry and assuming that the strength of the interlayer coupling decays exponentially with distance. Moreover, we assume that the directional dependence of the interlayer coupling directly inherits characteristics from the \(n\)-th ordered node. These assumptions guarantee the conditions of the band flatness is independent of the detailed form of the interlayer coupling. To contextualize our findings, we compare Table 1 with existing literature and highlight our novel contributions. Firstly, we confirm that Dirac nodes located at only \({\rm K},\ {\rm K^{\prime}}\) can result in flat bands, consistent with the guiding principles established for Dirac fermions [25]. Secondly, we corroborate previous work [27] showing that a twisted bilayer with a quadratic node at the \(\rm M\) point can also produce flat bands. Beyond these scenarios, all cases marked with a 'Y' in Table 1 represent new results. Specifically, in the square lattice, quadratic and cubic nodes at either \(\Gamma\) or \(\rm M\) can induce flat bands. In the hexagonal lattice, nodes with orders ranging from quadratic to quintic at the \(\Gamma\) point can also lead to flat bands, while at \({\rm K},\ {\rm K^{\prime}}\), only Dirac and quadratic nodes yield flat bands. In the remainder of the manuscript, we begin by constructing a Hamiltonian for TBSs with \(n\)-th ordered nodes and progressively show the emergence of Moire band flatness. ## III General method ### General Hamiltonian for TBSs We investigate twisted bilayer systems (TBSs) composed of two monolayers with a slightly twisted angle on square and Figure 1: Two topological superconductors in contact with each other with a twist form a twisted bilayer platform in the interface of the surfaces. Due to the non-zero 3D winding number \(W_{\rm 3D}\), a \(n\)-th ordered topological node emerges on the surface. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Position Order & n=1 & n=2 & n=3 & n=4 & n=5 \\ \hline Square & \(\mathbf{\Gamma}\) & N & Y & Y & N & N \\ \cline{2-7} Lattice & \(\mathbf{\rm M}\) & N & Y & Y & N & N \\ \hline Hexagonal & \(\mathbf{\Gamma}\) & N & Y & Y & Y & Y \\ \cline{2-7} Lattice & \(\mathbf{\rm K}/{\rm K^{\prime}}\) & Y & Y & N & N & N \\ \hline \end{tabular} \end{table} Table 1: The emergence of flat bands is determined by the conditions of the topological nodes, including the order from linear to quintc and the node locations. 
Label ’Y/N’ indicates the presence/absence of flat bands. hexagonal lattices. Consider the monolayers possess nodes of \(n\)-order at \(C_{n}\) rotation centers \(\mathbf{H}\) (\(n>2\)), including \(\Gamma,\,\mathbf{M}\) in the square lattice and \(\Gamma,\mathrm{K},\mathrm{K}^{\prime}\) in the hexagonal lattice. The low-energy Hamiltonians near \(\mathbf{H}\) can be expressed as: \[h(\mathbf{k})= v_{0}\begin{pmatrix}0&(k_{x}-ik_{y})^{n}\\ (k_{x}+ik_{y})^{n}&0\end{pmatrix}, \tag{1}\] where \(\mathbf{k}\) is a small quantity expanded from the rotation center \(\mathbf{H}\), \(v_{0}\) is a scaling parameter, and \(n=1,2,3,\cdots\) for linear, quadratic, cubic or higher order nodes respectively. Different from TBG, the index of this \(2\times 2\) node Hamiltonian indicates internal degree freedom, such as particle-hole, spin, orbitals or mixture [31]. On the contrary, the Dirac Hamiltonian in graphene is described by spatial dependent A and B bases. The general Hamiltonian for the TBS can be constructed as \[\hat{H}=\hat{H}^{t}+\hat{H}^{b}+\hat{H}^{\perp}. \tag{2}\] Here, \(\hat{H}^{t/b}\) represents the monolayer Hamiltonian for the top/bottom layer (the rotation of the monolayer Hamiltonian at a small twist angle is neglected for convenience), and \(\hat{H}^{\perp}\) is the interlayer coupling term. The monolayer Hamiltonian with the \(n\)-ordered node can be written as: \[\hat{H}^{l}=\sum_{\mathbf{k}^{l},\alpha,\beta}\hat{\psi}_{\mathbf{k}^{l}, \alpha}^{\dagger}h^{\alpha\beta}(\mathbf{k}^{l})\hat{\psi}_{\mathbf{k}^{l}, \beta}, \tag{3}\] where the subscript \(l=t/b\) denotes the top/bottom layer, \(\alpha/\beta\) indicates the internal index. The interlayer coupling term is given by: \[\hat{H}^{\perp}=\sum_{\mathbf{k}^{l},\mathbf{k}^{b}}^{\alpha,\beta}\hat{\psi} _{\mathbf{k}^{l},\alpha}^{\dagger}T_{\mathbf{k}^{l},\mathbf{k}^{b}}^{\alpha, \beta}\hat{\psi}_{\mathbf{k}^{b},\beta}+h.c.\;. \tag{4}\] The Bloch states for the different layers are defined as: \[|\mathbf{k}^{l},\alpha\rangle=\frac{1}{\sqrt{N}}\sum_{\mathbf{R}^{l}}e^{i( \mathbf{H}^{t}+\mathbf{k}^{l})\cdot\mathbf{R}^{l}}\left|\mathbf{R}^{l},\alpha \right\rangle. \tag{5}\] Here \(N\) denotes the total number of lattice sites, \(\mathbf{H}^{l}\) indicates the rotation center located in layer \(l\), and \(\mathbf{R}^{l}\) is the monolayer lattice vectors. Then the matrix element \(T_{\mathbf{k}^{l},\mathbf{k}^{b}}^{\alpha,\beta}=\langle\mathbf{k}^{t},\alpha |\hat{H}|\mathbf{k}^{b},\beta\rangle\) in Eq. 4 can be written as: \[T_{\mathbf{k}^{l},\mathbf{k}^{b}}^{\alpha,\beta}=\sum_{\mathbf{b}^{l}, \mathbf{b}^{b}}\frac{\tilde{t}_{\alpha\beta}(\mathbf{H}^{t}+\mathbf{k}^{t}+ \mathbf{b}^{t})}{A_{\text{u.c.}}}\delta_{\mathbf{H}^{t}+\mathbf{k}^{t}+ \mathbf{b}^{t},\mathbf{H}^{b}+\mathbf{k}^{b}+\mathbf{b}^{b}}, \tag{6}\] which stems from the Fourier transformation of the real space interlayer hopping \(\langle\mathbf{R}^{t},\alpha|\hat{H}|\mathbf{R}^{b},\beta\rangle\) (see the detailed derivation in the supplementary materials Sec.I). The explicit expression, which depends on the form of the node, will be discussed in Sec. III.3 and Sec. III.4. Here, \(\mathbf{b}^{l}\) indicates the \(l\) monolayer reciprocal lattice vector, \(A_{\text{u.c.}}\) indicates the unit cell area, and \(\tilde{t}_{\alpha\beta}(\mathbf{H}^{t}+\mathbf{k}^{t}+\mathbf{b}^{t})\) indicates the Fourier transform coefficient of the real-space interlayer hopping. The \(\delta\) function in Eq. 
6 imposes the constraint of the hopping between the two momentum states \[\mathbf{k}^{t}=\mathbf{k}^{b}+(\mathbf{H}^{b}+\mathbf{b}^{b})-(\mathbf{H}^{t} +\mathbf{b}^{t}). \tag{7}\] Therefore, this momentum hopping constraint depends on the location of the rotation center \(\mathbf{H}^{l}\), where the topological node is located. In the main text, we focus on models with the node of different order located at \(\Gamma\) in the square and hexagonal lattices, then we have \(\mathbf{H}^{l}=\mathbf{\Gamma}^{l}=\mathbf{0}\), which gives \[\mathbf{k}^{t}=\mathbf{k}^{b}+\mathbf{b}^{b}-\mathbf{b}^{t}. \tag{8}\] We leave the discussions of M point in the square lattice and \(\mathrm{K},\,\,\mathrm{K}^{\prime}\) points in the hexagonal lattice in the supplementary materials Sec.II. As we focus on the low energy physics at a small twisted angle, we take an approximation by considering the hopping distance \(|\mathbf{k}^{t}-\mathbf{k}^{b}|\) less than the length of the primitive reciprocal lattice vector. That is, the reciprocal lattice vectors in the two layers are close \(\mathbf{b}^{t}\sim\mathbf{b}^{b}\) so that we can define \(\delta\mathbf{b}\equiv\mathbf{b}^{b}-\mathbf{b}^{t}\). Although all of the reciprocal lattice vectors should be included in the summation, we include only \(\mathbf{b}^{t},\mathbf{b}^{b}=\) primitive reciprocal lattice vectors as the nearest neighbor hopping and \(\mathbf{b}^{t},\mathbf{b}^{b}=\) combination of the two primitive reciprocal lattice vectors as the next nearest neighbor hopping. We note that as \(\mathbf{b}^{t},\mathbf{b}^{b}=0\), the strength of the on-site coupling vanishes due to the chiral symmetry (see the explanation in the next subsection). Since \(\mathbf{k}^{l}\) is small near the node, we can approximately take \(\tilde{t}_{\alpha\beta}(\mathbf{H}^{t}+\mathbf{k}^{t}+\mathbf{b}^{t})\approx \tilde{t}_{\alpha\beta}(\mathbf{H}^{t}+\mathbf{b}^{t})\). The Fourier coefficient in Eq. 6 for the models with a node at \(\Gamma\) point becomes: \[\tilde{t}_{\alpha\beta}(\mathbf{H}^{t}+\mathbf{k}^{t}+\mathbf{b}^{t})=\tilde {t}_{\alpha\beta}(\mathbf{k}^{t}+\mathbf{b}^{t})\approx\tilde{t}_{\alpha \beta}(\mathbf{b}), \tag{9}\] Figure 2: (color online) (a) and (b) The monolayer square and hexagonal BZs with the twist form Moiré BZs. The red and green hexagons represent the BZs of the bottom and top layers, each rotated by an angle \(\pm\theta/2\), and the small black square and hexagonal indicates the Moiré BZs. (c) and (d) The corresponding square and hexagonal reciprocal lattice structures. The red and green atoms indicate Dirac cones of the top and bottom layers. where the vector \(\mathbf{b}\) without a superscript denotes the untwisted reciprocal lattice vectors, we take the approximation because of the small twist angle. Then the interlayer hopping matrix can be written in an economic form \[T^{\alpha,\beta}_{\mathbf{k}^{\prime},\mathbf{k}^{\prime}}=T^{\alpha\beta}_{ \mathbf{k}^{\prime}+\delta\mathbf{b},\mathbf{k}^{\prime}}=\frac{\tilde{t}( \mathbf{b})}{A_{u.c.}}\equiv T^{\alpha\beta}(\delta\mathbf{b}). \tag{10}\] The second equality shows that under the approximation above, the interlayer hopping matrix is independent of \(\mathbf{k}^{l}\), and only depends on the choice of \(\mathbf{b}\), which gives \(\delta\mathbf{b}\) correspondingly. The repeated hopping forms a momentum space lattice, as shown in Fig. 
2 (c) and (d) and the difference \(\delta\mathbf{b}\) between the primitive reciprocal lattice vectors in the two layers forms a much smaller primitive reciprocal lattice vector of the Moire BZ, which are denoted by \(\mathbf{g}_{i}\), as shown in Fig. 2 (a) and (b). The size of the Moire BZ can be characterized by the magnitude of the vectors \(\mathbf{g}_{i}\), which is written as \[|\mathbf{g}_{i}|=2|\mathbf{b}_{i}|\sin\frac{\theta}{2}, \tag{11}\] where \(|\mathbf{b}_{i}|\) is the magnitude of the monolayer primitive reciprocal lattice vectors. For momentum \(\mathbf{k}^{l}\) outside the Moire BZ, the momentum can be expressed as \(\mathbf{k}^{l}=\mathbf{k}+\mathbf{g}\), where \(\mathbf{k}\) is defined in the Moire BZ, and \(\mathbf{g}=m\mathbf{g}_{1}+n\mathbf{g}_{2}\) is a Moire reciprocal lattice vector, as illustrated in Fig. 2 (c) and (d). Then the general Hamiltonian for the twisted bilayer system can be rewritten in the form of the BZ folding \[\begin{split}\hat{H}=&\sum_{\mathbf{k},\mathbf{g}} ^{l,\alpha,\beta}\hat{\psi}_{\mathbf{k}+\mathbf{g},\alpha}^{l\dagger}h^{ \alpha\beta}(\mathbf{k}+\mathbf{g})\hat{\psi}_{\mathbf{k}+\mathbf{g},\beta}^ {l}\\ &+\sum_{\mathbf{k},\mathbf{g},\delta\mathbf{b}}^{\alpha,\beta} \hat{\psi}_{\mathbf{k}+\mathbf{g}+\delta\mathbf{b},\alpha}^{t\dagger}T^{ \alpha\beta}(\delta\mathbf{b})\hat{\psi}_{\mathbf{k}+\mathbf{g},\beta}^{b}+h.c.\.\end{split} \tag{12}\] Furthermore, the general Hamiltonian \(\hat{H}\) can be economically written as \(\hat{H}=\sum_{\mathbf{k}}\hat{H}(\mathbf{k})\), where \(\hat{H}(\mathbf{k})\) reads \[\begin{split}\hat{H}(\mathbf{k})&=\hat{H}^{t}( \mathbf{k})+\hat{H}^{b}(\mathbf{k})+\hat{H}^{\perp}(\mathbf{k}),\\ \hat{H}^{l}(\mathbf{k})&=\sum_{\mathbf{g}}^{\alpha, \beta}\hat{\psi}_{\mathbf{k}+\mathbf{g},\alpha}^{l\dagger}h^{\alpha\beta}( \mathbf{k}+\mathbf{g})\hat{\psi}_{\mathbf{k}+\mathbf{g},\beta}^{l},\\ \hat{H}^{\perp}(\mathbf{k})&=\sum_{\mathbf{g},\delta \mathbf{b}}^{\alpha,\beta}\hat{\psi}_{\mathbf{k}+\mathbf{g}+\delta\mathbf{b}, \alpha}^{t\dagger}T^{\alpha\beta}(\delta\mathbf{b})\hat{\psi}_{\mathbf{k}+ \mathbf{g},\beta}^{b}+h.c.\,\end{split} \tag{13}\] where \(H^{l}(\mathbf{k})\) represents the monolayer Hamiltonian or the on-site terms in the momentum space lattice, while \(H_{\perp}(\mathbf{k})\) represents the hopping term between momentum space lattice sites with the hopping matrix \(T(\delta\mathbf{b})\). As the nodal points of the top and bottom layers are located at the \(\Gamma\) point, they have the same sites in the momentum space, denoted by the red and green dots as shown in Fig. 2 (c,d). Since the unit cell is enlarged to the Moire super unit cell and \(\mathbf{k}\) shrinks to the Moire BZ, we use \(\mathbf{r}\) to describe the real space position vector in a super unit cell. The annihilation operator in this basis can be written as \[\hat{\psi}_{\mathbf{k},\alpha}^{l}(\mathbf{r})=\frac{1}{\sqrt{N}}\sum_{ \mathbf{g}}e^{i\mathbf{g}\cdot\mathbf{r}}\hat{\psi}_{\mathbf{k}+\mathbf{g}, \alpha}^{l}, \tag{14}\] where \(N\) is the total number of momentum sites. 
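As a concrete illustration of the geometry just described — our own sketch, not the authors' code — the short Python snippet below builds the Moire reciprocal vectors \(\mathbf{g}_{i}=\delta\mathbf{b}_{i}\) for a small twist, checks the relation \(|\mathbf{g}_{i}|=2|\mathbf{b}_{i}|\sin(\theta/2)\) of Eq. 11, and enumerates a truncated momentum lattice of sites \(\mathbf{k}+\mathbf{g}\) as used in Eqs. 12-14. The square lattice, the twist angle, and the cutoff are arbitrary illustrative choices.

```python
# Minimal sketch of the Moire momentum lattice implied by Eqs. (8)-(12):
# the Moire reciprocal vectors are the differences delta_b = b^b - b^t of the
# rotated monolayer primitive vectors, with |g_i| = 2|b_i| sin(theta/2).
import numpy as np

def rot(v, angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

theta = np.deg2rad(1.5)                 # illustrative small twist angle
b1 = np.array([2 * np.pi, 0.0])         # square lattice, lattice constant 1
b2 = np.array([0.0, 2 * np.pi])

# bottom layer rotated by +theta/2, top layer by -theta/2
g1 = rot(b1, theta / 2) - rot(b1, -theta / 2)
g2 = rot(b2, theta / 2) - rot(b2, -theta / 2)
assert np.isclose(np.linalg.norm(g1), 2 * np.linalg.norm(b1) * np.sin(theta / 2))

# momentum-lattice sites k + g with g = m*g1 + n*g2, kept within a radial cutoff
cutoff = 6 * np.linalg.norm(g1)
sites = [m * g1 + n * g2
         for m in range(-8, 9) for n in range(-8, 9)
         if np.linalg.norm(m * g1 + n * g2) <= cutoff]
print(len(sites), "plane-wave sites in the truncated Moire momentum lattice")
```

In practice, the cutoff only needs to be large enough that the low-energy Moire bands are converged; the hexagonal case is obtained by replacing \(\mathbf{b}_{1},\mathbf{b}_{2}\) with the hexagonal primitive reciprocal vectors.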
In this regard, the Hamiltonian can be expressed in terms of \(\mathbf{r}\) \[\begin{split}\hat{H}_{\mathbf{k}}^{l}(\mathbf{r})&= \sum_{\alpha,\beta}\hat{\psi}_{\mathbf{k},\alpha}^{l\dagger}(\mathbf{r})h^{ \alpha\beta}\left(\mathbf{k}-i\nabla_{\mathbf{r}}\right)\hat{\psi}_{\mathbf{k },\beta}^{l}(\mathbf{r}),\\ \hat{H}_{\mathbf{k}}^{\perp}(\mathbf{r})&=\sum_{ \alpha,\beta}\hat{\psi}_{\mathbf{k},\alpha}^{\dagger\dagger}(\mathbf{r})T^{ \alpha\beta}(\mathbf{r})\hat{\psi}_{\mathbf{k},\beta}^{b}(\mathbf{r})+h.c.\,\end{split} \tag{15}\] where the matrix forms of \(h(\mathbf{k}-i\nabla_{\mathbf{r}})\) and \(T(\mathbf{r})\) are given by \[\begin{split}& h(\mathbf{k}-i\nabla_{\mathbf{r}})=\begin{pmatrix}0&(k-i2 \partial)^{n}\\ (\tilde{k}-i2\bar{\partial})^{n}&0\end{pmatrix},\\ & T(\mathbf{r})=\sum_{\delta\mathbf{b}}e^{-i\delta\mathbf{b} \cdot\mathbf{r}}\begin{pmatrix}0&T_{12}(\delta\mathbf{b})\\ T_{21}(\delta\mathbf{b})&0\end{pmatrix}.\end{split} \tag{16}\] To simplify the notations, we define \(\partial\equiv\frac{1}{2}(\partial_{x}-i\partial_{y}),\bar{\partial}\equiv\frac{ 1}{2}\left(\partial_{x}+i\partial_{y}\right)\) and \(k\equiv\mathbf{k}_{x}-i\mathbf{k}_{y},\tilde{k}\equiv\mathbf{k}_{x}+i \mathbf{k}_{y}\). Due to chiral symmetry, the diagonal terms in \(T(\mathbf{r})\) must vanish. Hence, in the real space, the chirally symmetric Hamiltonian \(H_{\mathbf{k}}(\mathbf{r})\) in the basis of reads: \[H_{\mathbf{k}}(\mathbf{r})=\begin{pmatrix}h(\mathbf{k}-i\nabla_{\mathbf{r}})& T(\mathbf{r})\\ T^{\dagger}(\mathbf{r})&h(\mathbf{k}-i\nabla_{\mathbf{r}})\end{pmatrix}. \tag{17}\] Then, by reshuffling the basis to \(\Phi_{\mathbf{k}}(\mathbf{r})=\begin{pmatrix}\phi_{\mathbf{k}}^{t}(\mathbf{r}), \phi_{\mathbf{k}}^{b}(\mathbf{r}),\chi_{\mathbf{k}}^{t}(\mathbf{r}),\chi_{ \mathbf{k}}^{b}(\mathbf{r})\end{pmatrix}^{\top}\), the chirally symmetric Hamiltonian becomes \[\begin{split} H_{\mathbf{k}}(\mathbf{r})&=\begin{pmatrix}0&D_{ \mathbf{k}}^{*}(-\mathbf{r})\\ D_{\mathbf{k}}(\mathbf{r})&0\end{pmatrix},\\ D_{\mathbf{k}}(\mathbf{r})&=\begin{pmatrix}(\tilde{k}-i2\bar{\partial})^{n }&\sum_{\delta\mathbf{b}}e^{-i\delta\mathbf{b}\cdot\mathbf{r}}T_{21}(\delta \mathbf{b})\\ \sum_{\delta\mathbf{b}}e^{i\delta\mathbf{b}\cdot\mathbf{r}}T_{12}^{*}(\delta \mathbf{b})&(\tilde{k}-i2\bar{\partial})^{n}\end{pmatrix}.\end{split} \tag{18}\] We obtain the chirally symmetric general Hamiltonian for TBSs with the node located at the \(\Gamma\) point. Once the interlayer hopping matrix is settled for specific models, the energy spectrum can be calculated. Now we proceed to discuss the interlayer hopping terms. ### Momentum hopping matrix \(T(\delta\mathbf{b})\) Finding the explicit matrix expression of the interlayer momentum hopping \(T(\delta\mathbf{b})\) is the key to compute the low energy physics of the TBSs. The momentum hopping matrix \(T(\delta\mathbf{b})\) can be obtained via Fourier transformation of the interlayer hopping in real space, as expressed by the following equation: \[\tilde{t}_{\alpha\beta}(\mathbf{p})=\int e^{-i\mathbf{p}\cdot\mathbf{r}}t_{ \alpha\beta}(\mathbf{r})d\mathbf{r}. 
\tag{19}\] Here, we assume the real space interlayer hopping \(t_{\alpha\beta}(\mathbf{r})\) depends on the direction and inherits the rotation symmetry from the \(n\)-order nodes, which can be written as \[t_{\alpha\beta}(\mathbf{r})=t(r)(U_{\theta_{\mathbf{r}}}^{\dagger}\sigma U_{ \theta_{\mathbf{r}}})_{\alpha\beta}, \tag{20}\] where \(U_{\theta_{\mathbf{r}}}=e^{i\theta_{\mathbf{r}}J_{z}\sigma_{z}}\) denotes the rotation symmetry involving angular momentum \(J_{z}\equiv n/2\), and (\(r\), \(\theta_{\mathbf{r}}\)) representing the polar coordinates of vector \(\mathbf{r}\). We assume that the strength of the interlayer coupling exhibits an exponential decay \(t(r)=e^{-r/r_{0}}\), with \(r_{0}\) set to 1 for convenience. (We will discuss later that different decay rates may affect the values of the magic angles but cannot make magic angles vanish or appear. That is, the form of the decay does not affect the emergence of MFBs.). In general, \(\sigma=a\sigma_{x}+b\sigma_{y}+c\sigma_{z}+d\sigma_{0}\) represents the hopping matrix in the \(\theta_{\mathbf{r}}=0\) direction. Using the expression of Eq. 20, the hopping matrix in momentum space can be written in the form of \[\tilde{t}(\mathbf{p})=\begin{pmatrix}(d+c)D(\mathbf{p})&(a+bi)e^{-2iJ_{z} \theta_{\mathbf{r}}}C_{J_{z}}(\mathbf{p})\\ (a-bi)e^{2iJ_{z}\theta_{\mathbf{r}}}C_{J_{z}}(\mathbf{p})&(d-c)D(\mathbf{p}) \end{pmatrix}, \tag{21}\] where \[D(\mathbf{p})=\int e^{-ipr\cos\theta}t(r)rdrd\theta, \tag{22}\] \[C_{J_{z}}(\mathbf{p})=\int e^{-ipr\cos\theta}t(r)e^{\pm 2iJ_{z} \theta}rdrd\theta,\] with \(r\)/\(\theta\) ranging from 0/0 to \(+\infty\)/2\(\pi\). Fig. 5(a) illustrates the function \(D(\mathbf{p})\) labeled by black line. Because chiral symmetry is preserved, we neglect this diagonal term. At \(\theta_{\mathbf{r}}=0\), the phase of the off-diagonal term aligns with the \(n\)-ordered node Eq. 1. Since \(C_{J_{z}}(\mathbf{p})\) is real, \(b\) must vanish. Then the interlayer hopping matrix in momentum space becomes \[\tilde{t}(\mathbf{p})= aC_{J_{z}}(\mathbf{p})\begin{pmatrix}0&e^{-2iJ_{z}\theta_{ \mathbf{p}}}\\ e^{2iJ_{z}\theta_{\mathbf{p}}}&0\end{pmatrix}. \tag{23}\] Additionally, Fig. 3(a) also shows \(C_{J_{z}}(\mathbf{p})\) on momentum \(|\mathbf{p}|\) for linear node (\(J_{z}=\frac{1}{2}\)), quadratic node (\(J_{z}=1\)) and cubic node (\(J_{z}=\frac{3}{2}\)) denoted by blue, red and orange lines, respectively. These lines exhibit zero values at \(|\mathbf{p}|=0\), indicating that the on-site momentum hopping matrix \(T(0)\) in Hamiltonian Eq. 13 vanishes due to chiral symmetry. Furthermore, \(C_{J_{z}}(\mathbf{p})\) grow rapidly then decay slowly, with relatively large values around \(|\mathbf{p}|=|\mathbf{b}_{1}|\) and \(\sqrt{2}|\mathbf{b}_{1}|/\sqrt{3}\mathbf{b}_{1}|\) (for the square/hexagonal lattice respectively). Hence, besides nearest-neighbor (NN), we also need to consider the next nearest-neighbor (NNN) momentum hoppings in the square and hexagonal lattices for a good approximation. ### Chirally symmetric Hamiltonian in the square lattice Knowing the strength of the interlayer coupling as a function of momentum, we can derive an explicit expression for the twisted bilayer Hamiltonian. We focus on dominant momentum hoppings to construct the interlayer coupling, while constraining the momenta by \(\mathbf{k}^{t}=\mathbf{k}^{b}+\mathbf{b}^{b}-\mathbf{b}^{t}\). 
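Before the NN/NNN ratios are quoted for specific lattices below, it may help to see how the form factor \(C_{J_{z}}(\mathbf{p})\) of Eq. 22 can be evaluated numerically. The following sketch is our own check, not the authors' code: it assumes \(t(r)=e^{-r}\) and a lattice constant of one (both set to unity in the text), and uses the Jacobi-Anger expansion to reduce the angular integral to a Bessel function of order \(2J_{z}\); the resulting overall phase cancels in any ratio of form factors, such as the \(\lambda^{s}_{1}\approx 0.54\) quoted below for the quadratic node in the square lattice.

```python
# Numerical check of the interlayer form factor C_{J_z}(p) of Eq. (22) for t(r) = exp(-r).
# Via the Jacobi-Anger expansion the angular integral reduces to
#   C_{J_z}(p) = 2*pi*(-i)^(2 J_z) * Int_0^inf J_{2 J_z}(p r) e^{-r} r dr,
# and the phase prefactor drops out of the NN/NNN ratio lambda.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def radial_form_factor(p, jz):
    """Radial part of C_{J_z}(p); the overall 2*pi*(-i)^(2 jz) prefactor is dropped."""
    integrand = lambda r: jv(2 * jz, p * r) * np.exp(-r) * r
    val, _ = quad(integrand, 0.0, np.inf, limit=400)
    return val

# Square lattice, quadratic node (J_z = 1): NN transfer |b1| = 2*pi,
# NNN transfer |b1 + b2| = sqrt(2) * 2*pi (lattice constant set to 1).
b1 = 2 * np.pi
lam = radial_form_factor(np.sqrt(2) * b1, 1.0) / radial_form_factor(b1, 1.0)
print(f"lambda^s_1 ~ {lam:.3f}")   # close to the 0.54 quoted in the text
```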
In the square lattice, the four primitive reciprocal lattice vectors \(\mathbf{b}^{l}=\pm\mathbf{b}^{l}_{1}\), \(\pm\mathbf{b}^{l}_{2}\) form four NN momentum hoppings, while the four combinations of the primitive reciprocal lattice vectors \(\mathbf{b}^{l}=\mathbf{b}^{l}_{1}\pm\mathbf{b}^{l}_{2}\), \(-\mathbf{b}^{l}_{1}\pm\mathbf{b}^{l}_{2}\) form four NNN momentum hoppings. Due to the small twist angle, we drop the layer label \(l\) and denote these two sets of four vectors by \(\mathbf{p}_{i}\) and \(\tilde{\mathbf{p}}_{j}\), respectively. Furthermore, the corresponding momentum differences \(\delta\mathbf{b}\) between the two layers are denoted as \(\mathbf{q}_{i}\) and \(\tilde{\mathbf{q}}_{j}\), as shown in Fig. 2(c) and Fig. 4(a). Figure 3: (a) The functions \(D(\mathbf{p})\) and \(C_{J_{z}}(\mathbf{p})\) on \(|\mathbf{p}|\) for the linear (\(J_{z}=\frac{1}{2}\)), quadratic (\(J_{z}=1\)) and cubic node (\(J_{z}=\frac{3}{2}\)), describing the strength of the interlayer coupling. (b) In the presence of the NN momentum hopping (dashed arrows), two decoupled momentum lattices in the square lattice. (c) Momentum lattice connected by NN hopping in the hexagonal lattice. Since the angles are \(\theta_{\mathbf{p}_{i}}=(i-1)\frac{\pi}{2}\) and \(\theta_{\tilde{\mathbf{p}}_{j}}=\frac{\pi}{4}+(i-1)\frac{\pi}{2}\) for \(\mathbf{q}_{i}\) and \(\tilde{\mathbf{q}}_{j}\), the explicit expression of the interlayer coupling \(\tilde{t}(\mathbf{p})\) follows from Eq. 23 with \(\mathbf{p}=\mathbf{p}_{i},\;\tilde{\mathbf{p}}_{j}\). Since we take both the lattice constant and the decay length of the interlayer coupling to be unity, the ratio of the NNN to the NN hopping, \(\lambda^{s}_{J_{z}}\equiv C_{J_{z}}(|\mathbf{b}^{t}_{1}+\mathbf{b}^{t}_{2}|)/C_{J_{z}}(|\mathbf{b}^{t}_{1}|)\), is fixed. In particular, \(\lambda^{s}_{1}\approx 0.54\) for the interlayer coupling inheriting the direction dependence of the quadratic node. Again, changing this ratio only shifts the magic values and, as shown later, does not affect the emergence of the flat bands. Based on the real-space Hamiltonian Eq. 18, the chirally symmetric Hamiltonian \(H^{\eta}_{\mathbf{k},J_{z}}(\mathbf{r})\) is given by \[\begin{split} H^{\eta}_{\mathbf{k},J_{z}}(\mathbf{r})& =\begin{pmatrix}0&D^{\eta*}_{\mathbf{k},J_{z}}(-\mathbf{r})\\ D^{\eta}_{\mathbf{k},J_{z}}(\mathbf{r})&0\end{pmatrix},\\ D^{\eta}_{\mathbf{k},J_{z}}(\mathbf{r})&=\begin{pmatrix}(\bar{k}-i2\bar{\partial})^{n}&\alpha^{\eta}_{J_{z}}U^{\eta}_{J_{z}}(\mathbf{r})\\ \alpha^{\eta}_{J_{z}}U^{\eta}_{J_{z}}(-\mathbf{r})&(\bar{k}-i2\bar{\partial})^{n}\end{pmatrix},\end{split} \tag{24}\] where we define the normalized coupling strength \(\alpha^{\eta}_{J_{z}}\equiv\frac{aC(|\mathbf{b}^{t}_{1}|)}{v_{0}k^{2}_{\theta}}\) and \(k_{\theta}=2|\mathbf{b}_{1}|\sin\frac{\theta}{2}\). Only the NN and NNN hoppings are kept in the interlayer coupling, \[U^{\eta}_{J_{z}}(\mathbf{r})=2U^{\eta\text{NN}}_{J_{z}}(\mathbf{r})+2U^{\eta\text{NNN}}_{J_{z}}(\mathbf{r}). \tag{25}\] We let \(\eta=s\) represent the square lattice; later \(\eta=h\) is used for the hexagonal lattice.
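For orientation, the dimensionless coupling \(\alpha^{\eta}_{J_{z}}\) defined above can be translated back into a physical twist angle once the material parameters \(a\), \(C(|\mathbf{b}_{1}|)\) and \(v_{0}\) are known. The helper below is our own illustration, not part of the paper: it inverts \(\alpha=aC(|\mathbf{b}_{1}|)/(v_{0}k_{\theta}^{n})\) together with \(k_{\theta}=2|\mathbf{b}_{1}|\sin(\theta/2)\). The \(k_{\theta}^{n}\) scaling for general \(n\) is our assumption (the text writes only the quadratic case \(n=2\)), and all numerical inputs are placeholders.

```python
# Hedged helper: physical twist angle corresponding to a given magic coupling alpha,
# assuming the normalization alpha = a*C(|b1|)/(v0 * k_theta**n) with
# k_theta = 2|b1| sin(theta/2).  Placeholder parameter values only.
import numpy as np

def magic_twist_angle(alpha_magic, a_times_C, v0, b1_norm, n=2):
    """Solve alpha = a*C/(v0 * k_theta**n) for theta; n=2 is the quadratic node.
    The k_theta**n form for general n is our assumption, not spelled out in the text."""
    k_theta = (a_times_C / (v0 * alpha_magic)) ** (1.0 / n)
    return 2.0 * np.arcsin(k_theta / (2.0 * b1_norm))

# illustrative placeholder inputs (a*C, v0 and |b1| are material-dependent)
theta = magic_twist_angle(alpha_magic=0.264, a_times_C=0.05, v0=1.0, b1_norm=2 * np.pi)
print(np.rad2deg(theta), "degrees")
```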
For the quadratic node (\(J_{z}=1\)), the explicit forms of the momentum hoppings are written as \[\begin{split}& U^{\text{sNN}}_{1}(\mathbf{r})=\cos(\mathbf{q}_{1}\cdot\mathbf{r})-\cos(\mathbf{q}_{2}\cdot\mathbf{r}),\\ & U^{\text{sNNN}}_{1}(\mathbf{r})=0.54i(\cos(\tilde{\mathbf{q}}_{1}\cdot\mathbf{r})-\cos(\tilde{\mathbf{q}}_{2}\cdot\mathbf{r})).\end{split} \tag{26}\] If the NNN hopping potential \(U^{\text{sNNN}}_{1}(\mathbf{r})\) is ignored and only \(U^{\text{sNN}}_{1}(\mathbf{r})\) is kept in the Hamiltonian \(H^{s}(\mathbf{r})\) shown in Fig. 2(c), the momentum lattice describing this system can be divided into two decoupled momentum-space lattices. One is linked by the blue dotted lines with arrows, while the other is connected by the pink dotted lines with arrows, as shown in Fig. 3(b). Hence, the twisted bilayer system always exhibits at least a 2-fold degeneracy. When the NNN hoppings are included, this 2-fold degeneracy is lifted. ### Chirally symmetric Hamiltonian in the hexagonal lattice Following the same steps as in the square lattice, we can obtain an explicit form of the Hamiltonian in the hexagonal lattice. The six primitive reciprocal lattice vectors \(\pm\mathbf{b}^{l}_{1}\), \(\pm\mathbf{b}^{l}_{2}\), \(\pm(\mathbf{b}^{l}_{1}+\mathbf{b}^{l}_{2})\) give rise to six nearest-neighbor (NN) momentum hoppings. Meanwhile, the six combinations of these primitive vectors, \(\pm(\mathbf{b}^{l}_{1}-\mathbf{b}^{l}_{2})\), \(\pm(2\mathbf{b}^{l}_{1}+\mathbf{b}^{l}_{2})\), \(\pm(\mathbf{b}^{l}_{1}+2\mathbf{b}^{l}_{2})\), form six next-nearest-neighbor (NNN) momentum hoppings. Likewise, we denote these two sets of six vectors as \(\mathbf{p}_{i}\) and \(\tilde{\mathbf{p}}_{j}\), respectively. Moreover, the corresponding momentum differences \(\delta\mathbf{b}\) between the two layers are denoted as \(\mathbf{q}_{i}\) and \(\tilde{\mathbf{q}}_{j}\), as illustrated in Fig. 2(d) and Fig. 4(b). For \(\mathbf{q}_{i}\) and \(\tilde{\mathbf{q}}_{j}\), the angular variables \(\theta_{\mathbf{p}_{i}}=(i-1)\frac{\pi}{3}\) and \(\theta_{\tilde{\mathbf{p}}_{j}}=\frac{\pi}{6}+(i-1)\frac{\pi}{3}\) allow us to express the interlayer coupling \(\tilde{t}(\mathbf{p})\) explicitly, as given in Eq. 23 for \(\mathbf{p}=\mathbf{p}_{i},\tilde{\mathbf{p}}_{j}\). Fixing both the lattice constant and the decay length of the interlayer coupling to unity, we can fix the ratio \(\lambda^{h}_{J_{z}}\equiv C_{J_{z}}(|\mathbf{b}^{t}_{1}-\mathbf{b}^{t}_{2}|)/C_{J_{z}}(|\mathbf{b}^{t}_{1}|)\). In particular, \(\lambda^{h}_{1}=C_{1}(|\mathbf{b}^{t}_{1}-\mathbf{b}^{t}_{2}|)/C_{1}(|\mathbf{b}^{t}_{1}|)\approx 0.355\) for the quadratic node and \(\lambda^{h}_{\frac{3}{2}}=C_{\frac{3}{2}}(|\mathbf{b}^{t}_{1}-\mathbf{b}^{t}_{2}|)/C_{\frac{3}{2}}(|\mathbf{b}^{t}_{1}|)\approx 0.37\) for the cubic node. The Hamiltonian \(H^{h}_{\mathbf{k},J_{z}}(\mathbf{r})\) for quadratic and cubic nodes in the hexagonal lattice has the same form as \(H^{\eta}_{\mathbf{k},J_{z}}(\mathbf{r})\) in Eq. 24, where \(\eta=h\) represents the hexagonal lattice.
Particularly, for the quadratic node, the two dominant momentum hopping can be written as \[\begin{split} U^{h\text{NN}}_{1}(\mathbf{r})=&(\frac{ 1}{2}+i\frac{\sqrt{3}}{2})\cos(\mathbf{q}_{1}\cdot\mathbf{r})-\cos(\mathbf{q} _{2}\cdot\mathbf{r})\\ &+(\frac{1}{2}-i\frac{\sqrt{3}}{2})\cos(\mathbf{q}_{3}\cdot \mathbf{r}),\\ U^{h\text{NNN}}_{1}(\mathbf{r})=& 0.37(\cos(\tilde{\mathbf{q}}_{1}\cdot \mathbf{r})-(\frac{1}{2}-i\frac{\sqrt{3}}{2})\cos(\tilde{\mathbf{q}}_{2}\cdot \mathbf{r})\\ &-(\frac{1}{2}+i\frac{\sqrt{3}}{2})\cos(\tilde{\mathbf{q}}_{3} \cdot\mathbf{r})).\end{split} \tag{27}\] Likewise, for the cubic node, we have \[\begin{split}& U^{h\text{NN}}_{\frac{1}{2}}(\mathbf{r})=-i(\sin( \mathbf{q}_{1}\cdot\mathbf{r})-\sin(\mathbf{q}_{2}\cdot\mathbf{r})+\sin( \mathbf{q}_{3}\cdot\mathbf{r})),\\ & U^{h\text{NNN}}_{\frac{1}{2}}(\mathbf{r})=-0.355(\sin(\tilde{ \mathbf{q}}_{1}\cdot\mathbf{r})-\sin(\tilde{\mathbf{q}}_{2}\cdot\mathbf{r})+ \sin(\tilde{\mathbf{q}}_{3}\cdot\mathbf{r})).\end{split} \tag{28}\] Unlike the twisted bilayer system of the square lattice, the hexagonal lattice does not exhibit 2-fold degeneracy as the NNN hoppings vanish. As the twisted bilayer Hamiltonian is established, we can introduce an approach to identify magic angles. ### The spectrum of magic angles We investigate the conditions for the existence of absolutely zero-energy MFBs in the above three chirally symmetric Hamiltonians. For the convenience, we express these Hamiltonians in a unified and economic form \[H_{\mathbf{k}}(\mathbf{r})=\begin{pmatrix}0&D^{*}_{\mathbf{k}}(- \mathbf{r})\\ D_{\mathbf{k}}(\mathbf{r})&0\end{pmatrix}, \tag{29}\] where \[D_{\mathbf{k}}(\mathbf{r})=\begin{array}{cc}\left(\bar{k}-i2\bar{\partial} \right)^{n}&\alpha U(\mathbf{r})\\ \alpha U(-\mathbf{r})&(\bar{k}-i2\bar{\partial})^{n}\end{array}. \tag{30}\] Our goal is to look for the magic values of \(\alpha\) leading to the MFBs, since \(\alpha\) is a tunable parameter controlled by the twisted angle. We rewrite the off-diagonal term \[D_{\mathbf{k}}(\mathbf{r})=(\bar{k}-i2\bar{\partial})^{n}(I-\alpha T_{\mathbf{k}}),\] where we define \[T_{\mathbf{k}}\equiv-(\bar{k}-i2\bar{\partial})^{-n}\begin{pmatrix}0&U(\mathbf{r}) \\ U(-\mathbf{r})&0\end{pmatrix}, \tag{31}\] which is known as the Birman-Schwinger operator [32]. The emergence of absolutely zero-energy MFBs in these systems is characterized by the determinant of its Hamiltonian \(H(\mathbf{r})\) being zero for any \(\mathbf{k}\), i.e. \(\det(D_{\mathbf{k}}(\mathbf{r}))=0\). For non-zero \(\mathbf{k}\), the matrix \(\left[I-\alpha T_{\mathbf{k}}\right]\) must have at least one zero eigenvalue due to \(\det\!\left(\bar{k}-i2\bar{\partial}\right)^{n}\neq 0\), and two low-energy states at \(\mathbf{k}=0\) are fixed at zero energy. According to Ref. [32; 33], for TBG the eigenvalues of \(T_{\mathbf{k}}\) are independent of \(\mathbf{k}\). Presumably, we extend this \(\mathbf{k}\)-independence to our generic bilayer systems, and then define a spectrum \(\mathcal{A}=1/\operatorname{Spec}\left(T_{\mathbf{k}}\right)\) and a corresponding two-component eigenstate \(\psi^{0}_{\mathbf{k}}(\mathbf{r})\). Consequently, by tuning the parameter \(\alpha\) to one of the real eigenvalues in \(\mathcal{A}\), we obtain \(D_{\mathbf{k}}(\mathbf{r})\psi^{0}_{\mathbf{k}}(\mathbf{r})=0\) for any \(\mathbf{k}\), which leads to the emergence of zero-energy MFBs. In this regard, we can compute complex spectrum \(\mathcal{A}\) to determine the magic values of \(\alpha\). 
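To make this recipe concrete, the following self-contained sketch (our own truncation, not the authors' implementation) assembles the Birman-Schwinger operator \(T_{\mathbf{k}}\) of Eq. 31 in a plane-wave basis for the simplest setting already introduced: the square lattice with a quadratic node and NN interlayer hopping only, i.e. \(U(\mathbf{r})=2[\cos(\mathbf{q}_{1}\cdot\mathbf{r})-\cos(\mathbf{q}_{2}\cdot\mathbf{r})]\) from Eq. 26, in units where \(|\mathbf{g}_{1}|=|\mathbf{g}_{2}|=1\). The inverses of its (approximately) real eigenvalues are candidate magic couplings \(\alpha\); their \(\mathbf{k}\)-independence can be checked by rerunning at a second momentum. The cutoff \(N\), the reality tolerance, and overall conventions (e.g. the factor of 2 in Eq. 25) are our choices and may rescale the values relative to those shown in the figures.

```python
# Sketch of the magic-value search via the Birman-Schwinger operator of Eq. (31):
# square lattice, quadratic node (n = 2), NN interlayer hopping only,
# U(r) = 2[cos(g1.r) - cos(g2.r)], lengths measured in 1/k_theta so |g1| = |g2| = 1.
import numpy as np

n_order = 2                      # quadratic node
N = 8                            # plane-wave cutoff: g = m*g1 + n*g2 with |m|, |n| <= N
g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

gs = [(m, n) for m in range(-N, N + 1) for n in range(-N, N + 1)]
index = {mn: i for i, mn in enumerate(gs)}
Ng = len(gs)

# interlayer potential in the plane-wave basis: amplitude +1 along +-g1, -1 along +-g2
U = np.zeros((Ng, Ng))
for (m, n), i in index.items():
    for dm, dn, amp in [(1, 0, 1.0), (-1, 0, 1.0), (0, 1, -1.0), (0, -1, -1.0)]:
        j = index.get((m + dm, n + dn))
        if j is not None:
            U[j, i] = amp

def birman_schwinger(k):
    """T_k = -(kbar - 2i dbar)^(-n) [[0, U], [U, 0]] on the truncated basis (U is even)."""
    kg = np.array([k + m * g1 + n * g2 for (m, n) in gs])        # k + g for every site
    Kinv = 1.0 / (kg[:, 0] + 1j * kg[:, 1]) ** n_order           # (kbar + gbar)^(-n)
    block = -(Kinv[:, None] * U)                                 # layer-off-diagonal block
    T = np.zeros((2 * Ng, 2 * Ng), dtype=complex)
    T[:Ng, Ng:] = block
    T[Ng:, :Ng] = block
    return T

k_M = 0.5 * (g1 + g2)                       # Moire M point, so k + g never vanishes
evals = np.linalg.eigvals(birman_schwinger(k_M))
real_evals = evals[np.abs(evals.imag) < 1e-6]        # loose reality filter; adjust as needed
real_evals = real_evals[np.abs(real_evals.real) > 1e-8]
alphas = np.sort(1.0 / np.abs(real_evals.real))      # eigenvalues come in +- pairs
print("candidate magic couplings alpha:", alphas[:6])
```

The smallest values obtained this way should be comparable to the first magic value of the NN-only square-lattice model reported below, up to such convention-dependent factors.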
Although by introducing alternative magnetic field \(\alpha\) can gain a magnetic phase to become a complex number [28], we focus on real \(\alpha\) here. Furthermore, we demonstrate that the degeneracy of MFB is twice that of the corresponding real eigenvalue for spectrum \(\mathcal{A}\). To prove this, we investigate the eigenfunction of Birman-Schwinger operator \(T_{\mathbf{k}}\) with a real eigenvalue \(\eta\) independent of \(\mathbf{k}\), which reads \[T_{\mathbf{k}}\phi^{m}_{\mathbf{k}}(\mathbf{r})=\eta\phi^{m}_{\mathbf{k}}( \mathbf{r}), \tag{32}\] where \(\phi^{m}_{\mathbf{k}}(\mathbf{r})\) are corresponding eigenvectors with m-fold degeneracy. Then, by using Eq. 31 and \(\alpha=\frac{1}{\eta}\), we derive the eigenfunction of \(D_{\mathbf{k}}(\mathbf{r})\) to be \[D_{\mathbf{k}}(\mathbf{r})\phi^{m}_{\mathbf{k}}(\mathbf{r})=(\bar{k}-i2\bar{ \partial})^{n}(I-\frac{1}{\eta}T_{\mathbf{k}})\phi^{m}_{\mathbf{k}}(\mathbf{ r})=0, \tag{33}\] which imply that \(\phi^{m}_{\mathbf{k}}(\mathbf{r})\) are zero-energy eigenvectors of \(D_{\mathbf{k}}(\mathbf{r})\) with \(m\)-fold degeneracy. Similarly, the zero energy eigenfunction of \(D^{*}_{\mathbf{k}}(-\mathbf{r})\) with \(m\)-fold degeneracy is given by \(D^{*}_{\mathbf{k}}(-\mathbf{r})\phi^{*m}_{\mathbf{k}}(-\mathbf{r})=0\). By exploiting the relation between \(D_{\mathbf{k}}(\mathbf{r})\) and Hamiltonian \(H_{\mathbf{k}}(\mathbf{r})\) in Eq. 30, we obtain the zero energy eigenstates of Hamiltonian \(H_{\mathbf{k}}(\mathbf{r})\) with \(2m\)-fold degeneracy in the form of \[\Psi^{m}_{\mathbf{k},1}(\mathbf{r})=\begin{pmatrix}\phi^{m}_{\mathbf{k}}( \mathbf{r})\\ 0\end{pmatrix},\ \ \Psi^{m}_{\mathbf{k},2}(\mathbf{r})=\begin{pmatrix}0\\ \phi^{*m}_{\mathbf{k}}(-\mathbf{r})\end{pmatrix}. \tag{34}\] Consequently, based on Eq. 32 and Eq. 34, we conclude that the degeneracy of the MFB is twice that of the corresponding real eigenvalue for spectrum \(\mathcal{A}\). ## IV Search for flatness To pinpoint the magic values of \(\alpha\), we initially calculate \(\mathcal{A}=\frac{1}{\operatorname{Spec}\left(T_{\mathbf{k}}\right)}\) for a specified momentum in the Moire BZ using Eq. 31. The real spectrum of \(\mathcal{A}\) should correspond to \(\alpha\) values that yield MFBs, assuming that the spectrum \(\mathcal{A}\) is invariant with respect to \(\mathbf{k}\). To validate this assumption and scrutinize the band flatness, we use the real eigenvalues of \(\mathcal{A}\) as \(\alpha\) in the twisted bilayer Hamiltonian described by Eq. 18. This methodology enables us to plot the energy spectrum as a function of momentum \(\mathbf{k}\) within the Moire BZ. Employing this approach, we demonstrate that these real eigenvalues lead to MFBs at zero energy in twisted bilayers for both square and hexagonal lattices. Later, we will offer a rigorous proof that substantiates the complete flatness of these bands across the entire Moire BZ, achieved through the extension of holomorphic functions. ### Quadratic nodes in the square lattice We start with a quadratic node at the \(\Gamma\) point of a square lattice. For simplification, we initially neglect the second nearest momentum hopping (\(U^{\mathrm{sNNN}}_{1}(\mathbf{r})=0\)). This allows the Hamiltonian \(H^{s}(\mathbf{r})\) to be cleanly divided into two identical sub-Hamiltonians, as depicted in Fig. 3(b). As a result, the system inherently possesses a two-fold degeneracy. 
This characteristic leads the eigenvalues \(\alpha^{s}_{1,i}\) of the spectrum \(\mathcal{A}\) to ex Figure 5: (color online) Spectra \(\mathcal{A}\) and band structures of the Hamiltonian \(H^{s}(\mathbf{r})\) in the square lattice. (a) and (b) The eigenvalues \(\alpha^{s}_{1}\) and \(\alpha^{\prime s}_{1}\) with and without the NNN hopping term. The band structures without the NNN hopping term at (c) \(\alpha^{s}_{1}=0.2\); (d) First magic value \(\alpha^{s}_{1,1}=0.264\) marked by black arrow in (a). The band structures with the NNN hopping term at (c) \(\alpha^{\prime s}_{1}=0.2\); (d) First magic value \(\alpha^{\prime s}_{1,1}=0.197\) marked by black arrow in (b). hibit a two-fold degeneracy, represented by blue circles in Fig. 5(a). Since the eigenvalues continuously appear on the real axis, the MFBs emerge at extensive magic values of \(\alpha_{1,i}^{s}\). By Eq. 34, the MFBs have four-fold degeneracy for each magic values, even when presented as complex numbers. Fig. 5(c) shows near the zero energy two two-fold degenerate dispersive bands, as indicated by band No.1-4 in the black dotted box; remarkably, these four bands evolve four-fold degenerate (quadruple) flat bands (magenta line) in the energy gap at the first magic value (\(\alpha_{1,1}^{s}=0.264\)) as shown in Fig. 5(d). Non-trivial topology emerges in the degenerate flat bands. As discussed in supplementary materials Sec.III, within the quadruple flat bands, the two occupied bands No.(1\(\sim\)2) have a total Chern number of \(+2\) while the two unoccupied bands No.(3\(\sim\)4) possess an opposite Chern number in total. Now we recover the second nearest momentum hopping, which is determined by the ratio between the NNN and NN hoping strengths (\(\lambda_{1}^{s}=0.54\)); the two identical sub-Hamiltonians merges to one. Each two-fold degenerate state in spectrum \(\mathcal{A}\) splits up into two non-degenerate states \(\alpha_{1,i}^{\prime s}\) indicated red circles in Fig. 5(b) and the real eigenvalues still stay on the real axis. Meanwhile no additional eigenvalues appear on the real axis. The corresponding magic values lead to two-fold degenerate (double) MFBs. Similarly, the splitting from two-fold degenerate states to two non-degenerate states also occurs at energy bands. For example, the energy band degeneracy splitting is demonstrated by the transition from Fig. 5(c) to (e) with non-zero \(U_{1}^{\text{NNN}}(\mathbf{r})\). In Fig. 5(e), two (No.1 and 2) of the four non-degenerate dispersive bands near Fermi level evolve to double MFBs at the first magic value (\(\alpha_{1,1}^{\prime s}=0.192\)), as shown by the blue line in Fig. 5 (f). Furthermore, the MFBs, which connects to other energy bands, are gapless. ### Quadratic and cubic nodes in the hexagonal lattice In this section, we examine the spectrum \(\mathcal{A}\) and the emergence of highly degenerate MFBs for the chirally symmetric Hamiltonians \(H_{1}^{h}(\mathbf{r})\) and \(H_{\frac{1}{2}}^{h}(\mathbf{r})\) in the hexagonal lattice. For the twisted bilayer of the quadratic nodes (\(J_{z}=1\)), we present eigenvalues \(\alpha_{1,i}^{h}\) in spectrum \(\mathcal{A}\) without the NNN hopping as shown in Fig. 6(a). Non-degenerate states (red circles) and two-fold degenerate states (blue circles) scatter particularly on the real axis; they respectively indicates two-fold and four-fold degenerate MFBs. After turning on the NNN hopping (\(\lambda_{1}^{h}=0.355\)), we plot eigenvalues \(\alpha_{1,i}^{\prime h}\) in spectrum \(\mathcal{A}\) in Fig. 6(b). 
Three-fold degenerate states (cyan circles) appear on the real axis and represents six-fold degenerate MFBs. Introducing the NNN hopping leads to the movement of spectrum \(\mathcal{A}\), especially on the real axis. First, as \(\lambda_{1}^{h}\) increases from zero, the eigenvalue of the triple degenerate states move from a large real number to a small number as demonstrated in Fig. 6(c,d). Second, with further increase to \(\lambda_{1}^{h}=0.18\), the eigenvalues of the two-fold degenerate states collide on the real axis and move away from the real axis as the complex conjugation partners as shown in Fig. 6(e,f). We pick up the interlayer strength (\(\alpha_{1}^{h}\)) near and at eigenvalues in spectrum \(\mathcal{A}\) to draw energy spectra in the Moire BZ. For a non-magic value of \(\alpha_{1}^{h}=0.15\), Fig. 7(a) showcases four dispersive bands proximate to the Fermi level. Notably, two of these bands (No.1 and 2) flatten completely at the first magic value, \(\alpha_{1,1}^{h}=0.156\), as indicated by the blue line in Fig. 7(b). Upon increasing \(\alpha_{1}^{h}\) to \(0.6\), Fig. 7(c) depicts six dispersive bands near the Fermi level, with four of these bands (No.1-4) becoming absolutely flat at the second magic value, \(\alpha_{1,2}^{h}=0.719\), highlighted by the magenta line in Fig. 7(d). When \(\alpha_{1}^{\prime h}=1\), Fig. 7(e) reveals eight dispersive bands close to the Fermi level. At the third magic value, \(\alpha_{1,3}^{\prime h}=1.162\), six of these bands (No.1-6) are entirely flat, as demonstrated in Fig. 7(f). In all three scenarios, a gapless feature is evident, as the dispersive bands connect the zero-energy flat bands at \(\Gamma\). Considering cubic nodes (\(J_{z}=3/2\)) in the hexagonal lattice, we respectively plot eigenvalues \(\alpha_{\frac{1}{2},i}^{h}\) and \(\alpha_{\frac{1}{2},i}^{\prime h}\) in spectrum \(\mathcal{A}\) with and without the NNN hoping term. For the NNN hoping term, we choose the ratio between the NNN and the NN hopping to be \(\lambda_{\frac{1}{2}}^{h}=0.37\). In both cases, the eigenvalues corresponding to only double (blue circles) and quadruple (magenta circles) degenerate states can be located on the real axis; therefore, the MFBs can emerge at magic values (\(\alpha_{\frac{1}{2},i}^{h}\), \(\alpha_{\frac{1}{2},i}^{\prime h}\)) with four-fold and eight-fold degeneracies. That is, the number of the MFB degeneracy is always multiple of 4. The reason is that the entire twisted bilayer system preserves an effective space-time symmetry and the operator obey \((PT)^{2}=-1\) (see the details in supplementary materials Sec.IV); therefore, the Kramers' theorem leads to the two-fold degenerate spectrum in the Moire BZ in general. On the other hand, the presence of the NNN hopping just rearrange and move from the eigenvalues \(\alpha_{\frac{1}{2},i}^{h}\) to \(\alpha_{\frac{1}{2},i}^{\prime h}\) on the real axis. In other words, considering the NN hopping can determine the existence of the MFBs. Let us demonstrate two examples for the evolutions of the highly degenerate MFBs. At the non-magic value \(\alpha_{\frac{3}{2}}^{h}=0.1\), Fig. 8(c) shows two dispersive bands with two-fold degeneracy near the Fermi level (black dotted box). Interestingly, these four bands (No.1-4) become absolutely flat at the first magic value \(\alpha_{\frac{3}{2},1}^{h}=0.127\), as shown in Fig. 8(d). By further increasing \(\alpha_{\frac{3}{2}}^{h}=0.7\), Fig. 8(e) displays four dispersive bands with two-fold degeneracy near the Fermi level. 
At the second magic value \(\alpha_{\frac{3}{2},2}^{h}=0.846\), all eight bands (No.1-8) become absolutely flat, as shown in Fig. 8(f). In both instances, the MFBs reside within energy gaps. Similarly, when accounting for NNN hopping, the gaps are still open with the MFBs inside for the first two magic values. The flat bands at the first two magic values inherent the non-trivial topology from the cubic node (\(n=3\)). As discussed in supplementary materials Sec. III, within the four-fold and eight-fold degenerate flat bands, the occupied bands No.(1\(\sim\)2) and No.(1\(\sim\)4) have the same total Chern numbers of +3, while the unoccupied bands No.(3\(\sim\)4) and No.(5\(\sim\)8) possess the opposite total Chern numbers. Moreover, a \(\mathbb{Z}_{2}\) topological invariant emerges in class DIII in a 2D system. Due to the odd Chern number, the \(\mathbb{Z}_{2}\) topology is non-trivial. ## V Summary table for band flatness A fundamental insight from the spectrum calculation reveals that considering only NN hopping is sufficient to identify the emergence of the MFBs. It should be noted that for higher-ordered nodes, the momentum hopping decays at a notably slower rate as the momentum distance increases (refer to Eq. 22 and Fig. 3(a)). While extending the analysis to higher-ordered nodes may necessitate the inclusion of longer-range momentum hopping for accurate approximations, it is reasonable to posit that taking into account merely the NN hopping will be enough to determine the existence of the MFBs for different ordered nodes. Moreover, we also integrate the consideration of the NNN hopping to further confirm the MFB criteria. Building upon the calculation approach above, we employ the forms of the Hamiltonian (Eq. 18) and the interlayer hopping (Eq. 23) to catalog the properties of the MFBs for higher ordered nodes at \(\Gamma/\text{M}\) point in square and \(\Gamma/\text{K}/\text{K}^{\prime}\) in the hexagonal lattice. Table 2 shows the MFBs emerge as \(n=2,3\) for the square lattice and as \(n=1,2,3,4,5\) for the hexagonal lattice. Here we consider the first four magic values for each case, which is sufficient to show the characteristics of each model. Within the table, the numerical values signify the numbers of MFB degeneracies, while the accompanying subscripts categorize the MFBs according to whether they are gapped from or connected to other bands, denoted by \(g/c\), respectively. There are several features of the MFBs we would like to point out. First, the square lattice with only the NN hopping can be described by two identical sub-Hamiltonians, leading to two-fold degeneracy for a generic energy band. Hence, the degeneracy number of the MFBs in the Moire BZ is always a multiple of four. After the NNN hopping is introduced, the degeneracy symmetry is broken. The MFB degeneracy is reduced to two-fold but when \(n\) is odd, the degeneracy is still four-fold. Second, As \(n\) is odd, space-time inversion symmetry is preserved with \((PT)^{2}=-1\). According to Kramer's theorem, each energy band is two-fold degenerate so that the number of the MFB degeneracy is always a multiple of four, too. Third, even without symmetries, MFBs with higher degeneracy can also arise at some magic values, such as the second magic value in the hexagonal lattice. The flat bands inherit their topological properties from the order of the topological nodes. Existing literature primarily focuses on the physics of the two-fold degenerate Moire flat bands. 
In this context, each flat band in TBG (\(n=1\)) carries a Chern number of \(\pm 1\)[34], and for the quadratic node (\(n=2\)) located at \(M\), each flat band carries a Chern number of \(\pm 2\)[27]. Notably, one of our findings indicates that the degeneracy can exceed two. Specifically, for \(n\)-ordered nodes at \(\Gamma\) with \(n=2\) in the square lattice and \(n=3\) in the hexagonal lattice, we demonstrate that by appropriately selecting half of the degenerate flat bands, the maximum total Chern number can be \(\pm n\). Since the flat bands inherit their topology from the \(n\)-ordered nodes, Table 2 showcases these topological features in a general context. Additionally, for odd values of \(n\), class DIII in 2D leads to a well-defined \(\mathbb{Z}_{2}\) topological invariant. Due to the presence of odd Chern numbers, the \(\mathbb{Z}_{2}\) topology is non-trivial. ## VI The origin of the MFBs So far we have shown numerically that MFBs appear at the magic values. We now take a step further and rigorously prove that the MFBs are located exactly at zero energy in the entire Moire BZ. Let us go back to TBG with chiral symmetry preserved. The key factor driving the emergence of the MFBs is that, at the magic angles, the zero-energy wave functions at the \(\text{K}/\text{K}^{\prime}\) points exhibit nodes in the Moire unit cell. Using the wave function at K as a base and attaching holomorphic functions to it, one can generate zero-energy eigenfunctions of the MFBs at any momentum in the Moire BZ [29]. In this study, we use the same approach to demonstrate the absolute flatness of the multi-fold degenerate bands at the magic values in TBSs of the square and hexagonal lattices. To simplify the problem, we focus on solving \(D_{\mathbf{k}}(\mathbf{r})\psi_{\mathbf{k}}(\mathbf{r})=0\) for all \(\mathbf{k}\). We numerically obtain the two-component wave functions \(\psi_{\text{M}}(\mathbf{r})\) pinned at zero energy at \(\mathbf{M}\), which serve as a starting point to construct the wave functions of the MFBs at other \(\mathbf{k}\). The conjectural wave function at \(\mathbf{k}\neq\mathbf{M}\) can be expressed as \(\psi_{\mathbf{k}}(\mathbf{r})\equiv f_{\mathbf{k}}(z)\psi_{\text{M}}(\mathbf{r})\) with \(z=x+iy\), since \(D_{\mathbf{k}}(\mathbf{r})\psi_{\mathbf{k}}(\mathbf{r})=f_{\mathbf{k}}(z)[D_{\mathbf{k}}(\mathbf{r})\psi_{\text{M}}(\mathbf{r})]=0\). The holomorphic function \(f_{\mathbf{k}}(z)\) is either a constant or unbounded by Liouville's theorem, signifying that \(f_{\mathbf{k}}(z)\) is meromorphic and thus has poles for \(\mathbf{k}\neq\mathbf{M}\). The eigenfunctions \(\psi_{\mathbf{k}}(\mathbf{r})\) are valid when the poles of \(f_{\mathbf{k}}(z)\) are smoothed out by the nodes of \(\psi_{\mathbf{M}}(\mathbf{r})\) in the Moire unit cell. Here, we present the analytical expression for the wave function of the double MFBs, while further details on the other multi-fold degenerate bands are provided in the supplementary materials Sec.VI. To systematically show the band flatness, we investigate the first magic value in the square lattice and in the hexagonal lattice. Specifically, we choose the quadratic node in the square lattice at \(\alpha_{1,1}^{\prime s}=0.197\) with the NNN hopping and the quadratic node in the hexagonal lattice at \(\alpha_{1,1}^{h}=0.156\), as shown in Fig. 5(f) and Fig. 7(b) respectively. To distinguish the degenerate bands, we make slight shifts in these values to \(\alpha_{s}^{\prime}=0.2\) and \(\alpha_{1}^{h}=0.15\), presenting the corresponding band structures in Fig. 5(e) and Fig. 7(a).
These two MFBs are labeled as No.1 and No.2, and are connected to each other by the chiral symmetry operator. In our subsequent analysis, we focus on the wave function of MFB No.1, while the wave function for MFB No.2 can be obtained by applying the chiral symmetry operator to the wave function of MFB No.1.

Table 2: Degeneracies of the MFBs for \(n\)-ordered nodes at \(\Gamma\)/M in the square lattice and at \(\Gamma\)/K/K\(^{\prime}\) in the hexagonal lattice, with NN and NNN interlayer hopping. The numerical values give the MFB degeneracies at the magic values considered, and the subscripts \(g/c\) indicate whether the MFBs are gapped from or connected to the other bands. The last two rows list the total Chern number and the \(\mathbb{Z}_{2}\) index (\(\times\): not applicable).

| | | | \(n=1\) | \(n=2\) | \(n=3\) | \(n=4\) | \(n=5\) |
|---|---|---|---|---|---|---|---|
| Square lattice | \(\Gamma\) | NN | – | \(4_{g}\) | \(4_{c}\) | – | – |
| | | NNN | – | \(2_{c}\) | \(4_{g}\) | – | – |
| | M | NN | – | \(2_{g}\) | \(2_{c}\) | – | – |
| | | NNN | – | \(2_{g}\) | \(2_{c}\) | – | – |
| Hexagonal lattice | \(\Gamma\) | NN | – | \(2_{c},4_{c}\) | \(4_{g},8_{g}\) | \(2_{c},4_{c}\) | \(4_{g}\) |
| | | NNN | – | \(2_{c},6_{c}\) | \(4_{g},8_{g}\) | \(2_{c},4_{c}\) | \(4_{g}\) |
| | K/K\(^{\prime}\) | NNN | \(2_{g},2_{g}\) | – | – | – | – |
| Chern number | | | \(\pm 1\) | \(\pm 2\) | \(\pm 3\) | \(\pm 4\) | \(\pm 5\) |
| \(\mathbb{Z}_{2}\) number | | | 1 | \(\times\) | 1 | \(\times\) | 1 |

### Origin of double MFBs in the square lattice

Here, we show that at the first magic value \(\alpha_{1,1}^{\prime s}=0.197\) in the square lattice the wave functions of the double MFBs are exactly zero-energy states in the Moire BZ. Our analysis begins by using Fig. 9(c), which illustrates the norm of the wave function \(\psi_{\mathbf{M},1}^{\prime s}(\mathbf{r})\) for MFB No.1. There are two nodal points with a linear real-space dependence, which are situated at \(\mathrm{R}_{\Gamma}\) and \(\mathrm{R}_{\mathrm{M}}\) in Fig. 9(b). Our numerical calculations reveal that the node at \(\mathrm{R}_{\mathrm{M}}\) moves with varying momentum \(\mathbf{k}\) (labeled as '\(\mathrm{N}_{\mathrm{s}}\)' in Fig. 9(d)), while the node at \(\mathrm{R}_{\Gamma}\) remains stationary.
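The node positions quoted here (and below for the hexagonal case) are read off from the numerical zero-energy wave functions. As a minimal illustration of how such a node can be tracked on a real-space grid (this is not the code used in the paper; the grid, the toy wave function and the node position are placeholders), one can simply locate the minimum of the two-component density:

```python
import numpy as np

def node_location(psi1, psi2, ls, ms):
    """Return the fractional coordinates (l, m) along L_I, L_II where the
    two-component density |psi1|^2 + |psi2|^2 is smallest on the grid."""
    density = np.abs(psi1) ** 2 + np.abs(psi2) ** 2
    i, j = np.unravel_index(np.argmin(density), density.shape)
    return ls[i], ms[j], density[i, j]

# Toy two-component wave function with a linear node at (l, m) = (0.3, 0.7);
# a stand-in for psi_{k,1} sampled on the Moire unit cell.
ls = np.linspace(0.0, 1.0, 201)
ms = np.linspace(0.0, 1.0, 201)
L, M = np.meshgrid(ls, ms, indexing="ij")
psi1 = (L - 0.3) + 1j * (M - 0.7)
psi2 = 0.5 * ((L - 0.3) - 1j * (M - 0.7))

print(node_location(psi1, psi2, ls, ms))   # ~ (0.3, 0.7, 0.0)
```

Repeating such a search for a sweep of \(\mathbf{k}\) between \(\mathbf{M}\) and \(\Gamma\) traces out the node trajectory indicated by the arrows in Fig. 9(b).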
To create the movement of the \(\psi^{s}_{\mathbf{k},1}(\mathbf{r})\) node, we introduce a theta function in the denominator of \(f_{\mathbf{k}}(z)\) eliminating the node at \(\mathrm{R}_{\mathrm{M}}\) and another theta function in the numerator creating a new node. Moreover, we adjust the parameter values of the theta functions in order to ensure that the two-component wave function \(\psi^{s}_{\mathbf{k}}(\mathbf{r})\) obeys the Bloch boundary conditions on the Moire lattice vectors \(\mathbf{L}_{\mathrm{I},\mathrm{II}}\), namely \(\psi^{s}_{\mathbf{k}}(\mathbf{r}+\mathbf{L}_{\mathrm{I},\mathrm{II}})=e^{i\mathbf{k}\cdot\mathbf{L}_{\mathrm{I},\mathrm{II}}}\psi^{s}_{\mathbf{k}}(\mathbf{r})\). Finally, the analytical expression for the wave function \(\Psi^{s}_{\mathbf{k},1}(\mathbf{r})\) of MFB No.1 is given by

\[\Psi^{s}_{\mathbf{k},1}(\mathbf{r})=\frac{\vartheta_{a,b}(\nu|\omega)}{\vartheta_{\frac{1}{2},\frac{1}{2}}(\nu|\omega)}\Psi^{s}_{\mathbf{M},1}(\mathbf{r}), \tag{35}\]

with \(\nu=z/L_{\mathrm{I}}\), \(\omega=\frac{L_{\mathrm{II}}}{L_{\mathrm{I}}}\) and \(L_{\mathrm{I},\mathrm{II}}=(\mathbf{L}_{\mathrm{I},\mathrm{II}})_{x}+i(\mathbf{L}_{\mathrm{I},\mathrm{II}})_{y}\). To satisfy the Bloch boundary conditions, the rational characteristics \(a\) and \(b\) must satisfy

\[\begin{split} a=& n_{\mathrm{I}}+\frac{1}{2}-\frac{(\mathbf{k}-\mathbf{M})\cdot\mathbf{L}_{\mathrm{I}}}{2\pi},\\ b=& n_{\mathrm{II}}+\frac{1}{2}+\frac{(\mathbf{k}-\mathbf{M})\cdot\mathbf{L}_{\mathrm{II}}}{2\pi},\end{split} \tag{36}\]

where \(n_{\mathrm{I}}\) and \(n_{\mathrm{II}}\) are arbitrary integers due to the lattice translational symmetry. The definition of the theta function \(\vartheta_{a,b}(\nu|\omega)\) and the detailed derivation of Eq. 35 are provided in the supplementary materials Sec. V. By construction, \(f_{\mathbf{k}}(z)=\vartheta_{a,b}/\vartheta_{\frac{1}{2},\frac{1}{2}}\) is meromorphic, with its only pole located at the \(\mathrm{R}_{\mathrm{M}}\) node of \(\Psi^{s}_{\mathbf{M},1}(\mathbf{r})\), so the product in Eq. 35 is regular and the wave function can be extended to the entire Moire BZ with zero energy. This proves the emergence of the MFBs. We can check whether the numerical calculation is consistent with this analytic wave function. The nodal location of \(\Psi_{\mathbf{k},1}(\mathbf{r})\) is denoted as \(l\mathbf{L}_{\mathrm{I}}+m\mathbf{L}_{\mathrm{II}}\) in the Moire unit cell, abbreviated as (\(l\),\(m\)). Since the node of the \(\vartheta_{a,b}(\nu|\omega)\) function is at \((\frac{1}{2}-b,\frac{1}{2}-a)\), we can deduce that \(l=\frac{1}{2}-b\) and \(m=\frac{1}{2}-a\). With \(\mathbf{k}\) varying from \(\mathbf{M}\) to \(\Gamma\), the nodal point moves continuously from \(\mathrm{R}_{\mathrm{M}}\) to \(\mathrm{R}_{\Gamma}\), as demonstrated by the arrows in Fig. 9(a) and (b). The node locations (\(l\),\(m\)) from the analysis coincide with the ones from the numerical solutions. In addition, the wave functions from Eq. 35 and the numerics are exactly identical, as shown in Fig. 9(d). From Eq. 36 and Fig. 9 we find that, when the momentum points stay on the \(\Gamma-\mathrm{M}\) line, the corresponding real-space zeros stay on the \(\mathrm{R}_{\Gamma}-\mathrm{R}_{\mathrm{M}}\) line, which is rotated by \(\frac{\pi}{2}\) relative to the momentum-space trajectory. This result can also be understood from a symmetry analysis: when the state in momentum space is symmetric about the \(\Gamma-\mathrm{M}\) line, the wave function in real space must be symmetric about the \(\mathrm{R}_{\Gamma}-\mathrm{R}_{\mathrm{M}}\) line, which is rotated by \(\frac{\pi}{2}\) relative to the momentum-space mirror line. So if there is only one zero point, it must stay on the mirror line. The detailed derivation can be found in the supplementary materials Sec. IV.

### Origin of double MFBs in the hexagonal lattice

Here, we show that at the first magic value \(\alpha^{h}_{1,1}=0.156\) in the hexagonal lattice with a quadratic node, the wave functions of the two-fold degenerate MFBs are exactly zero-energy states in the Moire BZ.
Figure 10: (a) and (b) Moiré BZ and Moiré unit cell in the hexagonal lattice. The colored dots in (a) denote the momenta we consider; the colored dots in (b) show the nodal point of the wave function for the momentum of the same color in (a). (c) and (d) The red lines show the norm of the two-component wave functions \(\psi_{\mathbf{k},1}(\mathbf{r})\) at \(\mathbf{k}=\mathbf{M}\) and \(\mathbf{k}=0.6\mathbf{M}\) for MFB No.1 in the Moire unit cell at the magic value \(\alpha^{h}_{1,1}=0.156\); '\(\mathrm{N}_{\mathrm{h}}\)' denotes the moving node. The black circles show the norm of the wave function obtained from Eq. 37, which exactly coincides with the numerical value.

Figure 9: (a) and (b) Moiré BZ and Moiré unit cell in the square lattice. The colored dots in (a) denote the momenta we consider; the colored dots in (b) show the nodal point of the wave function for the momentum of the same color in (a). (c) and (d) The red lines show the norm of the two-component wave functions \(\psi_{\mathbf{k},1}(\mathbf{r})\) at \(\mathbf{k}=\mathbf{M}\) and \(\mathbf{k}=0.6\mathbf{M}\) for MFB No.1 in the Moire unit cell at the magic value \(\alpha^{s}_{1,1}=0.197\). '\(\mathrm{N}_{\mathrm{s}}\)' denotes the moving node. The black dots in (d) are the norm of the two-component wave function obtained from Eq. 35, which exactly coincides with the numerical value.

Similarly, we begin by observing the norm of the wave function \(\psi^{h}_{\mathbf{M},1}(\mathbf{r})\) for MFB No.1 at the \(\mathbf{M}\) point, as shown in Fig. 10(c), which exhibits two linear nodal points located at \(\mathbf{R}_{\Gamma}\) and \(\mathbf{R}^{\prime}_{\mathbf{M}_{3}}\) in the real-space Moire unit cell. Our numerical calculations reveal that the node at \(\mathbf{R}^{\prime}_{\mathbf{M}_{3}}\) moves as the momentum \(\mathbf{k}\) varies (denoted as '\(\mathbf{N}_{\mathrm{h}}\)' in Fig. 10(d)), while the node at \(\mathbf{R}_{\Gamma}\) remains stationary. So again we introduce a theta function in the numerator of \(f_{\mathbf{k}}(z)\) to create the moving node of \(\psi^{h}_{\mathbf{k},1}(\mathbf{r})\), and another theta function in the denominator to cancel the node of \(\psi^{h}_{\mathbf{M},1}(\mathbf{r})\). We can adjust the parameters of the theta functions to ensure that the two-component wave function \(\Psi^{h}_{\mathbf{k}}(\mathbf{r})\) obeys the Bloch boundary conditions on the Moire lattice vectors \(\mathbf{L}_{\mathrm{I},\mathrm{II}}\), namely \(\psi^{h}_{\mathbf{k}}(\mathbf{r}+\mathbf{L}_{\mathrm{I},\mathrm{II}})=e^{i\mathbf{k}\cdot\mathbf{L}_{\mathrm{I},\mathrm{II}}}\psi^{h}_{\mathbf{k}}(\mathbf{r})\). Finally, the analytical expression for the wave function \(\Psi^{h}_{\mathbf{k},1}(\mathbf{r})\) of MFB No.1 is given by

\[\Psi^{h}_{\mathbf{k},1}(\mathbf{r})=\frac{\vartheta_{a,b}(\nu|\omega)}{\vartheta_{0,\frac{1}{2}}(\nu|\omega)}\Psi^{h}_{\mathbf{M},1}(\mathbf{r}), \tag{37}\]

where \(\nu=z/L_{\mathrm{I}}\), \(\omega=\frac{L_{\mathrm{II}}}{L_{\mathrm{I}}}\) and \(L_{\mathrm{I},\mathrm{II}}=(\mathbf{L}_{\mathrm{I},\mathrm{II}})_{x}+i(\mathbf{L}_{\mathrm{I},\mathrm{II}})_{y}\). The rational characteristics \(a\) and \(b\) that satisfy the Bloch boundary conditions are given by

\[a=n_{\mathrm{I}}+\frac{(\mathbf{k}-\mathbf{M})\cdot\mathbf{L}_{\mathrm{I}}}{2\pi},\quad b=n_{\mathrm{II}}+\frac{1}{2}-\frac{(\mathbf{k}-\mathbf{M})\cdot\mathbf{L}_{\mathrm{II}}}{2\pi}, \tag{38}\]

where \(n_{\mathrm{I}}\) and \(n_{\mathrm{II}}\) are arbitrary integers due to the lattice translational symmetry. By construction, \(f_{\mathbf{k}}(z)=\vartheta_{a,b}/\vartheta_{0,\frac{1}{2}}\) is meromorphic, with its only pole located at the \(\mathbf{R}^{\prime}_{\mathbf{M}_{3}}\) node of \(\psi^{h}_{\mathbf{M},1}(\mathbf{r})\), so the product in Eq. 37 is regular and the wave function can be extended to the entire Moire BZ with zero energy.
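To make the theta-function machinery of Eqs. 35-38 concrete, the short sketch below is a minimal check (not from the paper; the exact convention used there is given in the supplementary materials Sec. V). It assumes the standard convention \(\vartheta_{a,b}(\nu|\omega)=\sum_{n\in\mathbb{Z}}e^{i\pi\omega(n+a)^{2}+2\pi i(n+a)(\nu+b)}\), which is consistent with the node position \((\frac{1}{2}-b,\frac{1}{2}-a)\) quoted above; the numerical values of \(\omega\), \(a\), \(b\) and \(\nu\) are placeholders. It verifies two facts used in the construction: the ratio of two theta functions picks up only constant, \(\nu\)-independent phases under \(\nu\to\nu+1\) and \(\nu\to\nu+\omega\) (which is what allows \(a\) and \(b\) to be tuned to the Bloch boundary conditions), and \(\vartheta_{a,b}\) vanishes at \(\nu=(\frac{1}{2}-b)+(\frac{1}{2}-a)\omega\).

```python
import numpy as np

def theta(nu, omega, a, b, N=60):
    """theta_{a,b}(nu|omega) = sum_n exp[i*pi*omega*(n+a)^2 + 2*pi*i*(n+a)*(nu+b)],
    truncated to |n| <= N; the series converges rapidly for Im(omega) > 0."""
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.pi * omega * (n + a) ** 2
                         + 2j * np.pi * (n + a) * (nu + b)))

omega = 1j            # placeholder for L_II / L_I
a, b = 0.37, -0.12    # placeholder characteristics (fixed by Eq. 36 or Eq. 38 in practice)
nu = 0.23 + 0.41j     # generic point in the unit cell

f = lambda z: theta(z, omega, a, b) / theta(z, omega, 0.5, 0.5)

# The ratio acquires only nu-independent phases under lattice translations:
print(f(nu + 1) / f(nu), np.exp(2j * np.pi * (a - 0.5)))        # equal
print(f(nu + omega) / f(nu), np.exp(-2j * np.pi * (b - 0.5)))   # equal

# theta_{a,b} vanishes at nu = (1/2 - b) + (1/2 - a)*omega, i.e. (l, m) = (1/2-b, 1/2-a):
print(abs(theta((0.5 - b) + (0.5 - a) * omega, omega, a, b)))   # ~ 0
```

These constant translation phases are exactly what Eqs. 36 and 38 exploit to enforce \(\psi_{\mathbf{k}}(\mathbf{r}+\mathbf{L}_{\mathrm{I},\mathrm{II}})=e^{i\mathbf{k}\cdot\mathbf{L}_{\mathrm{I},\mathrm{II}}}\psi_{\mathbf{k}}(\mathbf{r})\).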
As the momentum \(\mathbf{k}\) varies in the Moire BZ, the nodal point of \(\psi^{h}_{\mathbf{k},1}(\mathbf{r})\) moves correspondingly in real space. Specifically, with \(\mathbf{k}\) varying from \(\mathbf{M}\) to \(\Gamma\), the nodal point moves continuously from \(\mathbf{R}^{\prime}_{\mathbf{M}_{3}}\) to \(\mathbf{R}_{\Gamma}\), as demonstrated by the arrows in Fig. 10(a) and (b). For example, taking \(\mathbf{k}=0.6\mathbf{M}\), the moving node is located at \(\mathrm{N}_{\mathrm{h}}\) with (\(l=0\), \(m=-0.3\)), which agrees with the analytical expression Eq. 38. In addition, the wave functions from Eq. 37 and the numerics are exactly identical, as shown in Fig. 10(d). As we can see from Eq. 38 and Fig. 10, the trajectory of the momentum points and that of the real-space zeros show the same \(\frac{\pi}{2}\) rotation as in the square lattice. We can apply a similar symmetry analysis as in the square lattice; the detailed derivation can be found in the supplementary materials Sec. IV.

## VII Conclusion

The emergence of absolutely flat bands in TBG is rooted in the evolution of Dirac cones through the interlayer coupling tuned by twisted angles. We extend our investigation beyond linear Dirac nodes by considering higher-ordered topological nodes at various locations within the BZ. In order to search for flat bands, the locations of topological nodes can be classified into the \(\Gamma,~\mathrm{M}\) points in the square lattice and the \(\Gamma,~\mathrm{K},~\mathrm{K}^{\prime}\) points in the hexagonal lattice. Specifically, we find that quadratic and cubic nodes at the \(\Gamma\) point in both square and hexagonal lattices can lead to MFBs at zero energy across the entire Moire BZ. Hexagonal lattices with quartic and quintic nodes at the \(\Gamma\) point also exhibit flat bands. Interestingly, for odd-ordered nodes, these flat bands exist within energy gaps, offering rich platforms for exploring strongly correlated phenomena. Furthermore, the square lattice with a quadratic or cubic node at the M point, and the hexagonal lattice with a linear or quadratic node at the \(\mathrm{K},~\mathrm{K}^{\prime}\) points, can also exhibit MFBs. The linear-node case at \(\mathrm{K},~\mathrm{K}^{\prime}\) is identical to TBG. TBSs include \(n\)-ordered topological nodes in their base layers, described by a \(2\times 2\) off-diagonal Hamiltonian with a direction-dependent \(n\)-phase winding. After assuming that the interlayer coupling retains the same directional dependence as the topological nodes, we note that the detailed hopping form of the interlayer coupling has no impact on the criteria for flat band emergence; rather, the flat bands are primarily determined by the orders and locations of the topological nodes. In our analysis, taking into account nearest-neighbor momentum hopping suffices for predicting the emergence of MFBs. To identify conditions for band flatness, we compute the spectrum of the Birman-Schwinger operator's inverse. Real eigenvalues in this spectrum correspond to magic values (normalized angles) leading to flat bands. Since a state of the spectrum can be extended to two zero-energy states of the bilayer Hamiltonian, the degeneracy number of the flat bands is precisely twice the degeneracy number of the operator spectrum. For cubic and quintic nodes in the hexagonal lattice, the degeneracy number of the flat bands is invariably a multiple of four due to space-time inversion symmetry. In addition, the non-trivial topology of the MFBs is inherited from the non-zero orders of the topological nodes.
In many scenarios, such as quadratic and cubic nodes at \(\Gamma\) in square and hexagonal lattices, some magic values yield flat bands with degeneracy greater than two-fold. The high degeneracy with the delocalization from the non-trivial topology might potentially amplify strongly correlated effects [35]. Strictly speaking, the Moire spectrum plots are not enough to show band flatness in the entire Moire BZ, since the spectra are shown only along high symmetry lines. To fix this, we attach holomorphic functions on the state at zero energy and adjust the parameters of the holomorphic functions to satisfy the periodic boundary condition of the superlattice. This extended state is an eigenstate of the twisted bilayer Hamiltonian with zero energy and the flatness covers the entire Moire BZ. The development of topological nodal semimetals and topological superconductors opens up several promising avenues for band flatness in twisted bilayers. Recent discoveries include quadratic-node semimetals in photonic ring lattices [36], where the quadratic node benefits from symmetry protection. The bilayer of these semimetals can simply form a twisted platform with quadratic semimetals. Additionally, time-reversal symmetric topological superconductors [31; 37] naturally preserve chiral symmetry. In the case of non-trivial topological superconductors with a 3D winding number greater than one, stable high-order nodes can appear on the superconductor surface. Intriguingly, the interface between the two surfaces with the opposite 3D winding numbers can lead to twisted bilayer platforms hosting high-ordered nodes. At magic twisted angles, these two types of twisted platforms can give rise to MFBs. In particular, interesting correlated physics might emerge in these flat bands with superconductivity. _Acknowledgement:_ We thank Zhida Song and Simon Becker for the helpful discussions and comments. J.H. is supported by the Ministry of Science and Technology (Grant No. 2022YFA1403901), the National Natural Science Foundation of China (Grant No. NSFC-11888101), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB28000000, XDB33000000), the New Cornerstone Investigator Program. X.W. is supported by the National Natural Science Foundation of China (Grant No. 12047503). C.-K.C. was supported by JST Presto Grant No. JPMJPR2357.
2309.02116
Homotopification and categorification of Leibniz conformal algebras
Bakalov, Kac and Voronov introduced Leibniz conformal algebras (and their cohomology) as a non-commutative analogue of Lie conformal algebras. Leibniz conformal algebras are closely related to field algebras which are non-skew-symmetric generalizations of vertex algebras. In this paper, we first introduce $Leib_\infty$-conformal algebras (also called strongly homotopy Leibniz conformal algebras) where the Leibniz conformal identity holds up to homotopy. We give some equivalent descriptions of $Leib_\infty$-conformal algebras and characterize some particular classes of $Leib_\infty$-conformal algebras in terms of the cohomology of Leibniz conformal algebras and crossed modules of Leibniz conformal algebras. On the other hand, we also introduce Leibniz conformal $2$-algebras that can be realized as the categorification of Leibniz conformal algebras. Finally, we observe that the category of Leibniz conformal $2$-algebras is equivalent to the category of $2$-term $Leib_\infty$-conformal algebras.
Apurba Das, Anupam Sahoo
2023-09-05T10:50:25Z
http://arxiv.org/abs/2309.02116v1
# Homotopification and categorification of Leibniz conformal algebras ###### Abstract. Bakalov, Kac and Voronov introduced Leibniz conformal algebras (and their cohomology) as a non-commutative analogue of Lie conformal algebras. Leibniz conformal algebras are closely related to field algebras which are non-skew-symmetric generalizations of vertex algebras. In this paper, we first introduce \(Leib_{\infty}\)-conformal algebras (also called strongly homotopy Leibniz conformal algebras) where the Leibniz conformal identity holds up to homotopy. We give some equivalent descriptions of \(Leib_{\infty}\)-conformal algebras and characterize some particular classes of \(Leib_{\infty}\)-conformal algebras in terms of the cohomology of Leibniz conformal algebras and crossed modules of Leibniz conformal algebras. On the other hand, we also introduce Leibniz conformal \(2\)-algebras that can be realized as the categorification of Leibniz conformal algebras. Finally, we observe that the category of Leibniz conformal \(2\)-algebras is equivalent to the category of \(2\)-term \(Leib_{\infty}\)-conformal algebras. 2020 MSC classifications: 17A32, 17B69, 18N40, 18N25. Keywords: Leibniz algebras, \(Leib_{\infty}\)-algebras, Leibniz \(2\)-algebras, Leibniz conformal algebras. ###### Contents * 1 Introduction * 2 Background on Leibniz conformal algebras * 3 Strongly homotopy Leibniz conformal algebras * 4 Skeletal and strict homotopy Leibniz conformal algebras * 5 Leibniz conformal \(2\)-algebras ## 1. Introduction ### Origin of conformal algebras After the seminal work of Belavin, Polyakov and Zamolodchikov [6], conformal field theory has become a fundamental subject with many remarkable connections to mathematics and physics. A rigorous description of chiral algebras (also called vertex algebras by mathematicians) in conformal field theory was proposed by Borcherds [8] and subsequently studied in [11, 13, 14, 17]. The study of Lie conformal algebras was initiated by Kac [17] in view of their intimate connections with \(2\)-dimensional quantum field theory, vertex algebras and infinite dimensional Lie algebras. Namely, a Lie conformal algebra encodes the singular part of the operator product expansion of chiral fields in conformal field theory. To some extent, Lie conformal algebras are related to vertex algebras in the same way Lie algebras are related to their universal enveloping algebras. Structures of finite and simple conformal algebras were extensively studied in [10, 15, 19]. On the other hand, in the study of representations of Lie conformal algebras, the authors in [4] introduced the notion of associative conformal algebras. It is important to remark that representations and cohomology of finite Lie and associative conformal algebras have significant development in the last twenty-five years, see [4, 20, 27] and the references therein. ### Higher structures: homotopy algebras and categorifications Higher structures, such as homotopy algebras (also called \(\infty\)-algebras) and categorifications of algebras play important roles in higher Lie theory, higher gauge theory and conformal field theory. Note that, higher structures appear when we increase the flexibility of an algebraic structure. This can be done in two ways, namely, by homotopification and categorification of given algebraic identity/identities. Homotopy algebras are precisely homotopification of algebras. They can be realized as homotopy invariant extensions of differential graded algebras. 
The notion of a strongly homotopy Lie algebra or an \(L_{\infty}\)-algebra is the homotopification of Lie algebra [22], i.e. Jacobi identity holds up to homotopy. They play a prominent role in deformations of algebraic structures [12], quantization of Poisson manifolds [21] and higher symplectic geometry [24]. On the other hand, 2-algebras are categorifications of algebras. They are obtained when we replace sets (resp. maps, equalities) with categories (resp. functors, natural isomorphisms). Unlike ordinary algebras (where we consider two maps to be equal), here we consider the corresponding functors to be naturally isomorphic. The notion of Lie 2-algebras was introduced by Baez and Crans [2] as the categorification of Lie algebras. They are closely related to the Zamolodchikov tetrahedron equation which can be considered as the categorification of the set-theoretical solutions of the Yang-Baxter equation. It is important to remark that the concept of categorification leads to many surprising results in topological field theory and string theory. It has been observed in [2] that homotopy algebras and 2-algebras are closely connected. More precisely, they showed that the category of 2-term \(L_{\infty}\)-algebras and the category of Lie 2-algebras are equivalent. Among others, they studied skeletal and strict \(L_{\infty}\)-algebras and gave their characterizations. Homotopy algebras and 2-algebras can be generalized to other types of algebraic structures. The notion of a Leibniz algebra (first introduced by Bloh [7] and popularized by Loday [23]) is a non-skew-symmetric generalization of a Lie algebra. Many results that are true for Lie algebras can be generalized to Leibniz algebras. In [1] Ammar and Poncin first introduced the notion of a \(Leib_{\infty}\)-algebra in their study of graded free zinbiel coalgebra. A more concrete description of a \(Leib_{\infty}\)-algebra is given in [18] (see also [9, 28]). On the other hand, Leibniz 2-algebras are considered and their relations with \(Leib_{\infty}\)-algebras are discussed in [26]. ### Layout of the paper Like Leibniz algebras are a non-skew-symmetric analogue of Lie algebras, the concept of Leibniz conformal algebras introduced in [4] forms a non-skew-symmetric analogue of Lie conformal algebras. Leibniz conformal algebras are closely related to field algebras [3], which are non-commutative generalizations of vertex algebras. It has been proved by Zhang [29] that the category of Leibniz conformal algebras is equivalent to the category of equivalence classes of formal distribution Leibniz algebras. Representations and cohomology of Leibniz conformal algebras are studied extensively in [4, 29]. Quadratic Leibniz conformal algebras are considered by Zhou and Hong [31]. In [16] the authors defined a unified product (that simultaneously generalizes twisted product, crossed product and bicrossed product) of Leibniz conformal algebras and introduced some cohomological objects to classify such unified products. Our main aim in this paper is to study homotopification and categorification of Leibniz conformal algebras. Namely, we provide two generalizations of Leibniz conformal algebras by extending flexibility to the Leibniz conformal identity. (i) At first, we introduce \(Leib_{\infty}\)_-conformal algebras_ (also called _strongly homotopy Leibniz conformal algebras_) in which the Leibniz conformal identity holds up to homotopy. 
We observe that \(Leib_{\infty}\)-conformal algebras can be better understood when we shift the underlying graded space. Using this observation, we give a Maurer-Cartan characterization of \(Leib_{\infty}\)-conformal algebras. More precisely, given a graded \(\mathbb{C}[\partial]\)-module \(\mathcal{G}\), we construct a graded Lie algebra whose Maurer-Cartan elements correspond to \(Leib_{\infty}\)-conformal algebra structures on \(\mathcal{G}\). Our graded Lie algebra generalizes Balavoine's graded Lie algebra [5] that characterizes classical Leibniz algebras. As a consequence of our characterization, we define the cohomology of a \(Leib_{\infty}\)-conformal algebra. Then we put special attention to those \(Leib_{\infty}\)-conformal algebras whose underlying graded \(\mathbb{C}[\partial]\)-module \(\mathcal{G}\) is concentrated in arity 0 and 1. We call them \(2\)_-term \(Leib_{\infty}\)-conformal algebras_. The collection of all 2-term \(Leib_{\infty}\)-conformal algebras and homomorphisms between them forms a category, denoted by \(\mathbf{2Leib}_{\infty}\mathbf{Conf}\). Motivated by the results of [2], we consider skeletal and strict \(Leib_{\infty}\)-conformal algebras. Among others, we show that skeletal algebras are characterized by third cocycles of Leibniz conformal algebras and strict algebras are characterized by crossed modules of Leibniz conformal algebras. (ii) In the next, we consider _Leibniz conformal \(2\)-algebras_ as categorification of Leibniz conformal algebras. More precisely, a Leibniz conformal \(2\)-algebra is a category \(C\) (internal to the category of \(\mathbb{C}[\partial]\)-modules) endowed with a \(\mathbb{C}\)-linear conformal sesquilinear functor \([\cdot_{\lambda}]:C\otimes C\to C[[\lambda]]\) (that need not satisfy the Leibniz conformal identity) and a conformal Leibnizator that satisfy some suitable identity which can be described by the commutativity of the diagram (8). We show that the collection of all Leibniz conformal \(2\)-algebras and homomorphisms between them forms a category, denoted by \(\mathbf{LeibConf2}\). We also show that the category \(\mathbf{LeibConf2}\) is equivalent to the category \(\mathbf{2Leib}_{\infty}\mathbf{Conf}\) of \(2\)-term \(Leib_{\infty}\)-conformal algebras. Finally, we give an example of a Leibniz conformal \(2\)-algebra associated with any \(2\)-term complex of finite \(\mathbb{C}[\partial]\)-modules. 
The results of the present paper can be summarized in the following schematic diagram: \[\begin{array}{|c|c|c|}\hline\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\omit\span\omit\span\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit 
## 2. Background on Leibniz conformal algebras

Recall that a (left) _Leibniz algebra_ is a vector space \(\mathfrak{g}\) equipped with a bilinear bracket \([\cdot,\cdot]:\mathfrak{g}\otimes\mathfrak{g}\to\mathfrak{g}\) satisfying the left Leibniz identity

\[[x,[y,z]]=[[x,y],z]+[y,[x,z]],\ \text{for all}\ x,y,z\in\mathfrak{g}. \tag{1}\]

The concept of Leibniz conformal algebras was first considered by Bakalov, Kac and Voronov [4] as the non-skew-symmetric analogue of Lie conformal algebras. They can also be considered as the conformal analogue of Leibniz algebras.

### Definition

A _Leibniz conformal algebra_ is a \(\mathbb{C}[\partial]\)-module \(\mathfrak{g}\) equipped with a \(\mathbb{C}\)-linear bracket (called the \(\lambda\)-bracket)

\[[\cdot_{\lambda}\cdot]:\mathfrak{g}\otimes\mathfrak{g}\to\mathfrak{g}[[\lambda]],\ x\otimes y\mapsto[x_{\lambda}y]\]

that satisfies the conformal sesquilinearity conditions

\[[\partial x_{\lambda}y]=-\lambda[x_{\lambda}y],\ \ \ [x_{\lambda}\partial y]=(\partial+\lambda)[x_{\lambda}y], \tag{2}\]

and the following Leibniz conformal identity:

\[[x_{\lambda}[y_{\mu}z]]=[[x_{\lambda}y]_{\lambda+\mu}z]+[y_{\mu}[x_{\lambda}z]],\ \text{for}\ x,y,z\in\mathfrak{g}. \tag{3}\]

A Leibniz conformal algebra as above may be denoted by the pair \((\mathfrak{g},[\cdot_{\lambda}\cdot])\) or simply by \(\mathfrak{g}\) if the \(\lambda\)-bracket is clear from the context. Since the Leibniz conformal identity (3) is a generalization of the left Leibniz identity (1), a Leibniz conformal algebra as defined above is often called a left Leibniz conformal algebra. Let \(M\) be a \(\mathbb{C}[\partial]\)-module equipped with a \(\lambda\)-bracket. A \(\mathbb{C}\)-linear map \(f:M\to M[[\lambda]],\ m\mapsto f_{\lambda}(m)\) is said to be a _conformal derivation_ if \(f_{\lambda}[m_{\mu}n]=[(f_{\lambda}m)_{\lambda+\mu}n]+[m_{\mu}(f_{\lambda}n)]\), for \(m,n\in M\). It follows that the identity (3) in a Leibniz conformal algebra is equivalent to the fact that all the left translations \([x_{\lambda}-]:\mathfrak{g}\to\mathfrak{g}[[\lambda]]\) are conformal derivations for the \(\lambda\)-bracket. The concept of a right Leibniz conformal algebra can be defined similarly. Throughout the paper, by a Leibniz conformal algebra, we shall always mean a left Leibniz conformal algebra. However, all the results about left Leibniz conformal algebras can be easily generalized to right Leibniz conformal algebras without much work.

### Remark

A Lie conformal algebra is a Leibniz conformal algebra \((\mathfrak{g},[\cdot_{\lambda}\cdot])\) whose \(\lambda\)-bracket is skew-symmetric (i.e. \([x_{\lambda}y]=-[y_{-\partial-\lambda}x]\), for all \(x,y\in\mathfrak{g}\)) and the image of the \(\lambda\)-bracket lies in the space of \(\mathfrak{g}\)-valued polynomials in \(\lambda\) [4].
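For readers who want a quick computational sanity check of identity (3), the following minimal SymPy sketch (not part of the original paper) verifies the Leibniz conformal identity for the Virasoro conformal algebra without central term, \(\mathfrak{g}=\mathbb{C}[\partial]L\) with \([L_{\lambda}L]=(\partial+2\lambda)L\). An element \(p(\partial)L\) is encoded as the polynomial \(p\) in a commuting symbol \(d\), and the \(\lambda\)-bracket is extended by the sesquilinearity rules (2).

```python
import sympy as sp

d, lam, mu = sp.symbols('d lambda mu')   # d plays the role of the formal symbol ∂

def bracket(p, q, lv):
    """[ (p(∂)L)_{lv} (q(∂)L) ] for [L_λ L] = (∂ + 2λ)L, using the rules (2):
    ∂ in the left entry acts as -lv, ∂ in the right entry acts as ∂ + lv."""
    return sp.expand(p.subs(d, -lv) * q.subs(d, d + lv) * (d + 2 * lv))

L = sp.Integer(1)   # the generator L, i.e. the constant polynomial 1 in ∂

# Identity (3): [x_λ [y_μ z]] = [[x_λ y]_{λ+μ} z] + [y_μ [x_λ z]] with x = y = z = L
lhs = bracket(L, bracket(L, L, mu), lam)
rhs = bracket(bracket(L, L, lam), L, lam + mu) + bracket(L, bracket(L, L, lam), mu)
print(sp.expand(lhs - rhs))   # 0
```

Replacing the generators by arbitrary elements \(p(\partial)L\) (for instance \(\partial L\)) returns \(0\) as well, since the bracket in the sketch is built from the generator bracket and sesquilinearity.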
Thus, Lie conformal algebras are natural examples of Leibniz conformal algebras. Representations of Lie conformal algebras also induce Leibniz conformal algebras. Recall from [4] that a representation of a Lie conformal algebra \((\mathfrak{g},[\cdot_{\lambda}\cdot])\) is a \(\mathbb{C}[\partial]\)-module \(M\) equipped with a \(\mathbb{C}\)-linear map (called the \(\lambda\)-action) \(\cdot_{\lambda}\cdot:\mathfrak{g}\otimes M\to M[\lambda],\,x\otimes v\mapsto x _{\lambda}v\) that satisfies \((\partial x)_{\lambda}v=-\lambda x_{\lambda}v\), \(x_{\lambda}(\partial v)=(\partial+\lambda)x_{\lambda}v\) and \[(x_{\lambda}y)_{\lambda+\mu}v=x_{\lambda}(y_{\mu}v)-y_{\mu}(x_{\lambda}v),\ \text{for all}\ x,y\in\mathfrak{g},v\in M.\] If \((\mathfrak{g},[\cdot_{\lambda}\cdot])\) is a Lie conformal algebra and \(M\) is a representation, then the graded \(\mathbb{C}[\partial]\)-module \(\mathfrak{g}\oplus M\) inherits a Leibniz conformal algebra structure with the \(\lambda\)-bracket \[\{(x,u)_{\lambda}(y,v)\}=([x_{\lambda}y],x_{\lambda}v),\ \text{for}\ (x,u),(y,v)\in \mathfrak{g}\oplus M.\] Let \((\mathfrak{g},[\cdot_{\lambda}\cdot])\) be a Leibniz conformal algebra. For each \(j\geq 0\), we define the \(j\)-th product on \(\mathfrak{g}\) by a \(\mathbb{C}\)-linear map \(\cdot_{(j)}:\mathfrak{g}\otimes\mathfrak{g}\to\mathfrak{g}\) that satisfies \[[x_{\lambda}y]=\sum_{j\geq 0}\frac{\lambda^{j}}{j!}\big{(}x_{(j)}y\big{)},\ \text{for all}\ x,y\in\mathfrak{g}.\] Since \([x_{\lambda}y]\) is a \(\mathfrak{g}\)-valued formal power series in \(\lambda\) with possibly infinitely many terms, there are infinitely many \(j\)'s for which \(x_{(j)}y\neq 0\). In terms of the \(j\)-th products, the identities (2) and (3) are equivalent to the followings: for all \(x,y,z\in\mathfrak{g}\) and \(j,m,n\geq 0\), \[(\partial x)_{(j)}y=-jx_{(j-1)}y,\ \ \ x_{(j)}(\partial y)= \partial(x_{(j)}y)+jx_{(j-1)}y,\] \[x_{(m)}(y_{(n)}z)=\sum_{k=0}^{m}\binom{m}{k}(x_{(k)}y)_{(m+n-k) }z+y_{(n)}(x_{(m)}z).\] ### Remark Note that the concept of Leibniz conformal algebras can be generalized to the graded context. More precisely, a _graded Leibniz conformal algebra_ is a pair \((\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i},[\cdot\cdot\cdot])\) of a graded \(\mathbb{C}[\partial]\)-module \(\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i}\) with a degree \(0\) conformal sesquilinear map \([\cdot\cdot\cdot]:\mathcal{G}\otimes\mathcal{G}\to\mathcal{G}[[\lambda]]\) that satisfies the graded version of the identity (3), namely, \[[x_{\lambda}[y_{\mu}z]]=[[x_{\lambda}y]_{\lambda+\mu}z]+(-1)^{|x||y|}[y_{\mu}[x _{\lambda}z]],\text{ for homogeneous }x,y,z\in\mathcal{G}\] It is further called a _differential graded Leibniz conformal algebra_ if there exists a degree \(-1\)\(\mathbb{C}[\partial]\)-linear map \(d:\mathcal{G}\to\mathcal{G}\) that satisfies \(d^{2}=0\) and \(d[x_{\lambda}y]=[(dx)_{\lambda}y]+(-1)^{|x|}[x_{\lambda}(dy)],\) for all \(x,y\in\mathcal{G}\). ### Definition Let \((\mathfrak{g},[\cdot\cdot\cdot])\) be a Leibniz conformal algebra. 
A _representation_ of \((\mathfrak{g},[\cdot\cdot\cdot])\) is given by a \(\mathbb{C}[\partial]\)-module \(M\) equipped with two \(\mathbb{C}\)-linear maps (called the left and right \(\lambda\)-actions, respectively) \[\cdot\cdot\cdot:\mathfrak{g}\otimes M\to M[[\lambda]],\,x\otimes v\mapsto x _{\lambda}v\quad\text{ and }\quad\cdot\cdot\cdot:M\otimes\mathfrak{g}\to M[[\lambda]],\,v\otimes x \mapsto v_{\lambda}x\] satisfying the following set of identities: for any \(x,y\in\mathfrak{g}\) and \(v\in M\), \[(\partial x)_{\lambda}v=-\lambda(x_{\lambda}v),\,\,x_{\lambda}( \partial v)=(\partial+\lambda)x_{\lambda}v,\] \[(\partial v)_{\lambda}x=-\lambda(v_{\lambda}x),\,\,v_{\lambda} (\partial x)=(\partial+\lambda)v_{\lambda}x,\] \[x_{\lambda}(y_{\mu}v)=[x_{\lambda}y]_{\lambda+\mu}v+y_{\mu}(x_{ \lambda}v),\] \[x_{\lambda}(v_{\mu}y)=(x_{\lambda}v)_{\lambda+\mu}y+v_{\mu}[x_{ \lambda}y],\] \[v_{\lambda}[x_{\mu}y]=(v_{\lambda}x)_{\lambda+\mu}y+x_{\mu}(v_{ \lambda}y).\] We denote a representation as above simply by \(M\) when the left and right \(\lambda\)-actions are clear from the context. It follows from the above definition that any Leibniz conformal algebra \((\mathfrak{g},[\cdot\cdot\cdot])\) can be regarded as a representation of itself, where both the left and right \(\lambda\)-actions are given by the \(\lambda\)-bracket. This is called the _adjoint representation_. Let \(\mathfrak{g}\) and \(M\) be two \(\mathbb{C}[\partial]\)-modules (not necessarily equipped with any additional structures). For any \(n\geq 1\), a \(\mathbb{C}\)-linear map \[\varphi:\mathfrak{g}^{\otimes n}\to M[[\lambda_{1},\ldots\lambda_{n-1}]],\, \varphi(x_{1},\ldots,x_{n})\mapsto\varphi_{\lambda_{1},\ldots\lambda_{n-1}}(x_{ 1},\ldots,x_{n})\] is said to be _conformal sesquilinear_ if the following conditions are hold: for all \(x_{1},\ldots,x_{n}\in\mathfrak{g}\), \[\varphi_{\lambda_{1},\ldots\lambda_{n-1}}(x_{1},\ldots,\partial x _{i},\ldots,x_{n})=-\lambda_{i}\,\,\varphi_{\lambda_{1},\ldots\lambda_{n-1}}(x _{1},\ldots,x_{n}),\text{ for }1\leq i\leq n-1,\] \[\varphi_{\lambda_{1},\ldots\lambda_{n-1}}(x_{1},\ldots,x_{n-1}, \partial x_{n})=(\partial+\lambda_{1}+\cdots+\lambda_{n-1})\,\,\varphi_{ \lambda_{1},\ldots\lambda_{n-1}}(x_{1},\ldots,x_{n}).\] We denote the space of all such conformal sesquilinear maps by \(\operatorname{Hom}_{cs}(\mathfrak{g}^{\otimes n},M[[\lambda_{1},\ldots, \lambda_{n-1}]]).\) Note that, for \(n=1\), we have \(\operatorname{Hom}_{cs}(\mathfrak{g},M)=\operatorname{Hom}_{\mathbb{C}[ \partial]}(\mathfrak{g},M)\) the space of all \(\mathbb{C}[\partial]\)-linear maps from \(\mathfrak{g}\) to \(M\). Let \((\mathfrak{g},[\cdot\cdot\cdot])\) be a Leibniz conformal algebra and \(M\) be a representation. 
For each \(n\geq 0\), we define the space \(C^{n}_{\operatorname{cLeib}}(\mathfrak{g},M)\) of \(n\)-cochains by \[C^{n}_{\operatorname{cLeib}}(\mathfrak{g},M)=\begin{cases}M/\partial M&\text{ if }n=0,\\ \operatorname{Hom}_{cs}(\mathfrak{g}^{\otimes n},M[[\lambda_{1},\ldots,\lambda_{n-1}]] )&\text{ if }n\geq 1.\end{cases}\] Then there is a map \(\delta:C^{n}_{\operatorname{cLeib}}(\mathfrak{g},M)\to C^{n+1}_{ \operatorname{cLeib}}(\mathfrak{g},M)\) given by \[\delta(v+\partial M)(x) =(-v_{\lambda}x)|_{\lambda=0},\text{ for }v+\partial M\in M/ \partial M\text{ and }x\in\mathfrak{g},\] \[(\delta\varphi)_{\lambda_{1},\ldots,\lambda_{n}}(x_{1},\ldots,x_{ n+1}) =\sum_{i=1}^{n}(-1)^{i+1}\,\,x_{i\lambda_{i}}\big{(}\varphi_{\lambda_{1},\ldots, \widehat{\lambda_{i}},\ldots,\lambda_{n}}(x_{1},\ldots,\widehat{x_{i}},\ldots,x_ {n+1})\big{)}\] \[\quad+(-1)^{n+1}\big{(}\varphi_{\lambda_{1},\ldots,\lambda_{n-1}}( x_{1},\ldots,x_{n})\big{)}_{\lambda_{1}+\cdots+\lambda_{n}}x_{n+1}\] \[\quad+\sum_{1\leq i<j\leq n+1}(-1)^{i}\,\,\varphi_{\lambda_{1}, \ldots,\widehat{\lambda_{i}},\ldots,\lambda_{j-1},\lambda_{i}+\lambda_{j},\ldots, \lambda_{n}}(x_{1},\ldots,\widehat{x_{i}},\ldots,x_{j-1},[x_{i\lambda_{i}}x_{j}],\ldots,x_{n+1}),\] for \(\varphi\in C^{n\geq 1}_{\mathrm{cLeib}}(\mathfrak{g},M)\) and \(x_{1},\ldots,x_{n+1}\in\mathfrak{g}\). It has been shown in [29] that \(\delta^{2}=0\). In other words \(\{C^{\bullet}_{\mathrm{cLeib}}(\mathfrak{g},M),\delta\}\) is a cochain complex. The corresponding cohomology groups are called the _cohomology_ of the Leibniz conformal algebra \((\mathfrak{g},[\cdot\cdot])\) with coefficients in the representation \(M\). ## 3. Strongly homotopy Leibniz conformal algebras The notion of \(Leib_{\infty}\)-algebras was first introduced by Ammar and Poncin [1] in the operadic study of Leibniz algebras (see also [18]). In this section, we introduce the concept of \(Leib_{\infty}\)-conformal algebras as the homotopification of Leibniz conformal algebras. Given a graded \(\mathbb{C}[\partial]\)-module \(\mathcal{G}\), we construct a graded Lie algebra whose Maurer-Cartan elements correspond to \(Leib_{\infty}\)-conformal algebra structures on \(\mathcal{G}\). We end this section by considering the cohomology of \(Leib_{\infty}\)-conformal algebras. **3.1 Definition**.: A \(Leib_{\infty}\)-_algebra_ is a pair \((\mathcal{G}=\oplus_{i\in 2}\mathcal{G}_{i},\{\pi_{k}\}_{k\geq 1})\) consisting of a graded vector space \(\mathcal{G}=\oplus_{i\in 2}\mathcal{G}_{i}\) equipped with a sequence of graded linear maps \(\{\pi_{k}:\mathcal{G}^{\otimes k}\rightarrow\mathcal{G}\}_{k\geq 1}\) with \(\deg(\pi_{k})=k-2\) for \(k\geq 1\), such that for any \(n\geq 1\), \[\sum_{k+l=n+1}\sum_{i=1}^{k}\sum_{\sigma\in\mathrm{Sh}(i-1,l-1)} \varepsilon(\sigma)\ \mathrm{sgn}(\sigma)\ (-1)^{(k-i-1)(l-1)}(-1)^{l(|x_{\sigma(1)}|+\cdots+|x_{\sigma(i-1)}|)}\] \[\pi_{k}\big{(}x_{\sigma(1)},\ldots,x_{\sigma(i-1)},\pi_{l}\big{(} x_{\sigma(i)},\ldots,x_{\sigma(i+l-2)},x_{i+l-1}\big{)},x_{i+l},\ldots,x_{n}\big{)}=0.\] The concept of \(Leib_{\infty}\)-algebras is the homotopification of Leibniz algebras. More precisely, a \(Leib_{\infty}\)-algebra whose underlying graded vector space is concentrated only in degree \(0\) is nothing but a Leibniz algebra. In [1], the authors showed that \(Leib_{\infty}\)-algebras can be better understood if we shift the degree of the underlying graded vector space. Using such observation, one can also give a Maurer-Cartan characterization of \(Leib_{\infty}\)-algebras (see also [9, 28]). 
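As a degree-\(0\) sanity check of the remark above (a \(Leib_{\infty}\)-algebra concentrated in degree \(0\) is simply a Leibniz algebra), the small numerical sketch below (not from the paper; the structure constants are a standard toy example) verifies the left Leibniz identity (1) for the \(2\)-dimensional non-Lie Leibniz algebra with basis \(\{e_{0},e_{1}\}\) and the only non-zero bracket \([e_{0},e_{0}]=e_{1}\):

```python
import itertools
import numpy as np

# Structure constants: [e_i, e_j] = sum_k c[i, j, k] e_k, with [e_0, e_0] = e_1
# and all other brackets zero.
c = np.zeros((2, 2, 2))
c[0, 0, 1] = 1.0

def br(u, v):
    return np.einsum('i,j,ijk->k', u, v, c)

e = np.eye(2)
# Left Leibniz identity (1): [a, [b, x]] = [[a, b], x] + [b, [a, x]] on all basis triples
ok = all(np.allclose(br(e[i], br(e[j], e[k])),
                     br(br(e[i], e[j]), e[k]) + br(e[j], br(e[i], e[k])))
         for i, j, k in itertools.product(range(2), repeat=3))
print(ok)   # True
```

Since \([e_{0},e_{0}]\neq 0\), the bracket is not skew-symmetric, so this Leibniz algebra is not a Lie algebra.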
Let \(\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i}\) be a graded \(\mathbb{C}[\partial]\)-module. Then for any indeterminates \(\lambda_{1},\ldots,\lambda_{k-1}\) (for \(k\geq 1\)), the space \(\mathcal{G}[[\lambda_{1},\ldots,\lambda_{k-1}]]\) is a graded vector space over \(\mathbb{C}\) with the grading \[\mathcal{G}[[\lambda_{1},\ldots,\lambda_{k-1}]]=\oplus_{i\in\mathbb{Z}} \mathcal{G}_{i}[[\lambda_{1},\ldots,\lambda_{k-1}]].\] **3.2 Definition**.: A \(Leib_{\infty}\)-_conformal algebra_ (also called a _strongly homotopy Leibniz conformal algebra_) is a pair \((\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i},\{\rho_{k}\}_{k\geq 1})\) consisting of a graded \(\mathbb{C}[\partial]\)-module \(\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i}\) with a collection \[\{\rho_{k}:\mathcal{G}^{\otimes k}\rightarrow\mathcal{G}[[\lambda_{1},\ldots, \lambda_{k-1}]],\ x_{1}\otimes\cdots\otimes x_{k}\mapsto(\rho_{k})_{\lambda_{1},\ldots,\lambda_{k-1}}(x_{1},\ldots,x_{k})\}_{k\geq 1}\] of graded \(\mathbb{C}\)-linear maps with \(\deg(\rho_{k})=k-2\) for \(k\geq 1\), satisfying the following conditions: - each \(\rho_{k}\) is conformal sesquilinear, i.e. \[(\rho_{k})_{\lambda_{1},\ldots,\lambda_{k-1}}(x_{1},\ldots,\partial x _{i},\ldots,x_{k-1},x_{k})= \ -\lambda_{i}(\rho_{k})_{\lambda_{1},\ldots,\lambda_{k-1}}(x_{1},\ldots,x_{k}), \ \text{for}\ 1\leq i\leq k-1,\] \[(\rho_{k})_{\lambda_{1},\ldots,\lambda_{k-1}}(x_{1},\ldots,x_{k-1},\partial x_{k})= \ (\partial+\lambda_{1}+\cdots+\lambda_{k-1})\ (\rho_{k})_{\lambda_{1},\ldots,\lambda_{k-1}}(x_{1},\ldots,\partial x_{i},\ldots,x_{k}),\] - for each \(n\geq 1\) and homogeneous elements \(x_{1},\ldots,x_{n}\in\mathcal{G}\), \[\sum_{k+l=n+1}\sum_{i=1}^{k}\sum_{\sigma\in\mathrm{Sh}(i-1,l-1)} \varepsilon(\sigma)\ \mathrm{sgn}(\sigma)\ (-1)^{(k-i-1)(l-1)}(-1)^{l(|x_{\sigma(1)}|+\cdots+|x_{\sigma(i-1)}|)} \tag{4}\] \[(\rho_{k})_{\lambda_{\sigma(1)},\ldots,\lambda_{\sigma(i-1)}, \lambda_{\sigma(i)}+\cdots+\lambda_{\sigma(i+l-2)}+\lambda_{i+l-1},\ldots, \lambda_{n-1}}\big{(}x_{\sigma(1)},\ldots,x_{\sigma(i-1)},\] \[(\rho_{l})_{\lambda_{\sigma(i)},\ldots,\lambda_{\sigma(i+l-2)}} \big{(}x_{\sigma(i)},\ldots,x_{\sigma(i+l-2)},x_{i+l-1}\big{)},x_{i+l},\ldots,x_ {n}\big{)}=0.\] The above identities (4), called conformal Leibnizator identities, have meaningful interpretations in low values of \(n\). For instance, when \(n=1\), it says that the degree \(-1\)\(\mathbb{C}[\partial]\)-linear map \(\rho_{1}:\mathcal{G}\rightarrow\mathcal{G}\) satisfies \((\rho_{1})^{2}=0\). In other words, \((\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i},\rho_{1})\) is a chain complex in the category of \(\mathbb{C}[\partial]\)-modules. For \(n=2\), it says that the degree \(0\) conformal sesquilinear map (which we call the \(\lambda\)-bracket in this case) \(\rho_{2}:\mathcal{G}\otimes\mathcal{G}\rightarrow\mathcal{G}[[\lambda]]\), \(x\otimes y\mapsto(\rho_{2})_{\lambda}(x,y)=[x_{\lambda}y]\) satisfies \[\rho_{1}[x_{\lambda}y]=[\rho_{1}(x)_{\lambda}y]+(-1)^{|x|}[x_{\lambda}\rho_{1}(y )],\] for all homogeneous elements \(x,y\in\mathcal{G}\). 
Similarly, for \(n=3\), the identity (4) simply means \[[x_{\lambda}[y_{\mu}z]]- [[x_{\lambda}y]_{\lambda+\mu}z]-(-1)^{|x||y|}[y_{\mu}[x_{\lambda}z]] =(\rho_{1})\big{(}(\rho_{3})_{\lambda,\mu}(x,y,z)\big{)}\] \[+(\rho_{3})_{\lambda,\mu}\big{(}\rho_{1}(x),y,z\big{)}+(-1)^{|x|} (\rho_{3})_{\lambda,\mu}\big{(}x,\rho_{1}(y),z\big{)}+(-1)^{|x|+|y|}(\rho_{3})_ {\lambda,\mu}\big{(}x,y,\rho_{1}(z)\big{)},\] for \(x,y,z\in\mathcal{G}.\) This shows that the \(\lambda\)-bracket \([\cdot_{\lambda}\cdot]\) satisfies the graded Leibniz conformal identity up to an exact term of \(\rho_{3}\) (i.e. up to homotopy). Similarly, we obtain higher identities for higher values of \(n\). ### Example Any Leibniz conformal algebra \((\mathfrak{g},[\cdot_{\lambda}\cdot])\) is a \(Leib_{\infty}\)-conformal algebra whose underlying graded \(\mathbb{C}[\partial]\)-module is \(\mathfrak{g}\) concentrated in degree \(0\). ### Example Any graded Leibniz conformal algebra \((\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i},[\cdot_{\lambda}\cdot])\) is a \(Leib_{\infty}\)-conformal algebra, where \((\rho_{2})_{\lambda}=[\cdot_{\lambda}\cdot]\) and \(\rho_{k}=0\) for \(k\neq 2\). Similarly, a differential graded Leibniz conformal algebra \((\mathcal{G},[\cdot_{\lambda}\cdot],d)\) is a \(Leib_{\infty}\)-conformal algebra, where \[\rho_{1}=d,\ (\rho_{2})_{\lambda}=[\cdot_{\lambda}\cdot]\ \ \text{and}\ \ \rho_{k}=0\ \ \text{for}\ k\geq 3.\] ### Example Let \(\mathfrak{g},\mathfrak{h}\) be two Leibniz conformal algebras and \(f:\mathfrak{g}\to\mathfrak{h}\) be a morphism of Leibniz conformal algebras (i.e. \(f:\mathfrak{g}\to\mathfrak{h}\) is a \(\mathbb{C}[\partial]\)-linear map satisfying \(f([x_{\lambda}y]^{\mathfrak{g}})=[f(x)_{\lambda}f(y)]^{\mathfrak{h}}\), for all \(x,y\in\mathfrak{g}\)). Then it follows that \(\mathrm{ker}f\) is a \(\mathbb{C}[\partial]\)-module. Moreover, the graded \(\mathbb{C}[\partial]\)-module \(\mathcal{G}=\mathfrak{g}\oplus\mathrm{ker}f\) (where \(\mathfrak{g}\) is concentrated in arity \(0\) and \(\mathrm{ker}f\) is concentrated in arity \(1\)) inherits a \(Leib_{\infty}\)-conformal algebra structure with the operations \[\rho_{1}:=i:\mathrm{ker}f\hookrightarrow\mathfrak{g},\ (\rho_{2})_{\lambda}(x,y):=[x_{ \lambda}y]^{\mathfrak{g}}\ \text{for}\ x,y\in\mathfrak{g}\ \text{or}\ \text{in}\ \mathrm{ker}f,\ \text{and}\ \rho_{k}=0\ \text{for}\ k\geq 3.\] In [25] the authors introduced the notion of \(L_{\infty}\)-conformal algebras that are the conformal analogues of \(L_{\infty}\)-algebras and homotopification of Lie conformal algebras. It turns out that a \(Leib_{\infty}\)-conformal algebra \((\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i},\{\rho_{k}\}_{k\geq 1})\) in which (i) the structure maps \(\mu_{k}\)'s are graded skew-symmetric in the sense that \[(\rho_{k})_{\lambda_{1},\ldots,\lambda_{k-1}}(x_{1},\ldots,x_{k})=\varepsilon( \sigma)\ \text{sgn}(\sigma)(\rho_{k})_{\lambda_{\sigma(1)},\ldots,\lambda_{\sigma(k-1)}} \big{(}x_{\sigma(1)},\ldots,x_{\sigma(k-1)},x_{\sigma(k)}\big{)}\bigg{|}_{ \lambda_{k}\mapsto\lambda_{k}^{\dagger}},\ \text{for any}\ \sigma\in S_{k}\] (here the notation \(\lambda_{k}\mapsto\lambda_{k}^{\dagger}\) means that \(\lambda_{k}\) is replaced by \(\lambda_{k}^{\dagger}=-\partial-\sum_{i=1}^{k-1}\lambda_{i}\), if it occurs and \(\partial\) moved to the left), (ii) for each \(k\geq 1\), the image of the map \(\rho_{k}\) lies in the space of \(\mathfrak{g}\)-valued polynomials in \(k-1\) indeterminates \(\lambda_{1},\ldots,\lambda_{k-1}\) (i.e. 
\(\mathrm{Im}(\rho_{k})\subset\mathcal{G}[\lambda_{1},\ldots,\lambda_{k-1}]\) for all \(k\geq 1\)), is nothing but a \(L_{\infty}\)-conformal algebra. Thus, a \(Leib_{\infty}\)-conformal algebra is more general than an \(L_{\infty}\)-conformal algebra where the graded skew-symmetry condition and the polynomial condition of the structure maps are relaxed. It is important to remark that by removing all \(\lambda_{i}\)'s and \(\partial\)'s from the definition of a \(Leib_{\infty}\)-conformal algebra, one simply gets a \(Leib_{\infty}\)-algebra. Hence \(Leib_{\infty}\)-conformal algebras can be realized as the conformal analogue of \(Leib_{\infty}\)-algebras. In [1] (see also [9, 28]) the authors gave a simple description of a \(Leib_{\infty}\)-algebra. Namely, they first considered the notion of \(Leib_{\infty}[1]\)-algebras (which are much more convenient than \(Leib_{\infty}\)-algebras) and showed that they are equivalent to \(Leib_{\infty}\)-algebras when we shift the underlying graded space. In the following, we will adapt their approach in the context of conformal algebras. We start with the following definition. ### Definition A \(Leib_{\infty}[1]\)_-conformal algebra_ is a pair \((\mathcal{H}=\oplus_{i\in\mathbb{Z}}\mathcal{H}_{i},\{\varrho_{k}\}_{k\geq 1})\) of a graded \(\mathbb{C}[\partial]\)-module \(\mathcal{H}=\oplus_{i\in\mathbb{Z}}\mathcal{H}_{i}\) with a collection of degree \(-1\) graded \(\mathbb{C}\)-linear maps \[\{\varrho_{k}:\mathcal{H}^{\otimes k}\to\mathcal{H}[[\lambda_{1},\ldots, \lambda_{k-1}]],\ h_{1}\otimes\cdots\otimes h_{k}\mapsto(\varrho_{k})_{ \lambda_{1},\ldots,\lambda_{k-1}}(h_{1},\ldots,h_{k})\}_{k\geq 1}\] that satisfy the following conditions: - each \(\varrho_{k}\) is conformal sesquilinear, - for any \(n\geq 1\) and homogeneous elements \(h_{1},\dots,h_{n}\in\mathcal{H}\), \[\sum_{k+l=n+1} \sum_{i=1}^{k}\sum_{i=1}^{k}\sum_{\sigma\in\mathrm{Sh}(i-1,l-1)} \varepsilon(\sigma)\ (-1)^{|h_{\sigma(1)}|+\cdots+|h_{\sigma(i-1)}|} \tag{5}\] \[(\varrho_{k})_{\lambda_{\sigma(1)},\dots,\lambda_{\sigma(i-1)}, \lambda_{\sigma(i)}+\cdots+\lambda_{\sigma(i+l-2)}+\lambda_{i+l-1},\dots, \lambda_{n-1}}\big{(}h_{\sigma(1)},\dots,h_{\sigma(i-1)},\] \[(\varrho_{l})_{\lambda_{\sigma(i)},\dots,\lambda_{\sigma(i+l-2)} }\big{(}h_{\sigma(i)},\dots,h_{\sigma(i+l-2)},h_{i+l-1}\big{)},h_{i+l},\dots,h _{n}\big{)}=0.\] If \((\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i},[\cdot\cdot\cdot],d)\) is a differential graded Leibniz conformal algebra, then the shifted graded \(\mathbb{C}[\partial]\)-module \(\mathcal{G}[-1]=\oplus_{i\in\mathbb{Z}}\mathcal{G}[-1])_{i}=\oplus_{i\in \mathbb{Z}}\mathcal{G}_{i-1}\) inherits a \(Leib_{\infty}[1]\)-conformal algebra with the operations \(\{\varrho_{k}:\mathcal{G}[-1]^{\otimes k}\rightarrow\mathcal{G}[-1][[\lambda_ {1},\dots,\lambda_{k-1}]]\}_{k\geq 1}\) given by \[\varrho_{1}(x):=(s\circ d)(s^{-1}(x)),\ (\varrho_{2})_{\lambda}(x,y):=(-1)^{|x|}s[( s^{-1}x)_{\lambda}(s^{-1}y)]\ \ \text{and}\ \ \varrho_{k}=0\ \text{for}\ k\geq 3.\] Here \(s:\mathcal{G}\rightarrow\mathcal{G}[-1]\) is the degree \(+1\) map that identifies \(\mathcal{G}\) with the shifted space \(\mathcal{G}[-1]\), and \(s^{-1}:\mathcal{G}[-1]\rightarrow\mathcal{G}\) is the degree \(-1\) map inverses to \(s\). This is not a surprising result as we have the following general result whose proof is similar to the classical case [1]. **3.7 Theorem**.: _Let \(\mathcal{G}=\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i}\) be a graded \(\mathbb{C}[\partial]\)-module. 
Then there is a one-to-one correspondence between \(Leib_{\infty}\)-conformal algebra structures on \(\mathcal{G}\) and \(Leib_{\infty}[1]\)-conformal algebra structures on \(\mathcal{H}=\mathcal{G}[-1]\)._ Let \(\mathcal{H}=\oplus_{i\in\mathbb{Z}}\mathcal{H}_{i}\) be a graded \(\mathbb{C}[\partial]\)-module. For any integer \(n\in\mathbb{Z}\) and natural number \(k\in\mathbb{N}\), let \(\mathrm{Hom}^{n}_{cs}\big{(}\mathcal{H}^{\otimes k},\mathcal{H}[[\lambda_{1},\dots,\lambda_{k-1}]]\big{)}\) be the space of all conformal sesquilinear maps \(\varphi_{k}:\mathcal{H}^{\otimes k}\rightarrow\mathcal{H}[[\lambda_{1},\dots,\lambda_{k-1}]]\) with \(\deg(\varphi_{k})=n\). We define \[C^{n}_{cs}(\mathcal{H})=\oplus_{k\geq 1}\mathrm{Hom}^{n}_{cs}\big{(}\mathcal{H}^{\otimes k},\mathcal{H}[[\lambda_{1},\dots,\lambda_{k-1}]]\big{)}.\] Thus, an element \(\varphi\in C^{n}_{cs}(\mathcal{H})\) is given by a sum \(\varphi=\sum_{k\geq 1}\varphi_{k}\), where \(\varphi_{k}\in\mathrm{Hom}^{n}_{cs}\big{(}\mathcal{H}^{\otimes k},\mathcal{H}[[\lambda_{1},\dots,\lambda_{k-1}]]\big{)}\) for \(k\geq 1\). Note that the graded space \(C^{\bullet}_{cs}(\mathcal{H})=\oplus_{n\in\mathbb{Z}}C^{n}_{cs}(\mathcal{H})\) inherits a graded Lie bracket given by \[[\![\sum_{k\geq 1}\varphi_{k},\sum_{l\geq 1}\psi_{l}]\!]=\sum_{p\geq 1}\sum_{k+l=p+1}(\varphi_{k}\diamond\psi_{l}-(-1)^{mn}\psi_{l}\diamond\varphi_{k}),\ \text{where}\] \[(\varphi_{k}\diamond\psi_{l})_{\lambda_{1},\dots,\lambda_{p-1}}(h_{1},\dots,h_{p})\] \[=\sum_{i=1}^{k}\sum_{\sigma\in\mathrm{Sh}(i-1,l-1)}\varepsilon(\sigma)(-1)^{m(|h_{\sigma(1)}|+\cdots+|h_{\sigma(i-1)}|)}\] \[(\varphi_{k})_{\lambda_{\sigma(1)},\dots,\lambda_{\sigma(i-1)},\lambda_{\sigma(i)}+\cdots+\lambda_{\sigma(i+l-2)}+\lambda_{i+l-1},\dots,\lambda_{p-1}}\big{(}h_{\sigma(1)},\dots,h_{\sigma(i-1)},\] \[(\psi_{l})_{\lambda_{\sigma(i)},\dots,\lambda_{\sigma(i+l-2)}}\big{(}h_{\sigma(i)},\dots,h_{\sigma(i+l-2)},h_{i+l-1}\big{)},h_{i+l},\dots,h_{p}\big{)},\] for \(\varphi=\sum_{k\geq 1}\varphi_{k}\in C^{n}_{cs}(\mathcal{H})\) and \(\psi=\sum_{l\geq 1}\psi_{l}\in C^{m}_{cs}(\mathcal{H})\). Therefore, \((C^{\bullet}_{cs}(\mathcal{H}),[\![\cdot,\cdot]\!])\) is a graded Lie algebra. This graded Lie algebra is the conformal analogue of the graded Lie algebra considered by Balavoine [5], which characterizes Leibniz algebra structures as Maurer-Cartan elements. In the present context, we have the following result. **3.8 Theorem**.: _Let \(\mathcal{H}=\oplus_{i\in\mathbb{Z}}\mathcal{H}_{i}\) be a graded \(\mathbb{C}[\partial]\)-module and \(\{\varrho_{k}:\mathcal{H}^{\otimes k}\rightarrow\mathcal{H}[[\lambda_{1},\dots,\lambda_{k-1}]]\}_{k\geq 1}\) be a collection of degree \(-1\) conformal sesquilinear maps. Then the pair \((\mathcal{H},\{\varrho_{k}\}_{k\geq 1})\) is a \(Leib_{\infty}[1]\)-conformal algebra if and only if the element \(\varrho=\sum_{k\geq 1}\varrho_{k}\in C^{-1}_{cs}(\mathcal{H})\) is a Maurer-Cartan element in the graded Lie algebra \((C^{\bullet}_{cs}(\mathcal{H}),[\![\cdot,\cdot]\!])\)._ Proof.: Note that \[[\![\varrho,\varrho]\!]=[\![\sum_{k\geq 1}\varrho_{k},\sum_{l\geq 1}\varrho_{l}]\!]=\sum_{n\geq 1}\sum_{k+l=n+1}(\varrho_{k}\diamond\varrho_{l}-(-1)^{1}\varrho_{l}\diamond\varrho_{k})\] \[=2\sum_{n\geq 1}\sum_{k+l=n+1}\varrho_{k}\diamond\varrho_{l}.\] This shows that \(\varrho\) is a Maurer-Cartan element if and only if \(\sum_{k+l=n+1}\varrho_{k}\diamond\varrho_{l}=0\) for all \(n\geq 1\).
This is equivalent to the fact that \((\mathcal{H},\{\varrho_{k}\}_{k\geq 1})\) defines a \(Leib_{\infty}[1]\)-conformal algebra structure on \(\mathcal{H}\). The above theorem gives rise to a characterization of a \(Leib_{\infty}[1]\)-conformal algebra in terms of a Maurer-Cartan element in a graded Lie algebra. On the other hand, in Theorem 3.7, we have already shown that \(Leib_{\infty}\)-conformal algebras and \(Leib_{\infty}[1]\)-conformal algebras are equivalent up to a degree shift. Combining these results, we get the following description of a \(Leib_{\infty}\)-conformal algebra. **3.9 Theorem**.: _Let \(\mathcal{G}\) be a graded \(\mathbb{C}[\partial]\)-module. Then a \(Leib_{\infty}\)-conformal algebra structure on \(\mathcal{G}\) is equivalent to a Maurer-Cartan element in the graded Lie algebra \((C^{\bullet}_{cs}(\mathcal{G}[-1]),\llbracket\cdot,\cdot\rrbracket)\)._ In the following, we will focus on the cohomology of a \(Leib_{\infty}\)-conformal algebra using the above Maurer-Cartan characterization. Let \((\mathcal{G},\{\rho_{k}\}_{k\geq 1})\) be a \(Leib_{\infty}\)-conformal algebra. Consider the corresponding Maurer-Cartan element \(\varrho=\sum_{k\geq 1}\varrho_{k}\), where \(\varrho_{k}=(-1)^{\frac{k(k-1)}{2}}s\circ\rho_{k}\circ(s^{-1})^{\otimes k}\) for any \(k\geq 1\). For each \(n\in\mathbb{Z}\), we define \(C^{n}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{G}):=C^{-(n-1)}_{cs}(\mathcal{G}[-1])\). It follows that an element \(\varphi\in C^{n}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{G})\) is a sum \(\varphi=\sum_{k\geq 1}\varphi_{k}\), where \(\varphi_{k}\in\mathrm{Hom}_{cs}^{-(n-1)}\big{(}\mathcal{G}[-1]^{\otimes k},\mathcal{G}[-1][[\lambda_{1},\ldots,\lambda_{k-1}]]\big{)}\) for \(k\geq 1\). We also define a map \(\delta:C^{n}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{G})\to C^{n+1}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{G})\) by \[\delta(\varphi)=(-1)^{n-1}\llbracket\varrho,\varphi\rrbracket,\ \text{for}\ \varphi=\sum_{k\geq 1}\varphi_{k}\in C^{n}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{G})=C^{-(n-1)}_{cs}(\mathcal{G}[-1]). \tag{6}\] Then we have the following. **3.10 Proposition**.: _Let \((\mathcal{G},\{\rho_{k}\}_{k\geq 1})\) be a \(Leib_{\infty}\)-conformal algebra. Then \((C^{\bullet}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{G}),\delta)\) is a cochain complex._ The proof of the above proposition follows since \(\varrho=\sum_{k\geq 1}\varrho_{k}\) is a Maurer-Cartan element in the graded Lie algebra \((C^{\bullet}_{cs}(\mathcal{G}[-1]),\llbracket\cdot,\cdot\rrbracket)\). The cohomology groups of the cochain complex \((C^{\bullet}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{G}),\delta)\) are called the _cohomology_ of the given \(Leib_{\infty}\)-conformal algebra \(\mathcal{G}\). **3.11 Remark**.: The above definition of cohomology can be easily generalized in the presence of a representation. Let \(\mathcal{G}=(\oplus_{i\in\mathbb{Z}}\mathcal{G}_{i},\{\rho_{k}\}_{k\geq 1})\) be a \(Leib_{\infty}\)-conformal algebra.
A _representation_ of \(\mathcal{G}\) is a graded \(\mathbb{C}[\partial]\)-module \(\mathcal{M}=\oplus_{i\in\mathbb{Z}}\mathcal{M}_{i}\) equipped with a collection of \(\mathbb{C}\)-linear conformal sesquilinear maps \[\big{\{}\theta_{k}:\oplus_{i=1}^{k}(\mathcal{G}^{\otimes(i-1)}\otimes\mathcal{M}\otimes\mathcal{G}^{\otimes(k-i)})\to\mathcal{M}[[\lambda_{1},\ldots,\lambda_{k-1}]]\big{\}}_{k\geq 1}\] with \(\deg(\theta_{k})=k-2\) for \(k\geq 1\), such that the conformal Leibnizator identities (4) hold when exactly one of the inputs among \(x_{1},\ldots,x_{n}\) comes from \(\mathcal{M}\) and the corresponding \(\rho_{k}\) or \(\rho_{l}\) are replaced by \(\theta_{k}\) or \(\theta_{l}\). It turns out that any \(Leib_{\infty}\)-conformal algebra is a representation of itself, where \(\theta_{k}=\rho_{k}\) for all \(k\geq 1\). We call this the adjoint representation. Given a \(Leib_{\infty}\)-conformal algebra \((\mathcal{G},\{\rho_{k}\}_{k\geq 1})\) and a representation \(\mathcal{M}\), we now define a cochain complex \(\{C^{\bullet}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{M}),\delta\}\) as follows: for each \(n\in\mathbb{Z}\), \[C^{n}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{M})=\oplus_{k\geq 1}\mathrm{Hom}^{n}_{cs}\big{(}\mathcal{G}[-1]^{\otimes k},\mathcal{M}[-1][[\lambda_{1},\ldots,\lambda_{k-1}]]\big{)}\] and the coboundary map \(\delta:C^{n}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{M})\to C^{n+1}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{M})\) can be defined similarly to (6). The cohomology groups of the complex \(\{C^{\bullet}_{\mathrm{Leib}_{\infty}}(\mathcal{G},\mathcal{M}),\delta\}\) are called the _cohomology_ of \(\mathcal{G}\) with coefficients in the representation \(\mathcal{M}\). Note that the cohomology of \(\mathcal{G}\) defined earlier is nothing but the cohomology with coefficients in the adjoint representation. ## 4. Skeletal and strict homotopy Leibniz conformal algebras In this section, we consider some particular classes of \(Leib_{\infty}\)-conformal algebras. Specifically, we study 'skeletal' and 'strict' \(Leib_{\infty}\)-conformal algebras. Among others, we characterize skeletal algebras by third cocycles of Leibniz conformal algebras and strict algebras by crossed modules of Leibniz conformal algebras. Our main motivation for these results comes from the novel work of Baez and Crans [2]. They mainly considered skeletal and strict \(L_{\infty}\)-algebras and characterized them in terms of suitable objects related to Lie algebras. In other words, they relate some particular classes of \(L_{\infty}\)-algebras to invariants associated with Lie algebras. We mainly generalize their results in the context of Leibniz conformal algebras. We start with the following. **4.1 Definition**.: A \(2\)**-term \(Leib_{\infty}\)-conformal algebra** is a triple \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) consisting of a complex \(\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0}\) of \(\mathbb{C}[\partial]\)-modules equipped with - a \(\mathbb{C}\)-linear conformal sesquilinear map \(\rho_{2}:\mathcal{G}_{i}\otimes\mathcal{G}_{j}\to\mathcal{G}_{i+j}[[\lambda]]\), for \(0\leq i,j,i+j\leq 1\), - a \(\mathbb{C}\)-linear conformal sesquilinear map \(\rho_{3}:\mathcal{G}_{0}\otimes\mathcal{G}_{0}\otimes\mathcal{G}_{0}\to\mathcal{G}_{1}[[\lambda,\mu]]\) that satisfy the following set of identities: for all \(x,y,z,w\in\mathcal{G}_{0}\) and \(u,v\in\mathcal{G}_{1}\), 1. \((\rho_{2})_{\lambda}(u,v)=0\), 2.
\(d((\rho_{2})_{\lambda}(x,u))=(\rho_{2})_{\lambda}(x,du)\), 3. \(d((\rho_{2})_{\lambda}(u,x))=(\rho_{2})_{\lambda}(du,x)\), 4. \((\rho_{2})_{\lambda}(du,v)=(\rho_{2})_{\lambda}(u,dv)\), 5. \(d\big{(}(\rho_{3})_{\lambda,\mu}(x,y,z)\big{)}=(\rho_{2})_{\lambda}(x,(\rho_{ 2})_{\mu}(y,z))-(\rho_{2})_{\lambda+\mu}((\rho_{2})_{\lambda}(x,y),z)-(\rho_{ 2})_{\mu}(y,(\rho_{2})_{\lambda}(x,z))\), 6. \((\rho_{3})_{\lambda,\mu}(x,y,dv)=(\rho_{2})_{\lambda}(x,(\rho_{2})_{\mu}(y,v) )-(\rho_{2})_{\lambda+\mu}((\rho_{2})_{\lambda}(x,y),v)-(\rho_{2})_{\mu}(y,( \rho_{2})_{\lambda}(x,v))\), 7. \((\rho_{3})_{\lambda,\mu}(x,dv,y)=(\rho_{2})_{\lambda}(x,(\rho_{2})_{\mu}(v,y) )-(\rho_{2})_{\lambda+\mu}((\rho_{2})_{\lambda}(x,v),y)-(\rho_{2})_{\mu}(v,( \rho_{2})_{\lambda}(x,y))\), 8. \((\rho_{3})_{\lambda,\mu}(dv,x,y)=(\rho_{2})_{\lambda}(v,(\rho_{2})_{\mu}(x,y) )-(\rho_{2})_{\lambda+\mu}((\rho_{2})_{\lambda}(v,x),y)-(\rho_{2})_{\mu}(x,( \rho_{2})_{\lambda}(v,y))\), 9. the conformal Leibnizator identity: \[(\rho_{2})_{\lambda}\big{(}x,(\rho_{3})_{\mu,\nu}(y,z,w)\big{)}-( \rho_{2})_{\mu}\big{(}y,(\rho_{3})_{\lambda,\nu}(x,z,w)\big{)}+(\rho_{2})_{ \nu}\big{(}z,(\rho_{3})_{\lambda,\mu}(x,y,w)\big{)}\] \[+(\rho_{2})_{\lambda+\mu+\nu}\big{(}(\rho_{3})_{\lambda,\mu}(x,y, z),w\big{)}-(\rho_{3})_{\lambda+\mu,\nu}\big{(}(\rho_{2})_{\lambda}(x,y),z,w \big{)}-(\rho_{3})_{\mu,\lambda+\nu}\big{(}y,(\rho_{2})_{\lambda}(x,z),w\big{)}\] \[\qquad\qquad-(\rho_{3})_{\mu,\nu}\big{(}y,z,(\rho_{2})_{\lambda} (x,w)\big{)}+(\rho_{3})_{\lambda,\mu+\nu}\big{(}x,(\rho_{2})_{\mu}(y,z),w \big{)}+(\rho_{3})_{\lambda,\nu}\big{(}x,z,(\rho_{2})_{\mu}(y,w)\big{)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-(\rho_{3})_{ \lambda,\mu}\big{(}x,y,(\rho_{2})_{\nu}(z,w)\big{)}=0.\] It follows from the above definition that a \(2\)-term \(Leib_{\infty}\)-conformal algebra \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) is nothing but a \(Leib_{\infty}\)-conformal algebra \((\mathcal{G}=\mathcal{G}_{0}\oplus\mathcal{G}_{1},\{\rho_{k}\}_{k\geq 1})\) with \(\rho_{1}=d\) and \(\rho_{k}=0\) for \(k\geq 4\). In other words, a \(2\)-term \(Leib_{\infty}\)-conformal algebra is a \(Leib_{\infty}\)-conformal algebra whose underlying graded \(\mathbb{C}[\partial]\)-module is concentrated in degree \(0\) and degree \(1\). Let \(\mathcal{G}=(\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) and \(\mathcal{G}^{\prime}=(\mathcal{G}^{\prime}_{1}\xrightarrow{d^{\prime}}\mathcal{ G}^{\prime}_{0},\rho^{\prime}_{2},\rho^{\prime}_{3})\) be two \(2\)-term \(Leib_{\infty}\)-conformal algebras. 
A _homomorphism_ from \(\mathcal{G}\) to \(\mathcal{G}^{\prime}\) is given by a triple \(f=(f_{0},f_{1},f_{2})\) in which \(f_{0}:\mathcal{G}_{0}\to\mathcal{G}^{\prime}_{0}\) and \(f_{1}:\mathcal{G}_{1}\to\mathcal{G}^{\prime}_{1}\) are \(\mathbb{C}[\partial]\)-linear maps and \(f_{2}:\mathcal{G}_{0}\otimes\mathcal{G}_{0}\to\mathcal{G}^{\prime}_{1}[[\lambda]]\) is a \(\mathbb{C}\)-linear conformal sesquilinear map satisfying the following conditions: for any \(x,y,z\in\mathcal{G}_{0}\) and \(v\in\mathcal{G}_{1}\), * \(d^{\prime}\circ f_{1}=f_{0}\circ d\), * \((\rho^{\prime}_{2})_{\lambda}\big{(}f_{0}(x),f_{0}(y)\big{)}-f_{0}\big{(}(\rho_{2})_{\lambda}(x,y)\big{)}=d^{\prime}\big{(}(f_{2})_{\lambda}(x,y)\big{)}\), * \((\rho^{\prime}_{2})_{\lambda}\big{(}f_{0}(x),f_{1}(v)\big{)}-f_{1}\big{(}(\rho_{2})_{\lambda}(x,v)\big{)}=(f_{2})_{\lambda}(x,dv)\), * \((\rho^{\prime}_{2})_{\lambda}\big{(}f_{1}(v),f_{0}(x)\big{)}-f_{1}\big{(}(\rho_{2})_{\lambda}(v,x)\big{)}=(f_{2})_{\lambda}(dv,x)\), * \((\rho^{\prime}_{3})_{\lambda,\mu}\big{(}f_{0}(x),f_{0}(y),f_{0}(z)\big{)}-f_{1}\big{(}(\rho_{3})_{\lambda,\mu}(x,y,z)\big{)}=(\rho^{\prime}_{2})_{\lambda}\big{(}f_{0}(x),(f_{2})_{\mu}(y,z)\big{)}-(\rho^{\prime}_{2})_{\lambda+\mu}\big{(}(f_{2})_{\lambda}(x,y),f_{0}(z)\big{)}\) \[-(\rho^{\prime}_{2})_{\mu}\big{(}f_{0}(y),(f_{2})_{\lambda}(x,z)\big{)}+(f_{2})_{\lambda}\big{(}x,(\rho_{2})_{\mu}(y,z)\big{)}-(f_{2})_{\lambda+\mu}\big{(}(\rho_{2})_{\lambda}(x,y),z\big{)}-(f_{2})_{\mu}\big{(}y,(\rho_{2})_{\lambda}(x,z)\big{)}.\] If \(\mathcal{G}=(\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) is a \(2\)-term \(Leib_{\infty}\)-conformal algebra, then the identity homomorphism from \(\mathcal{G}\) to \(\mathcal{G}\) is given by the triple \(\mathrm{id}_{\mathcal{G}}=\big{(}\mathrm{id}_{\mathcal{G}_{0}},\mathrm{id}_{\mathcal{G}_{1}},0\big{)}\). Next, if \(f=(f_{0},f_{1},f_{2})\) is a homomorphism from \(\mathcal{G}\) to \(\mathcal{G}^{\prime}\) and \(g=(g_{0},g_{1},g_{2})\) is a homomorphism from \(\mathcal{G}^{\prime}\) to \(\mathcal{G}^{\prime\prime}\), then their composition is given by the triple \(g\circ f=\big{(}g_{0}\circ f_{0},\,g_{1}\circ f_{1},\,g_{1}\circ f_{2}+g_{2}\circ(f_{0}\otimes f_{0})\big{)}\). **4.2 Theorem**.: _The collection of all \(2\)-term \(Leib_{\infty}\)-conformal algebras and homomorphisms between them forms a category, denoted by \(\mathbf{2Leib_{\infty}Conf}\)._ **Skeletal algebras.** Here we will focus on skeletal \(Leib_{\infty}\)-conformal algebras and their characterization in terms of third cocycles in Leibniz conformal algebras. Skeletal \(Leib_{\infty}\)-conformal algebras are a particular case of \(2\)-term \(Leib_{\infty}\)-conformal algebras. **4.3 Definition**.: A _skeletal \(Leib_{\infty}\)-conformal algebra_ is a \(2\)-term \(Leib_{\infty}\)-conformal algebra \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) in which \(d=0\). Let \((\mathcal{G}_{1}\xrightarrow{0}\mathcal{G}_{0},\rho_{2},\rho_{3})\) be a skeletal \(Leib_{\infty}\)-conformal algebra. Then it follows from condition (v) of Definition 4.1 that the \(\mathbb{C}[\partial]\)-module \(\mathcal{G}_{0}\) with the \(\lambda\)-bracket \([\cdot_{\lambda}\cdot]:\mathcal{G}_{0}\otimes\mathcal{G}_{0}\to\mathcal{G}_{0}[[\lambda]]\) defined by \([x_{\lambda}y]:=(\rho_{2})_{\lambda}(x,y)\), for \(x,y\in\mathcal{G}_{0}\), is a Leibniz conformal algebra.
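Concretely, since \(d=0\) in the skeletal case, condition (v) of Definition 4.1 reduces to \[0=(\rho_{2})_{\lambda}\big{(}x,(\rho_{2})_{\mu}(y,z)\big{)}-(\rho_{2})_{\lambda+\mu}\big{(}(\rho_{2})_{\lambda}(x,y),z\big{)}-(\rho_{2})_{\mu}\big{(}y,(\rho_{2})_{\lambda}(x,z)\big{)},\] which is precisely the conformal Leibniz identity \([x_{\lambda}[y_{\mu}z]]=[[x_{\lambda}y]_{\lambda+\mu}z]+[y_{\mu}[x_{\lambda}z]]\) for the bracket \([x_{\lambda}y]:=(\rho_{2})_{\lambda}(x,y)\).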
Similarly, the conditions (vi), (vii) and (viii) of Definition 4.1 imply that the \(\mathbb{C}[\partial]\)-module \(\mathcal{G}_{1}\) can be given the structure of a representation of the Leibniz conformal algebra \((\mathcal{G}_{0},[\cdot_{\lambda}\cdot])\) with the left and right \(\lambda\)-actions given by \[x_{\lambda}v:=(\rho_{2})_{\lambda}(x,v)\ \ \text{and}\ \ v_{\lambda}x:=(\rho_{2})_{\lambda}(v,x),\ \text{for}\ x\in\mathcal{G}_{0},v\in\mathcal{G}_{1}.\] Finally, the condition (ix) of Definition 4.1 simply means that the map \(\rho_{3}:\mathcal{G}_{0}\otimes\mathcal{G}_{0}\otimes\mathcal{G}_{0}\to\mathcal{G}_{1}[[\lambda,\mu]]\) is a \(3\)-cocycle of the Leibniz conformal algebra \((\mathcal{G}_{0},[\cdot_{\lambda}\cdot])\) with coefficients in the representation \(\mathcal{G}_{1}\). Next, let \((\mathcal{G}_{1}\xrightarrow{0}\mathcal{G}_{0},\rho_{2},\rho_{3})\) and \((\mathcal{G}_{1}\xrightarrow{0}\mathcal{G}_{0},\rho_{2}^{\prime},\rho_{3}^{\prime})\) be two skeletal \(Leib_{\infty}\)-conformal algebras on the same underlying chain complex. We call them _equivalent_ if \(\rho_{2}=\rho_{2}^{\prime}\) and the corresponding \(3\)-cocycles \(\rho_{3}\) and \(\rho_{3}^{\prime}\) differ by a coboundary, i.e. there exists a conformal sesquilinear map \(\tau:\mathcal{G}_{0}\otimes\mathcal{G}_{0}\to\mathcal{G}_{1}[[\lambda]]\) such that \(\rho_{3}^{\prime}=\rho_{3}+\delta\tau\). Here \(\delta\) is the coboundary map of the Leibniz conformal algebra \(\mathcal{G}_{0}\) with coefficients in the representation \(\mathcal{G}_{1}\). Then we have the following result. **4.4 Theorem**.: _There is a bijective correspondence between the collection of all skeletal \(Leib_{\infty}\)-conformal algebras and the collection of all tuples of the form \((\mathfrak{g},M,\theta)\), where \(\mathfrak{g}\) is a Leibniz conformal algebra, \(M\) is a representation and \(\theta\in Z^{3}(\mathfrak{g},M)\) is a \(3\)-cocycle. Moreover, the above correspondence further extends to a bijection between the equivalence classes of all skeletal \(Leib_{\infty}\)-conformal algebras and tuples of the form \((\mathfrak{g},M,[\theta])\), where \([\theta]\in H^{3}(\mathfrak{g},M)\) is a third cohomology class._ Proof.: Let \((\mathcal{G}_{1}\xrightarrow{0}\mathcal{G}_{0},\rho_{2},\rho_{3})\) be a skeletal \(Leib_{\infty}\)-conformal algebra. Then we have seen earlier that \(\mathcal{G}_{0}\) is a Leibniz conformal algebra, \(\mathcal{G}_{1}\) is a representation and \(\rho_{3}\in Z^{3}(\mathcal{G}_{0},\mathcal{G}_{1})\) is a \(3\)-cocycle. Hence, we obtain the required triple \((\mathcal{G}_{0},\mathcal{G}_{1},\rho_{3})\). Conversely, let \((\mathfrak{g},M,\theta)\) be a given triple. Then it is easy to verify that \((M\xrightarrow{0}\mathfrak{g},\rho_{2},\theta)\) is a skeletal \(Leib_{\infty}\)-conformal algebra, where \[(\rho_{2})_{\lambda}(x,y):=[x_{\lambda}y],\ (\rho_{2})_{\lambda}(x,v):=x_{\lambda}v\ \ \text{and}\ \ (\rho_{2})_{\lambda}(v,x):=v_{\lambda}x,\ \text{for}\ x,y\in\mathfrak{g},v\in M.\] Next, let \((\mathcal{G}_{1}\xrightarrow{0}\mathcal{G}_{0},\rho_{2},\rho_{3})\) and \((\mathcal{G}_{1}\xrightarrow{0}\mathcal{G}_{0},\rho_{2},\rho_{3}^{\prime})\) be two equivalent skeletal \(Leib_{\infty}\)-conformal algebras. Then we have \(\rho_{3}^{\prime}=\rho_{3}+\delta\tau\), for some \(\mathbb{C}\)-linear conformal sesquilinear map \(\tau:\mathcal{G}_{0}\otimes\mathcal{G}_{0}\to\mathcal{G}_{1}[[\lambda]]\). Then it follows that \([\rho_{3}]=[\rho_{3}^{\prime}]\) as an element of \(H^{3}(\mathcal{G}_{0},\mathcal{G}_{1})\).
Hence we obtain a triple \((\mathcal{G}_{0},\mathcal{G}_{1},[\rho_{3}]=[\rho_{3}^{\prime}])\). Conversely, any triple \((\mathfrak{g},M,[\theta])\) corresponds to the equivalence class of the skeletal \(Leib_{\infty}\)-conformal algebra \((M\xrightarrow{0}\mathfrak{g},\rho_{2},\theta)\). This completes the proof. The following example is a consequence of the above result. **4.5 Example**.: Let \((\mathfrak{g},[\cdot_{\lambda}\cdot])\) be a Leibniz conformal algebra and \(M\) be a representation of it. For any \(\mathbb{C}\)-linear conformal sesquilinear map \(\tau:\mathfrak{g}\otimes\mathfrak{g}\to M[[\lambda]]\), the triple \((M\xrightarrow{0}\mathfrak{g},\rho_{2},\delta\tau)\) is a skeletal \(Leib_{\infty}\)-conformal algebra. **Strict algebras.** Here we will consider another class of \(2\)-term \(Leib_{\infty}\)-conformal algebras, called strict \(Leib_{\infty}\)-conformal algebras. We also introduce crossed modules of Leibniz conformal algebras and find their relationship with strict \(Leib_{\infty}\)-conformal algebras. **4.6 Definition**.: A _strict \(Leib_{\infty}\)-conformal algebra_ is a \(2\)-term \(Leib_{\infty}\)-conformal algebra \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) in which \(\rho_{3}=0\). Thus, in a strict \(Leib_{\infty}\)-conformal algebra \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3}=0)\), we have from condition (v) of Definition 4.1 that \[(\rho_{2})_{\lambda}(x,(\rho_{2})_{\mu}(y,z)) =(\rho_{2})_{\lambda+\mu}((\rho_{2})_{\lambda}(x,y),z)+(\rho_{2}) _{\mu}(y,(\rho_{2})_{\lambda}(x,z)),\] \[(\rho_{2})_{\mu}(y,(\rho_{2})_{\lambda}(x,z)) =(\rho_{2})_{\lambda+\mu}((\rho_{2})_{\mu}(y,x),z)+(\rho_{2})_{ \lambda}(x,(\rho_{2})_{\mu}(y,z)),\] for all \(x,y,z\in\mathcal{G}_{0}\). By adding both sides of the above identities, we get that \[(\rho_{2})_{\lambda+\mu}((\rho_{2})_{\lambda}(x,y),z)+(\rho_{2})_{\lambda+\mu }((\rho_{2})_{\mu}(y,x),z)=0,\text{ for all }x,y,z\in\mathcal{G}_{0}.\] Similarly, one can show that \[(\rho_{2})_{\lambda+\mu}((\rho_{2})_{\lambda}(x,v),y)+(\rho_{2})_{\lambda+\mu }((\rho_{2})_{\mu}(v,x),y)=0,\text{ for all }x,y\in\mathcal{G}_{0}\text{ and }v\in \mathcal{G}_{1}.\] Let \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},0)\) be a strict \(Leib_{\infty}\)-conformal algebra. Then it follows from condition (v) of Definition 4.1 that the \(\mathbb{C}[\partial]\)-module \(\mathcal{G}_{0}\) with the \(\lambda\)-bracket \([\cdot_{\lambda}]^{0}:\mathcal{G}_{0}\otimes\mathcal{G}_{0}\to\mathcal{G}_{0 }[[\lambda]]\), \([x_{\lambda}y]^{0}:=(\rho_{2})_{\lambda}(x,y)\), for \(x,y\in\mathcal{G}_{0}\), is a Leibniz conformal algebra. We also define a conformal sesquilinear map \([\cdot_{\lambda}]^{1}:\mathcal{G}_{1}\otimes\mathcal{G}_{1}\to\mathcal{G}[[ \lambda]]\) by \([u_{\lambda}v]^{1}:=(\rho_{2})_{\lambda}(du,v)=(\rho_{2})_{\lambda}(u,dv)\), for \(u,v\in\mathcal{G}_{1}\). Then for any \(u,v,w\in\mathcal{G}_{1}\), we have \[[u_{\lambda}[v_{\mu}w]^{1}]^{1} =[u_{\lambda}(\rho_{2})_{\mu}(dv,w)]^{1}\] \[=(\rho_{2})_{\lambda}\big{(}du,(\rho_{2})_{\mu}(dv,w)\big{)}\] \[=(\rho_{2})_{\lambda+\mu}\big{(}(\rho_{2})_{\lambda}(du,dv),w\big{)} +(\rho_{2})_{\mu}\big{(}dv,(\rho_{2})_{\lambda}(du,w)\big{)}\] \[=(\rho_{2})_{\lambda+\mu}(d[u_{\lambda}v]^{1},w)+(\rho_{2})_{\mu}( dv,[u_{\lambda}w]^{1})\] \[=[[u_{\lambda}v]^{1}_{\lambda+\mu}w]^{1}+[v_{\mu}[u_{\lambda}w]^{ 1}]^{1}.\] This shows that \(\mathcal{G}_{1}\) is also a Leibniz conformal algebra with the above \(\lambda\)-bracket. 
Moreover, the \(\mathbb{C}[\partial]\)-linear map \(d:\mathcal{G}_{1}\to\mathcal{G}_{0}\) is a morphism of Leibniz conformal algebras as \[d[u_{\lambda}v]^{1}=d\big{(}(\rho_{2})_{\lambda}(du,v)\big{)}=(\rho_{2})_{\lambda}(du,dv)=[(du)_{\lambda}(dv)]^{0},\text{ for all }u,v\in\mathcal{G}_{1}.\] Furthermore, the conformal sesquilinear maps \(\rho_{2}:\mathcal{G}_{0}\otimes\mathcal{G}_{1}\to\mathcal{G}_{1}[[\lambda]]\) and \(\rho_{2}:\mathcal{G}_{1}\otimes\mathcal{G}_{0}\to\mathcal{G}_{1}[[\lambda]]\) make the \(\mathbb{C}[\partial]\)-module \(\mathcal{G}_{1}\) into a representation of the Leibniz conformal algebra \(\mathcal{G}_{0}\) (which follows from (vi), (vii) and (viii) of Definition 4.1). They also satisfy some additional conditions. Motivated by this, we now introduce the following notion. **4.7 Definition**.: A _crossed module of Leibniz conformal algebras_ is a quintuple \((\mathfrak{g},\mathfrak{h},d,\Phi^{l},\Phi^{r})\) in which \(\mathfrak{g},\mathfrak{h}\) are both Leibniz conformal algebras, \(d:\mathfrak{g}\to\mathfrak{h}\) is a morphism of Leibniz conformal algebras and there are conformal sesquilinear maps \[\Phi^{l}:\mathfrak{h}\otimes\mathfrak{g}\to\mathfrak{g}[[\lambda]],\ v\otimes x\mapsto\Phi^{l}_{\lambda}(v,x)\text{ \ and \ }\Phi^{r}:\mathfrak{g}\otimes\mathfrak{h}\to\mathfrak{g}[[\lambda]],\ x\otimes v\mapsto\Phi^{r}_{\lambda}(x,v)\] that make \(\mathfrak{g}\) into a representation of the Leibniz conformal algebra \(\mathfrak{h}\), satisfying additionally, for \(x,y\in\mathfrak{g}\), \(h\in\mathfrak{h}\), \[d\big{(}\Phi^{l}_{\lambda}(h,x)\big{)}=[h_{\lambda}(dx)]^{\mathfrak{h}},\] \[d\big{(}\Phi^{r}_{\lambda}(x,h)\big{)}=[(dx)_{\lambda}h]^{\mathfrak{h}},\] \[\Phi^{l}_{\lambda}(dx,y)=[x_{\lambda}y]^{\mathfrak{g}}=\Phi^{r}_{\lambda}(x,dy),\] \[[x_{\lambda}\Phi^{r}_{\mu}(y,h)]^{\mathfrak{g}}=\Phi^{r}_{\lambda+\mu}([x_{\lambda}y]^{\mathfrak{g}},h)+[y_{\mu}\Phi^{r}_{\lambda}(x,h)]^{\mathfrak{g}},\] \[[x_{\lambda}\Phi^{l}_{\mu}(h,y)]^{\mathfrak{g}}=[\Phi^{r}_{\lambda}(x,h)_{\lambda+\mu}y]^{\mathfrak{g}}+\Phi^{l}_{\mu}(h,[x_{\lambda}y]^{\mathfrak{g}}),\] \[\Phi^{l}_{\lambda}(h,[x_{\mu}y]^{\mathfrak{g}})=[\Phi^{l}_{\lambda}(h,x)_{\lambda+\mu}y]^{\mathfrak{g}}+[x_{\mu}\Phi^{l}_{\lambda}(h,y)]^{\mathfrak{g}}.\] Here \([\cdot_{\lambda}\cdot]^{\mathfrak{g}}\) (resp. \([\cdot_{\lambda}\cdot]^{\mathfrak{h}}\)) denotes the \(\lambda\)-bracket of the Leibniz conformal algebra \(\mathfrak{g}\) (resp. \(\mathfrak{h}\)). With the above definition, we have the following result. **4.8 Theorem**.: _There is a bijective correspondence between strict \(Leib_{\infty}\)-conformal algebras and crossed modules of Leibniz conformal algebras._ Proof.: Let \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3}=0)\) be a strict \(Leib_{\infty}\)-conformal algebra. Then our previous discussions show that \((\mathcal{G}_{1},\mathcal{G}_{0},d,\rho_{2},\rho_{2})\) is a crossed module of Leibniz conformal algebras. Conversely, let \((\mathfrak{g},\mathfrak{h},d,\Phi^{l},\Phi^{r})\) be a crossed module of Leibniz conformal algebras. Then it is easy to observe that \((\mathfrak{g}\xrightarrow{d}\mathfrak{h},\rho_{2},\rho_{3}=0)\) is a strict \(Leib_{\infty}\)-conformal algebra, where \[(\rho_{2})_{\lambda}(u,v)=[u_{\lambda}v]^{\mathfrak{h}},\ (\rho_{2})_{\lambda}(u,x)=\Phi^{l}_{\lambda}(u,x)\ \ \text{and}\ \ (\rho_{2})_{\lambda}(x,u)=\Phi^{r}_{\lambda}(x,u),\ \text{for}\ u,v\in\mathfrak{h},x\in\mathfrak{g}.\] This completes the proof. ## 5. Leibniz conformal \(2\)-algebras In this section, we categorify Leibniz conformal algebras and hence introduce the notion of Leibniz conformal \(2\)-algebras.
We show that the collection of all Leibniz conformal \(2\)-algebras and homomorphisms between them forms a category, denoted by \(\mathbf{LeibConf2}\). We show that the category \(\mathbf{LeibConf2}\) is equivalent to the category \(\mathbf{2Leib}_{\infty}\mathbf{Conf}\) of \(2\)-term \(Leib_{\infty}\)-conformal algebras. Recall that the concept of Leibniz \(2\)-algebras (that is, categorified Leibniz algebras) was first introduced by Sheng and Liu [26] as the non-skew-symmetric version of Lie \(2\)-algebras. Here we generalize their notion in the context of conformal algebras. Let \(\mathbf{Vect}_{\mathbb{C}[\partial]}\) be the category of \(\mathbb{C}[\partial]\)-modules and homomorphisms between them. Thus, objects in \(\mathbf{Vect}_{\mathbb{C}[\partial]}\) are \(\mathbb{C}[\partial]\)-modules and homomorphisms from a \(\mathbb{C}[\partial]\)-module \(M\) to another \(\mathbb{C}[\partial]\)-module \(N\) are precisely \(\mathbb{C}[\partial]\)-linear maps from \(M\) to \(N\). **5.1 Definition**.: A \(2\)**-vector space in the category of \(\mathbb{C}[\partial]\)-modules** is a category internal to the category \(\mathbf{Vect}_{\mathbb{C}[\partial]}\). Thus, a \(2\)-vector space in the category of \(\mathbb{C}[\partial]\)-modules is a category \(C=(C_{1}\rightrightarrows C_{0})\) in which both the collection of objects \(C_{0}\) and the collection of morphisms \(C_{1}\) are \(\mathbb{C}[\partial]\)-modules such that the source and target maps \(s,t:C_{1}\to C_{0}\), the object-inclusion map \(i:C_{0}\to C_{1}\) and the composition map \(m:C_{1}\times_{C_{0}}C_{1}\to C_{1}\) are all \(\mathbb{C}[\partial]\)-linear maps. In a \(2\)-vector space \(C=(C_{1}\rightrightarrows C_{0})\) in the category of \(\mathbb{C}[\partial]\)-modules, we write the images of the object-inclusion map as \(i(x)=1_{x}\), for \(x\in C_{0}\). Let \(C=(C_{1}\rightrightarrows C_{0})\) and \(C^{\prime}=(C^{\prime}_{1}\rightrightarrows C^{\prime}_{0})\) be two \(2\)-vector spaces in the category of \(\mathbb{C}[\partial]\)-modules. A _homomorphism_ from \(C\) to \(C^{\prime}\) is given by a pair \(F=(F_{0},F_{1})\) that consist of two \(\mathbb{C}[\partial]\)-linear maps \(F_{0}:C_{0}\to C^{\prime}_{0}\) and \(F_{1}:C_{1}\to C^{\prime}_{1}\) which commute with all the structure maps, i.e. the following diagram commutes A homomorphism is also called a \(\mathbb{C}[\partial]\)-linear functor. The collection of all \(2\)-vector spaces in the category of \(\mathbb{C}[\partial]\)-modules and homomorphisms between them forms a category. Note that \(2\)-vector spaces in the category of \(\mathbb{C}[\partial]\)-modules are closely related to \(2\)-term complexes of \(\mathbb{C}[\partial]\)-modules. Let \(C=(C_{1}\rightrightarrows C_{0})\) be a \(2\)-vector space in the category of \(\mathbb{C}[\partial]\)-modules. Then \(\ker(s)\xrightarrow{t}C_{0}\) is a \(2\)-term complex of \(\mathbb{C}[\partial]\)-modules. On the other hand, if \(\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0}\) is a complex of \(\mathbb{C}[\partial]\)-modules then \[C=(\mathcal{G}_{0}\oplus\mathcal{G}_{1}\rightrightarrows\mathcal{G}_{0}) \tag{7}\] is a \(2\)-vector space in the category of \(\mathbb{C}[\partial]\)-modules, where the source, target and object inclusion are respectively given by \[s(x,h)=x,\quad t(x,h)=x+dh\quad\text{and}\quad i(x)=(x,0),\] for \((x,h)\in\mathcal{G}_{0}\oplus\mathcal{G}_{1}\) and \(x\in\mathcal{G}_{0}\). 
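For concreteness, one natural (and standard) choice for the composition map \(m\) in (7), compatible with the source and target maps above, is \[m\big{(}(x+dh,h^{\prime}),(x,h)\big{)}:=(x,h+h^{\prime}),\] so that a morphism \((x,h):x\to x+dh\) followed by \((x+dh,h^{\prime}):x+dh\to x+d(h+h^{\prime})\) composes to \((x,h+h^{\prime})\); this choice of \(m\) is a \(\mathbb{C}[\partial]\)-linear sketch of the usual construction rather than data spelled out in (7) itself.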
A morphism between 2-term complexes of \(\mathbb{C}[\partial]\)-modules induces a morphism between the corresponding 2-vector spaces in the category of \(\mathbb{C}[\partial]\)-modules. ### Definition A _Leibniz conformal \(2\)-algebra_ is a triple \((C,[\cdot\cdot\cdot],\mathcal{L})\) consisting of a 2-vector space \(C=(C_{1}\rightrightarrows C_{0})\) in the category of \(\mathbb{C}[\partial]\)-modules, a \(\mathbb{C}\)-linear conformal sesquilinear functor \([\cdot_{\lambda}]:C\otimes C\to C[[\lambda]]\) and a conformal sesquilinear natural isomorphism (called the conformal Leibnizator) \[\mathcal{L}_{x,y,z}:[x_{\lambda}[y_{\mu}z]]\rightarrow[[x_{\lambda}y]_{ \lambda+\mu}z]+[y_{\mu}[x_{\lambda}z]]\] such that for any \(x,y,z,w\in C_{0}\), the following diagram is commutative (8) In a Leibniz conformal 2-algebra, if the conformal sesquilinear functor \([\cdot_{\lambda}]\) and the conformal Leibnizator are both skew-symmetric, one recovers the notion of a Lie conformal 2-algebra [30]. Therefore, Leibniz conformal 2-algebras can be realized as the non-skew-symmetric analogue of Lie conformal 2-algebras. Let \((C,[\cdot_{\lambda}],\mathcal{L})\) and \((C^{\prime},[\cdot_{\lambda}]^{\prime},\mathcal{L}^{\prime})\) be two Leibniz conformal 2-algebras. A _homomorphism_ of Leibniz conformal 2-algebras is a conformal functor \(F=(F_{0},F_{1}):C\to C^{\prime}\) between the underlying 2-vector spaces (in the category of \(\mathbb{C}[\partial]\)-modules) and a natural isomorphism \[(F_{2})^{x,y}:[F_{0}(x)_{\lambda}F_{0}(y)]^{\prime}\to F_{0}[x_{\lambda}y]\] such that for any \(x,y,z\in C_{0}\), the following diagram is commutative \[[F_{0}(x)_{\lambda}[F_{0}(y)_{\mu}F_{0}(z)]^{\prime}]^{\prime}\xrightarrow{ \mathcal{L}^{\prime}_{F_{0}(\cup),F_{0}(\cup)}}[[F_{0}(x)_{\lambda}F_{0}(y)]^{ \prime}_{\lambda+\mu}F_{0}(z)]^{\prime}+[F_{0}(y)_{\mu}[F_{0}(x)_{\lambda}F_{0} (z)]^{\prime}]^{\prime}\] We often denote a homomorphism as above by the pair \((F=(F_{0},F_{1}),F_{2})\). Let \((C,[\cdot_{\lambda}],\mathcal{L}),\ (C^{\prime},[\cdot_{\lambda}]^{\prime}, \mathcal{L}^{\prime})\) and \((C^{\prime\prime},[\cdot_{\lambda}]^{\prime\prime},\mathcal{L}^{\prime\prime})\) be three Leibniz conformal 2-algebras. Suppose \((F=(F_{0},F_{1}),F_{2}):(C,[\cdot_{\lambda}],\mathcal{L})\rightarrow(C^{\prime },[\cdot_{\lambda}]^{\prime},\mathcal{L}^{\prime})\) and \((G=(G_{0},G_{1}),G_{2}):(C^{\prime},[\cdot_{\lambda}]^{\prime},\mathcal{L}^{ \prime})\rightarrow(C^{\prime\prime},[\cdot_{\lambda}]^{\prime\prime}, \mathcal{L}^{\prime\prime})\) are homomorphisms between Leibniz conformal 2-algebras. Then their composition is given by \(\big{(}G\circ F=(G_{0}\circ F_{0},G_{1}\circ F_{1}),(G\circ F)_{2}\big{)}\), where \((G\circ F)_{2}\) is the ordinary composition \[[(G_{0}\circ F_{0})(x)_{\lambda}(G_{0}\circ F_{0})(y)]^{\prime\prime} \xrightarrow{(G_{2})^{F_{0}(x),F_{0}(y)}}G_{0}[F_{0}(x)_{\lambda}F_{0}(y)]^{ \prime}\xrightarrow{G_{0}\big{(}(F_{2})^{x,y}\big{)}}(G_{0}\circ F_{0})[x_{ \lambda}y],\ \text{for}\ x,y\in C_{0}.\] Further, for any Leibniz conformal 2-algebra \((C,[\cdot_{\lambda}\cdot],\mathcal{L})\), the identity homomorphism is given by the identity functor \(\mathrm{id}_{C}=(\mathrm{id}_{C_{0}},\mathrm{id}_{C_{1}}):C\to C\) with the identity natural isomorphism \((\mathrm{id}_{C})^{x,y}:[x_{\lambda}y]\rightarrow[x_{\lambda}y]\), for any \(x,y\in C_{0}\). 
With all the above definitions and notations, the collections of all Leibniz conformal \(2\)-algebras and homomorphisms between them form a category, denoted by \(\mathbf{LeibConf2}\). In the following result, we give an equivalent description of the category \(\mathbf{LeibConf2}\). More precisely, we have the following. **5.3 Theorem**.: _The category \(\mathbf{LeibConf2}\) is equivalent to the category \(\mathbf{2Leib}_{\infty}\mathbf{Conf}\)._ Proof.: Let \((C=(C_{1}\rightrightarrows C_{0}),[\cdot_{\lambda}],\mathcal{L})\) be a Leibniz conformal \(2\)-algebra. Consider the \(2\)-term complex \(\mathcal{G}_{1}=\ker(s)\xrightarrow{t}C_{0}=\mathcal{G}_{0}\) of \(\mathbb{C}[\partial]\)-modules. We define conformal sesquilinear maps \(\rho_{2}:\mathcal{G}_{i}\otimes\mathcal{G}_{j}\to\mathcal{G}_{i+j}[[ \lambda]]\) and \(\rho_{3}:\mathcal{G}_{0}\otimes\mathcal{G}_{0}\otimes\mathcal{G}_{0}\to \mathcal{G}_{1}[[\lambda,\mu]]\) by \[(\rho_{2})_{\lambda}(x,y)=[x_{\lambda}y],\quad(\rho_{2})(x,h)=[ (1_{x})_{\lambda}h],\quad(\rho_{2})_{\lambda}(h,x)=[h_{\lambda}(1_{x})],\] \[(\rho_{3})_{\lambda,\mu}(x,y,z)=pr_{2}\big{(}\mathcal{L}_{x,y,z} :[x_{\lambda}[y_{\mu}z]]\to[[x_{\lambda}y]_{\lambda+\mu}z]+[y_{\mu}[x_{ \lambda}z]]\big{)},\] for \(x,y,z\in\mathcal{G}_{0}\) and \(h\in\mathcal{G}_{1}\). Here \(pr_{2}:(\mathcal{G}_{0}\oplus\mathcal{G}_{1})[[\lambda,\mu]]\to\mathcal{G}_{1} [[\lambda,\mu]]\) is the projection onto the second factor. Then it is easy to verify that \(\big{(}\mathcal{G}_{1}=\ker(s)\xrightarrow{t}C_{0}=\mathcal{G}_{0},\rho_{2}, \rho_{3}\big{)}\) is a \(2\)-term \(Leib_{\infty}\)-conformal algebra. Next, let \((C,[\cdot_{\lambda}\cdot],\mathcal{L})\) and \((C^{\prime},[\cdot_{\lambda}\cdot]^{\prime},\mathcal{L}^{\prime})\) be two Leibniz conformal \(2\)-algebras and \((F=(F_{0},F_{1}),F_{2})\) be a homomorphism between them. We consider the corresponding \(2\)-term \(Leib_{\infty}\)-conformal algebras \(\big{(}\mathcal{G}_{1}=\ker(s)\xrightarrow{t}C_{0}=\mathcal{G}_{0},\rho_{2}, \rho_{3}\big{)}\) and \(\big{(}\mathcal{G}_{1}^{\prime}=\ker(s^{\prime})\xrightarrow{t}C_{0}^{\prime }=\mathcal{G}_{0}^{\prime},\rho_{2}^{\prime},\rho_{3}^{\prime}\big{)}\), respectively. We define \(\mathbb{C}[\partial]\)-module maps \(f_{0}:C_{0}\to C_{0}^{\prime}\), \(f_{1}:\ker(s)\to\ker(s^{\prime})\) and a conformal sesquilinear map \(f_{2}:C_{0}\otimes C_{0}\to\ker(s^{\prime})[[\lambda]]\) by \(f_{0}=F_{0}\), \(f_{1}=F_{1}|_{\ker(s)}\) and \(f_{2}(x,y)=(F_{2})^{x,y}-1_{s((F_{2})^{x,y})}\), for \(x,y\in C_{0}\). Then it turns out that \((f_{0},f_{1},f_{2})\) defines a homomorphism between the above \(2\)-term \(Leib_{\infty}\)-conformal algebras. Therefore, we obtain a functor \(S:\mathbf{LeibConf2}\to\mathbf{2Leib}_{\infty}\mathbf{Conf}\). Conversely, let \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) be a \(2\)-term \(Leib_{\infty}\)-conformal algebra. First, we consider the \(2\)-vector space \(C=(\mathcal{G}_{0}\oplus\mathcal{G}_{1}\rightrightarrows\mathcal{G}_{0})\) in the category of \(\mathbb{C}[\partial]\)-modules. We now define a conformal sesquilinear functor \([\cdot_{\lambda}]:C\otimes C\to C[[\lambda]]\) by \[[(x,h)_{\lambda}(y,k)]:=\big{(}(\rho_{2})_{\lambda}(x,y),\ (\rho_{2})_{ \lambda}(x,k)+(\rho_{2})_{\lambda}(h,y)+(\rho_{2})_{\lambda}(dh,k)\big{)},\] for \((x,h),(y,k)\in C_{1}=\mathcal{G}_{0}\oplus\mathcal{G}_{1}\). 
We define the conformal Leibnizator by \[\mathcal{L}_{x,y,z}:=\big{(}[x_{\lambda}[y_{\mu}z]],-(\rho_{3})_{ \lambda,\mu}(x,y,z)\big{)},\ \text{for}\ x,y,z\in\mathcal{G}_{0}. \tag{9}\] It follows from the condition (v) of Definition 4.1 that the conformal Leibnizator \(\mathcal{L}_{x,y,z}\) has the source \([x_{\lambda}[y_{\mu}z]]\) and the target \([[x_{\lambda}y]_{\lambda+\mu}z]+[y_{\mu}[x_{\lambda}z]]\), for \(x,y,z\in\mathcal{G}_{0}\). It is also easy to check that \(\mathcal{L}_{x,y,z}\) is a natural isomorphism. Finally, a direct computation shows that the conformal Leibnizator (9) makes the diagram (8) commutative. Thus, we obtain a Leibniz conformal \(2\)-algebra \((C=(\mathcal{G}_{0}\oplus\mathcal{G}_{1}\rightrightarrows\mathcal{G}_{0}),[ \cdot_{\lambda}\cdot_{]},\mathcal{L})\). Next, suppose \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) and \((\mathcal{G}_{1}^{\prime}\xrightarrow{d^{\prime}}\mathcal{G}_{0}^{\prime}, \rho_{2}^{\prime},\rho_{3}^{\prime})\) are \(2\)-term \(Leib_{\infty}\)-conformal algebras, and say \((f_{0},f_{1},f_{2})\) is a homomorphism between them. Consider the corresponding Leibniz conformal \(2\)-algebras \(\big{(}C=(\mathcal{G}_{0}\oplus\mathcal{G}_{1}\rightrightarrows\mathcal{G}_{0} ),[\cdot_{\lambda}\cdot],\mathcal{L}\big{)}\) and \(\big{(}C^{\prime}=(\mathcal{G}_{0}^{\prime}\oplus\mathcal{G}_{1}^{\prime} \rightrightarrows\mathcal{G}_{0}^{\prime}),[\cdot_{\lambda}\cdot_{]^{\prime}},\mathcal{L}^{\prime}\big{)}\). We define a \(\mathbb{C}[\partial]\)-linear functor \(F=(F_{0},F_{1}):C\to C^{\prime}\) by \[F_{0}=f_{0}:\mathcal{G}_{0}\to\mathcal{G}_{0}^{\prime}\ \ \text{and}\ \ F_{1}=f_{0}\oplus f_{1}: \mathcal{G}_{0}\oplus\mathcal{G}_{1}\to\mathcal{G}_{0}^{\prime}\oplus\mathcal{G}_{1}^{ \prime},\] and a natural isomorphism \((F_{2})^{x,y}:[F_{0}(x)_{\lambda}F_{0}(y)]^{\prime}\to F_{0}[x_{ \lambda}y]\) by \((F_{2})^{x,y}=\big{(}[f_{0}(x)_{\lambda}f_{0}(y)]^{\prime},-(f_{2})_{\lambda}(x, y)\big{)}\), for \(x,y\in\mathcal{G}_{0}\). Then a straightforward calculation shows that \((F=(F_{0},F_{1}),F_{2})\) is a homomorphism between the above Leibniz conformal \(2\)-algebras. Hence, we obtain a functor \(T:\mathbf{2Leib}_{\infty}\mathbf{Conf}\to\mathbf{LeibConf2}\). In the following, we construct natural isomorphisms \(\alpha:T\circ S\Rightarrow\mathbf{1_{LeibConf2}}\) and \(\beta:S\circ T\Rightarrow\mathbf{1_{2Leib_{\infty}}\mathbf{Conf}}\) that will show the equivalence of the categories \(\mathbf{LeibConf2}\) and \(\mathbf{2Leib_{\infty}\mathbf{Conf}}\). Let \((C,[\cdot_{\lambda}\cdot],\mathcal{L})\) be a Leibniz conformal \(2\)-algebra. If we apply the functor \(S\), we obtain the \(2\)-term \(Leib_{\infty}\)-conformal algebra \((\ker(s)\xrightarrow{t}C_{0},\rho_{2},\rho_{3})\). Further, if we apply the functor \(T\), we obtain a new Leibniz conformal \(2\)-algebra, say \(\big{(}C^{\prime}=(C_{0}\oplus\ker(s)\rightrightarrows C_{0}),[\cdot_{\lambda} \cdot]^{\prime},\mathcal{L}^{\prime}\big{)}\). We define a \(\mathbb{C}[\partial]\)-linear functor \(\alpha_{C}=\big{(}(\alpha_{C})_{0},(\alpha_{C})_{1}\big{)}:C^{\prime}\to C\) by \((\alpha_{C})_{0}(x)=x\) and \((\alpha_{C})_{1}(x,h)=1_{x}+h\), for \(x\in C_{0}\), \((x,h)\in C_{0}\oplus\ker(s)\). 
There is also the identity natural isomorphism \[(\alpha_{C})_{2}^{x,y}=\mathrm{id}:[(\alpha_{c})_{0}(x)_{\lambda}(\alpha_{C})_{0} (y)]=[x_{\lambda}y]\to(\alpha_{C})_{0}[x_{\lambda}y]^{\prime}=[x_{\lambda}y].\] It is also easy to see that \(\big{(}\alpha_{C}=\big{(}(\alpha_{C})_{0},(\alpha_{C})_{1}\big{)},(\alpha_{C})_{2} \big{)}\) is an isomorphism of Leibniz conformal \(2\)-algebras from \((C^{\prime},[\cdot_{\lambda}]^{\prime},\mathcal{L}^{\prime})\) to \((C,[\cdot_{\lambda}\cdot],\mathcal{L})\). This construction yields a natural isomorphism \(\alpha:T\circ S\Rightarrow\mathbf{1_{LeibConf2}}\). Next, let \((\mathcal{G}_{1}\xrightarrow{d}\mathcal{G}_{0},\rho_{2},\rho_{3})\) be a \(2\)-term \(Leib_{\infty}\)-conformal algebra. If we apply \(S\circ T\), we get back the same \(2\)-term \(Leib_{\infty}\)-conformal algebra. Therefore, we may take the natural isomorphism \(\beta:S\circ T\Rightarrow\mathbf{1_{2Leib_{\infty}Conf}}\) to be the identity natural isomorphism. This completes the proof. ### An example of a Leibniz conformal \(2\)-algebra Let \(A\) and \(B\) be two finite \(\mathbb{C}[\partial]\)-modules. A _conformal linear map_ from \(A\) to \(B\) is a \(\mathbb{C}\)-linear map \(f:A\to B[\lambda],x\mapsto f_{\lambda}(x)\) that satisfies \(f_{\lambda}(\partial x)=(\partial+\lambda)f_{\lambda}(x)\), for all \(x\in A\). We denote the space of all conformal linear maps from \(A\) to \(B\) by the notation \(\mathrm{CHom}(A,B)\). The space \(\mathrm{CHom}(A,B)\) has a canonical \(\mathbb{C}[\partial]\)-module structure with the action of \(\partial\) given by \((\partial f)_{\lambda}=-\lambda f_{\lambda}\), for all \(f\in\mathrm{CHom}(A,B)\). We use the notation \(\mathrm{CEnd}(A)\) for the space \(\mathrm{CHom}(A,A)\) of all conformal linear maps from \(A\) to \(A\). Then there is a \(\lambda\)-product \(\cdot_{\lambda}:\mathrm{CEnd}(A)\otimes\mathrm{CEnd}(A)\to\mathrm{CEnd}(A)[\lambda]\) given by \[(f_{\lambda}g)_{\mu}x:=f_{\lambda}(g_{\mu-\lambda}(x)),\text{ for }f,g\in \mathrm{CEnd}(A),x\in A\] which makes \(\mathrm{CEnd}(A)\) into an associative conformal algebra in the sense of [4]. As a consequence, \((\mathrm{CEnd}(A),[\cdot_{\lambda}\cdot])\) is a Lie conformal algebra, where the \(\lambda\)-bracket is given by \[[f_{\lambda}g]:=f_{\lambda}g-g_{-\partial-\lambda}f,\text{ for }f,g\in\mathrm{CEnd }(A).\] It is easy to see that the Lie conformal algebra \((\mathrm{CEnd}(A),[\cdot_{\lambda}\cdot])\) has a representation on the \(\mathbb{C}[\partial]\)-module \(A\) with the \(\lambda\)-action \(f_{\lambda}x:=f_{\lambda}(x)\), for \(f\in\mathrm{CEnd}(A)\) and \(x\in A\). Thus, the direct sum \(\mathbb{C}[\partial]\)-module \(\mathrm{CEnd}(A)\oplus A\) carries a Leibniz conformal algebra structure the \(\lambda\)-bracket \[\{(f,x)_{\lambda}(g,y)\}=([f_{\lambda}g],f_{\lambda}y),\text{ for }(f,x),(g,y)\in \mathrm{CEnd}(A)\oplus A.\] In the following, we will generalize the above construction in the graded context. Let \(A:(A_{1}\xrightarrow{d}A_{0})\) be a complex of finite \(\mathbb{C}[\partial]\)-modules. We define \[\mathrm{CEnd}^{0}(A)=\{(f_{0},f_{1})| \ f_{0}:A\to A_{0}[\lambda]\text{ and }f_{1}:A_{1}\to A_{1}[\lambda]\text{ are }\] \[\text{conformal linear maps with }d\circ f_{1}=f_{0}\circ d\}\] and \(\mathrm{CEnd}^{1}(A)=\mathrm{CHom}(A_{0},A_{1})\). Note that both the spaces \(\mathrm{CEnd}^{0}(A)\) and \(\mathrm{CEnd}^{1}(A)\) has obvious \(\mathbb{C}[\partial]\)-module structures. 
Then the map \(\Delta:\mathrm{CEnd}^{1}(A)\to\mathrm{CEnd}^{0}(A)\) given by \(\Delta(h)=(d\circ h,h\circ d)\) is a \(\mathbb{C}[\partial]\)-module map. Moreover, the graded \(\mathbb{C}[\partial]\)-module \(\mathrm{CEnd}(A):=\mathrm{CEnd}^{0}(A)\oplus\mathrm{CEnd}^{1}(A)\) carries a graded Lie conformal algebra structure with the \(\lambda\)-bracket \[[(f_{0},f_{1})_{\lambda}(g_{0},g_{1})]:=\big{(}(f_{0})_{\lambda}(g _{0})-(g_{0})_{-\partial-\lambda}(f_{0}),(f_{1})_{\lambda}(g_{1})-(g_{1})_{- \partial-\lambda}(f_{1})\big{)},\] \[[(f_{0},f_{1})_{\lambda}h]=-[h_{-\partial-\lambda}(f_{0},f_{1})]: =(f_{1})_{\lambda}h-h_{-\partial-\lambda}(f_{0}),\] for \((f_{0},f_{1}),(g_{0},g_{1})\in\mathrm{CEnd}^{0}(A)\) and \(h\in\mathrm{CEnd}^{1}(A)\). With the above operations, \((\mathrm{CEnd}(A),\Delta,[\cdot_{\lambda}\cdot])\) is a differential graded Lie conformal algebra. Further, it is easy to verify that the differential graded Lie conformal algebra \((\mathrm{CEnd}(A),\Delta,[\cdot_{\lambda}\cdot])\) has a representation on the graded \(\mathbb{C}[\partial]\)-module \(A\) with the \(\lambda\)-action given by \[(f_{0},f_{1})_{\lambda}x=(f_{0})_{\lambda}x,\quad(f_{0},f_{1})_{\lambda}v=(f_{1 })_{\lambda}v,\quad h_{\lambda}x=h_{\lambda}(x),\quad h_{\lambda}v=0,\] for \((f_{0},f_{1})\in\mathrm{CEnd}^{0}(A)\), \(h\in\mathrm{CEnd}^{1}(A)\), \(x\in A_{0}\) and \(v\in A_{1}\). As a consequence, the direct sum \[\mathrm{CEnd}(A)\oplus A=\big{(}\mathrm{CEnd}^{0}(A)\oplus A_{0}\big{)}\oplus \big{(}\mathrm{CEnd}^{1}(A)\oplus A_{1}\big{)}\] carries a differential graded Leibniz conformal algebra structure with the differential given by \(\Delta+d:\operatorname{CEnd}^{1}(A)\oplus A_{1}\to\operatorname{CEnd}^{0}(A)\oplus A _{0}\) and the \(\lambda\)-bracket \[\big{\{}\big{(}(f_{0},f_{1}),x\big{)}_{\lambda}\big{(}(g_{0},g_{1} ),y\big{)}\big{\}} =\big{(}[(f_{0},f_{1})_{\lambda}(g_{0},g_{1})],(f_{0},f_{1})_{ \lambda}y\big{)},\] \[\big{\{}\big{(}(f_{0},f_{1}),x\big{)}_{\lambda}(h,v\big{)}\big{\}} =\big{(}[(f_{0},f_{1})_{\lambda}h],(f_{0},f_{1})_{\lambda}v\big{)},\] \[\big{\{}(h,y)_{\lambda}\big{(}(f_{0},f_{1}),x\big{)}\big{\}} =\big{(}[h_{\lambda}(f_{0},f_{1})],h_{\lambda}x\big{)},\] for any \(\big{(}(f_{0},f_{1}),x\big{)},\big{(}(g_{0},g_{1}),y\big{)}\in\operatorname{ CEnd}^{0}(A)\oplus A_{0}\) and \((h,y)\in\operatorname{CEnd}^{1}(A)\oplus A_{1}\). Since it is a \(2\)-term differential graded Leibniz conformal algebra, it can be considered as a \(2\)-term \(Leib_{\infty}\)-algebra with vanishing \(\rho_{3}.\) Hence it is an example of a strict \(Leib_{\infty}\)-conformal algebra. Therefore, it can be realized as a Leibniz conformal \(2\)-algebra with the trivial conformal Leibinator. ### Conflict of interest statement There is no conflict of interest. **Data Availability Statement.** Data sharing does not apply to this article as no new data were created or analyzed in this study. ### Acknowledgements Anupam Sahoo would like to thank CSIR, Government of India for funding the PhD fellowship. Both authors thank the Department of Mathematics, IIT Kharagpur for providing the beautiful academic atmosphere where the research has been carried out.
2308.04896
Why Data Science Projects Fail
Data Science is a modern data intelligence practice which is at the core of many businesses and helps them build smart strategies to deal with business challenges more efficiently. Data Science practice also helps in automating business processes using algorithms, and it has several other benefits, which it also delivers in non-profit settings. In regards to data science, three key components primarily influence the effective outcome of a data science project. Those are 1. Availability of Data 2. Algorithm 3. Processing power or infrastructure
Balaram Panda
2023-08-08T06:45:15Z
http://arxiv.org/abs/2308.04896v1
# Why Data Science Projects Fail ###### Abstract Data Science is a modern data intelligence practice which is at the core of many businesses and helps them build smart strategies to deal with business challenges more efficiently. Data Science practice also helps in automating business processes using algorithms, and it has several other benefits, which it also delivers in non-profit settings. In regards to data science, three key components primarily influence the effective outcome of a data science project. Those are 1. Availability of Data 2. Algorithm 3. Processing power / infrastructure In today's technology world, there is no limitation on data or processing power, and we have much more efficient algorithms to produce the desired output. In spite of the success of many Data Science projects, many others still fail and are unable to produce the desired outcome. In this paper, we have explored the bottlenecks of Data Science projects and provided some recommendations to make data science projects more successful. The standard Data Science project development life-cycle, CRISP-DM [1], is old in this agile development [31] world, but most Data Science practices still follow CRISP-DM. In general, a Data Scientist analyses scenarios where a predictive model or machine learning model might fail. This study, however, analyzes when and why a Data Science project fails despite an excellent model. Data Science is a diverse field. It needs technical as well as business knowledge to deliver a project. Hence, to understand why Data Science projects fail, we need to understand challenges from both the business side and the technical side. Software Engineering, Data Science, Machine Learning, CRISP-DM ## I Introduction Data Science is comparable to applied research. Often in research, there are lots of repetitive processes to be followed until the desired outcome is generated. Similarly, data science processes are often repetitive until they meet the business objectives. Those business objectives are set during the repeated Business Understanding and Data Understanding phases of CRISP-DM. CRISP-DM was developed during the 1990s, and since then, the components of Data Science have evolved a lot in regards to the way data is collected, the volume, velocity, and variety of the information being analyzed (big data [24]), processing methodology, algorithms, and outcome delivery. Hence it is crucial to understand in detail the current day's Data Science challenges, deep dive into the reasons for failures, and find out the best practices. Data Science is an evolving field, and a lot has changed over the last decade in various key areas, such as data collection, big data processing on cloud and/or parallel or distributed processing frameworks, the evolution of new algorithms, data visualization, and the deployment and productionizing of Data Science solutions as digital products. There are several challenges in those key areas; as an example, because of big data, a Data Science project collects lots of irrelevant and uncleaned data. Sometimes the intermediate data source doesn't have the correct business logic mapped to the business objective of the data science project. "big data brings lots of "big errors" in data quality and data usage, which cannot be used as a substitute for sound research design and solid theories." [4] Data quality is one of the common challenges in Data Science projects.
Another major challenge is domain understanding, and many times domain understanding is a bottleneck for Data Science project success. Also, in many cases, data science issues are seen differently and vary from one domain to another, and between the stakeholder view and the data scientist view. As mentioned by Dr. Om Deshmukh [9], "Data Science project success also depends on what the stakeholder knows about the expected outcome of the data science project and how effectively stakeholder expectations are being managed through effective communication." Data Science projects struggle with these types of challenges and other challenges in industry settings, but not enough research has been done in this space to highlight those problems and their possible solutions. For better quality research in this domain, years of experience in the Data Science field and role/designation-level diversity in data science projects, such as key stakeholders and data scientists, have been included. We prepared a list of questions and interviewed experienced Data Scientists, senior leaders who actively manage data science and data analytics teams, and stakeholders who are part of data science projects, with the motivation to understand their perception of data science project failure. We then did a thematic analysis to analyze and draw conclusions from the data, and came up with the most popular themes by thoroughly analyzing the discussions. In this research, we have discovered and highlighted the key challenging areas where Data Science practice and Data Scientists should focus, along with some suggestions on tackling those challenges. We also came up with a state-of-the-art framework to tackle those challenges in efficient ways. ## II Research Methodology ### _Research methodology_ In this research, an interview-based, semi-structured qualitative research methodology [8] has been used. ### _Interview Design_ In this research, the objective is also to capture diversity in regards to the people involved in a data science project, such as the Data Scientist, the business owner and/or the data science capability leader and/or the stakeholder. Hence, for a better outcome, two types of interview questions have been prepared: * Data Scientist with some years of experience working in Data Science projects * Business owner / leader / stakeholder The interviews were done via Zoom software. During the interview, answers and key points were captured. Also, the interview voice recordings have been stored in Google Drive for reference. ### _Candidate selection for interview and Sample Diversity_ Interview candidates were contacted via reference and LinkedIn (a professional networking website). We manually reviewed their background and put them into the Group A or Group B category based on their experience. Some candidates were also selected through reference. While selecting a reference candidate, the Snowball technique [5] was used. Snowball sampling helped to capture an unbiased sample of interview candidates. During the selection process, the candidates' domain experience diversity, such as Banking and Finance, Retail, Software Product, IT Services, Telecommunication etc., was considered. In this research, the objective also is to add domain diversity so that Data Science problems can be viewed from a normalized and generic perspective. Also, to capture more diversity, candidates were selected from various geographies such as India, the USA and Canada, in addition to New Zealand.
### _Preparation of research Questions_ In first iteration a list of questions has been prepared from prior knowledge and after going through literature review papers [10][11][12] Then questions are discussed with the research supervisor. Based on the discussion few questions are added, and a few are modified. A final list of questions has been prepared and shared with the supervisor in the second iteration. Then after 2nd interview, 2 more questions were added to make the interview more target-oriented. * Data Scientist with some years of experience working in Data Science project * Business owner / leader / stakeholder ### _Research Questions_ ### _Group A Questions_ 1. What are the key reasons for a Data Science project success as per your experience 2. What are key reasons for Data Science project failure as per your experience 3. What steps Data Science should follow to match user expectations? Can you refer to any specific project and explain it in detail? 4. How stakeholder can contribute towards Data Science project success, and how important is it? 5. How value measured in the Data Science project and who are involved and what are the challenges? Any example based on a specific project? 6. Do you think infrastructure is a challenge for any data science project? 7. How important Data Quality is for Data Science projects? 8. What are the hindrances to maintaining data quality? 9. What are the solutions to overcome those hindrances? 10. What testing approaches do you take to test the model based on business user expectations? 11. Some detailed discussion on failure mentioned in open ended. Like how to solve model explainability? Is there any approach you followed, or can you suggest from your experience how to maintain model explainability and without compromising the model accuracy? 12. Does the level of competencies affect project outcome and how to solve that? 13. How often do Data Science projects productionize and the pitfalls you experienced during productionization? 14. Many Data Science projects help automate the existing process; what are the challenges you face during knowledge elicitation from business users in those cases? 15. How challenging is change management in Data Science? 16. There are limitations of Data usability due to governance restriction? How often do you see this as a limitation, and how do you solve these? In Group A, questions from 1 to 6 are open ended questions and 6 to 16 are focused questions. **Group B Questions** 1. What are the essential things Data Scientists must do to make a Data Science project successful? 2. What is your view on Stakeholder's contribution to a Data Science project? 3. What are the critical gaps between a successful and unsuccessful Data Science Project from stakeholder/business owner's perspective? 4. What improvement do you want to make towards the traditional Data Science approach to make it more valuable for business? 5. How important change management is for a Data Science Project? Any suggestions to make change management successful? 6. Data Science projects have lots of intangible benefits; how do you create value metrics for Data Science projects? How often are you able to achieve that? 7. Do you follow any ROI metrics for Data Science projects? (question 6 extension) 8. How do you define a Data Science project different from a Business Intelligence project? (Optional question) ### _Interview Process and Data Collection_ Interviewer contacted via Linked-in and via emails. 
Once they confirmed their participation, the consent document was shared, and formal consent approval was obtained before the interview. Interviews were done via Zoom and in-person meetings. During the interviews, answers and critical points were captured, and the interview voice recordings were stored in Google Drive for reference. Only the question-answer part of the interview was recorded. In an average interview, the question-answer part lasted around 30 minutes; for the introduction, including the research motivation, we spent around 8 to 10 minutes in each interview. ### _Data Understanding and Analysis_ Thematic analysis [6] has been used to analyse the data. Thematic analysis is a popular method for qualitative research. As suggested by Braun and Clark [30], there are 6 steps of thematic analysis, as follows: **Step 1:**: Become familiar with the data **Step 2:**: Generate initial codes **Step 3:**: Search for themes **Step 4:**: Review themes **Step 5:**: Define themes **Step 6:**: Write-up As the questions were designed before the interviews, we familiarised ourselves with the data during the interviews, and notes were taken during each interview. The steps from code generation to theme generation are described in the following Theme Generation section, and the write-up step is covered in the results section. ### _Theme Generation_ There are various orientations in thematic analysis, as mentioned in [26, 27]. In this research, we have used a combination of the Inductive and Deductive approaches. We have prior knowledge from experience and the literature review; based on that, questions were prepared on the most common issues known so far. From the interviews, we learned about new issues that Data Science projects are experiencing and solutions for how to tackle them. After the interviews, we iterated over the recordings, listed all the key issues that the experts mentioned, and mapped them to the questions. We then clustered those issues together to identify the common themes among them. In addition to the themes, we also collected solutions and workarounds for tackling those issues. ## III Results ### _Effective Stakeholder Management:_ Stakeholders are the people representing a department, owning a business product, or a group of people who benefit from a project's success. In Data Science, the stakeholder can be a business process owner, a product owner, or an end-user who is going to use the Data Science outcome or product. Many participants mentioned how crucial it is to communicate with and manage stakeholders effectively. C1 also mentioned that data scientists need to communicate well across different project stakeholders like project managers, business stakeholders, and data engineers. "Stakeholders should share equal responsibility as data scientist to make the project successful" - C1. But often, stakeholders may not be available all the time as they have other priorities. It is also often found that there is a gap between what the stakeholder thinks Data Science can solve and what data science can actually solve for the business. C1 mentioned that, from her experience, data science projects are more likely to be successful where the data scientist and the stakeholder work together towards a common business goal. C2 mentioned that the stakeholder should take equal responsibility with the data scientist to make the Data Science project successful. According to [13], data science teams often struggle to embed their models into business processes. 
Also, stakeholders often cannot articulate the problems they are aiming to solve. According to [13], there is a significant gap between stakeholders and data science teams that needs to be recognized and addressed; [13] suggests that data science needs to bridge the gap between organizational structure and leadership commitment to develop better communication and processes, and to gain trust among all stakeholders. ### _Business Problem Understanding:_ All the participants mentioned that business problem understanding is the most critical step in a data science project. Business problem understanding, together with effective stakeholder communication, improves the success criteria of a data science project. During the business problem understanding process, the data science team gets an overview of the business process and tries to articulate the problem that the business is trying to solve. It helps the data science team get more clarity around the business objective and the business process, and some understanding of the data. The Data Science team then prepares a plan to check for data availability. This also helps define the target variable or dependent variable in the case of a supervised modeling approach. In this process, the Data Science team understands the business problem well and defines the business objective and the target/dependent/outcome variable for the predictive model or other type of modeling task. The more accurately the outcome variable is defined, the higher the chances of success of the data science project. ### _Data Quality issues:_ Data quality is another common issue which came out as a theme. Most participants from Group A mentioned data quality issues. They also mentioned that most teams do not want to spend much time on fixing data quality issues and focus more on deliverables, which can be measured, because the time spent is not mapped to any tangible benefit at this stage. So the idea is to create something which can be used for multiple projects. For example, the DW (Data Warehousing) and Cube concepts got more attention when BI (Business Intelligence) generated value for the organisation and BI/DW teams started creating assets which could be reused for many different projects. ### _Model Deployment and Production:_ Several participants from Group A and Group B described that many Data Science projects are not deployable, or fail, due to the complexity of productionizing the model or the outcome. For example, an antivirus software company built a predictive model with 99.1% accuracy on test data. No doubt that is one of the best models, but if the expectation is that the model should be integrated into a real-time environment, that might become tricky. That requires software engineering skill, and the key thing is how a model file can be integrated into the existing programming environment. These things need to be planned before the project starts, during the business understanding and planning phase, and the project kick-off should take the complexity of implementation into account. If it is feasible to implement, then the production implementation plan needs to be discussed and agreed with the stakeholder at the beginning of the project. ### _Department level Collaboration:_ Several Group B candidates highlighted that cross-department collaboration is one of the major challenges when delivering a Data Science outcome. Many times the model performs well and the implementation is done, but due to a lack of department-level integration the project may not be useful for the organization. 
For example, a market propensity model can produce a very accurate list of target customers, but that list needs to be integrated into the marketing channel. Even when it is integrated, the frontline channel might de-prioritize the request due to some other priority they are handling. These situations are outside the control of Data Science and the Stakeholder. ### _Change Management:_ Most data science projects are used either for strategy or for time or cost savings, using automation to reduce manual effort. Sometimes this creates resistance from the people affected. Several candidates from Group B mentioned that change management is one of the challenges in any data science implementation. But this is not the key challenge, as there are strategies to tackle it. A few participants also mentioned that the solution for change management is sometimes out of the control of the Data Science team. As suggested by C2, "Change management is a specialized area and needs specialized people to handle such things." "Change management is not a key challenge" - C3. This may be because he is working at Google, where automation using algorithms is always promoted. To handle change management, the "DS team need to engage with the team going to be affected very early and try to gain confidence from the team affected by illustrating and explaining how this is going to improve their work" - C1. ### _Data Governance:_ Data governance processes, roles, policies, and standards have become stronger after the GDPR (General Data Protection Regulation) [21] law. "Around 30% of data science projects are unable to go to production due to Data Governance rules" - C3. The same issues due to data governance were also mentioned by various other participants from Group A and Group B. Data governance is a must-have standard for any organization using data as an asset. But to avoid late failure of data science projects, data governance team engagement needs to be discussed at an early stage of the project, so that the data governance team can advise on an alternative solution to get the data, or advise not to opt for the project because the data used in the Data Science project does not meet the governance standard. In addition to that, organisations need to adopt PII (personally identifiable information) masking approaches to mask the data. ## IV Discussion and Recommendations There are 7 top themes generated [Table II] in this research. More emphasis was given to Effective Stakeholder Management: most of the interview participants mentioned challenges during stakeholder management, while a few mentioned that stakeholder management is not that challenging. Also, the business leaders who sometimes act as stakeholders (in our interview panel) agreed that there is a gap in understanding the data science process from a stakeholder lens. Usually, data science projects have more than one stakeholder. In this multi-stakeholder environment, it is always good to have a strategy around stakeholder management. A stakeholder can have different types of posture [15] based on the RDAP (reactive, defensive, accommodative, or proactive) strategy (see Fig. 1). The same can be applied to Data Science projects. As C2 mentioned, stakeholders should share equal responsibility. Hence, for a successful data science project, the strategy should be Accommodative or Proactive, where the stakeholder postures are Accept responsibility and Anticipate responsibility, respectively, as described in Fig. 1. But to do that, as suggested by a few candidates, every Data Science project should have top-management buy-in to make the DS project successful. 
Sometimes these stakeholder postures are due to the nature of the organization or the nature of the data science application. Sometimes the posture varies at the department level, even in the same organization. For example, in a banking and finance business, the posture of the same stakeholder may vary from a Credit Risk project to a pure customer marketing project. Hence stakeholder communication and management should be strategic, and one strategy cannot be generalized for all data science projects. Another key challenge most data scientists mention is that most projects struggle to articulate the business problem well enough. Business problem understanding is not easy; it usually happens with stakeholders, business subject matter experts, and the data science team. It is difficult because it requires a substantial amount of prior business knowledge to understand the business objective clearly. Another challenge is that sometimes it is hard to define a uniform business process or rule even within the same department. For example, in a credit risk modeling scenario, the objective is to identify risky customers based on historical default customer data. But the definition of a default customer sometimes varies from one business SME to another. Some SMEs define a customer as a default customer if they have not made a payment 60 days after the due date. In some cases, an SME may consider 90 days, or sometimes a first-time defaulter is not treated as a true defaulter, as the first default can be due to negligence or a mistake. So it is critical to understand the business problem definition clearly and, once you get some level of understanding, to validate that understanding in a meeting with the business SME. Sometimes the stakeholder comes with the mindset that a predictive model or machine learning is a solution to any type of business problem. In some cases that is true, but the definition of the business rule should be clear enough to make any Data Science project successful. As noted in [13], stakeholders are sometimes not sure what they want. This usually happens in business strategy projects. In this business understanding process, data scientists should also take the opportunity to help refine the stakeholder objective over several meetings. But often, business understanding is not given priority, for the following reasons. 1. Stakeholder and Data Scientist are both very excited to start working on the project 2. Data scientists cannot ask the right questions initially due to a lack of domain knowledge So to solve this issue, the business SME and the Data Scientists should spend a good amount of time articulating the business problem clearly. Another challenge in Data Science we found from this research is data quality. Data quality has always been an issue. Spending a good amount of time on fixing data quality is always difficult because the effort/time-to-value mapping in a data quality framework is always missing; hence fixing data quality never gets prioritized. To understand data quality, we need to understand what data quality means. Data quality has three dimensions, as mentioned in [20]: 1. Accuracy: Are the data free of error? 2. Timeliness: Are the data up-to-date? 3. Completeness: Are necessary data present? Data quality issues arise due to three main reasons: 1. Lack of a standardized data collection process across the organization 2. Errors due to humans or software 3. Errors in the business logic used to process the data 
Fig. 1: Stakeholder Management strategy [15] The solutions we propose to fix data quality: 1. Better data collection standards starting from application-level data generation 2. Centralized data store: An organization needs to spend time building a feature store with the help of a robust data pipeline. The objective of the feature store is to create datasets which are of high quality and useful for various DS projects. Ideally the team should list all the possible DS projects and start mapping the possible features/data which need to be created. Another key issue identified from this research is the lack of department-level collaboration, leading to low-value outcomes, because the department which should use the product does not use the model, as it might have other priorities to attend to. The solution is to list all the departments going to be affected by the outcome, the departments going to use the outcome, and the department going to be involved in hosting the solution. This needs to be discussed keeping the stakeholder in the loop. After this department-level engagement, the Data Science team needs to share a deployment plan mentioning how the outcome can be deployed or reused. Other issues that came out as themes, such as change management and data governance, are not the key bottlenecks for data science project success. However, one participant mentioned that "around 30% of the time they lose the project due to data governance and privacy limitations" - C3. The earlier Data Science assesses the data privacy and governance issues, the better it is for the project, so that Data Science has the opportunity to negotiate or to find another source which can be approved by governance. We discussed the problems and the solutions in the discussion section. In this section, we have summarized the solutions based on the interviews and on our knowledge and experience in Data Science. Based on this research work, we recommend three key areas where improvement is needed: 1. Stakeholder Management 2. Data Quality 3. Durable and deployable outcomes A potential data science project might fail due to the reasons we have discussed so far. The Data Science team needs to have good stakeholder management skills to set the expectations of the stakeholder at an early stage based on the business problem understanding. A data scientist should also have the ability to highlight, ahead of time, the potential risks that might occur due to data privacy and/or data security and/or data governance restrictions. In addition, data scientists should have visibility of the risk of failure due to factors such as data quality issues and/or deployment limitations arising from the complexity of the software platform and/or IT infrastructure. To create fail-proof data quality management for the Data Science project, the Data Science team needs to spend dedicated time checking the data quality and plan to engage the data engineers to improve the data quality standard. Also, to validate the requirements with the stakeholder, the Data Scientist needs to build a deployment plan ahead of time and share it with the stakeholder in the early phase of the project. By combining all of these, we propose a new Data Science methodology called HYBRID-CRISP-DS (Hybrid Cross Industry Standard Process for Data Science), which takes care of all the above hindrances and makes Data Science projects less prone to failure. The following figure (Fig. 2) represents HYBRID-CRISP-DS. 
In this HYBRID-CRISP-DS methodology, the stakeholder and the business SME are always in the loop during the various phases of the Data Science project. As shown in Fig. 2, emphasis has been given to variable selection after consulting a business SME (subject matter expert), which is an iterative process. Soon after, that needs to be validated through DGP (Data Governance and Privacy) in collaboration with the DG (Data Governance) team. Based on the outcome from DGP, the project proceeds to the next step, DDP (deployment and delivery plan), or negotiates on variable selection for those variables rejected by DG. Based on the complexity of the requirements in DDP, dependencies on an SE (software engineer) or Data Engineer will be created. Once the deployment plan is ready, it needs to be validated by stakeholders. Then, the project flows through DCQ (Data Collection and Quality) to find the right data source and fix the quality issues where needed. After DCQ, the next step is to prepare the data, followed by model building and validation. Then the emphasis is given to outcome validation by the business user. The accuracy of a predictive model or any unsupervised model may not make sense to the business; businesses should always test the model with live data or with validation data on which the business can rely. Once the business is satisfied with the outcome, the project moves to deployment. Once the project is deployed, a proper drift monitoring technique is implemented to monitor drift in an automated fashion. Fig. 2: HYBRID-CRISP-DS ## V Future work Although several industry experts and researchers have suggested alternative approaches to CRISP-DM, such as TDSP (Team Data Science Process) by Microsoft [25] and SEMMA (Sampling, Exploring, Modifying, Modeling, and Assessing) by SAS [28], each comes with its own limitations. HYBRID-CRISP-DS is a new approach that came out of this research. In the future, this approach needs to be applied so that it can be observed where issues might occur while applying HYBRID-CRISP-DS in real-world data science projects. In this process, the Data Science project outcome has not been looked at through the lens of the Data Science project as a software product. The testing therefore ends in step 8 of the proposed methodology, after the business validates the modeling outcome. So the HYBRID-CRISP-DS approach can be researched and analyzed further for the Data-Science-as-a-product scenario, where the methodology may be slightly different with respect to business requirements, deployment, and software product testing. In addition to that, the proposed HYBRID-CRISP-DS is a series of processes; hence this research can be further extended to understand how the HYBRID-CRISP-DS process can be parallelized to make the Data Science process more efficient. The Data Science experts who participated in this interview process did not emphasize model explainability. However, model explainability can be a big roadblock in other domains such as healthcare or safety-critical environments. Even though this study has been conducted by interviewing experts from various domains and geographies, it can be further extended by including more professionals from a data science background. ## Acknowledgment Special thanks to my supervisor Dr Kelly Blincoe (Senior Lecturer in Software Engineering, University of Auckland) for her expert advice, guidance and excellent support during this research. 
We acknowledge all the candidates (Pragyansmita Nayak (C1), Confidential (C2), Mani Kanteswara Rao Garlapati (C3), Peter Devine (C4), Tural Gulammmadov (C5), Suresh Bommu (C6), Confidential (C7), Siddhant Gawsane (C8), Confidential (C9)) for their participation in the research interviews.
2304.14950
Dynamic Tracing: a graphical language for rewriting protocols
The category Set_* of sets and partial functions is well-known to be traced monoidal, meaning that a partial function S+U -/-> T+U can be coherently transformed into a partial function S -/-> T. This transformation is generally described in terms of an implicit procedure that must be run. We make this procedure explicit by enriching the traced category in Cat#, the symmetric monoidal category of categories and cofunctors: each hom-category has such procedures as objects, and advancement through the procedures as arrows. We also generalize to traced Kleisli categories beyond Set_*, providing a conjectural trace operator for the Kleisli category of any polynomial monad of the form t+1. The main motivation for this work is to give a formal and graphical syntax for performing sophisticated computations powered by graph rewriting, which is itself a graphical language for data transformation.
Kristopher Brown, David I. Spivak
2023-04-25T03:20:42Z
http://arxiv.org/abs/2304.14950v2
# Dynamic Tracing: a graphical language for rewriting protocols ###### Abstract The category **Set**, of sets and partial functions is well-known to be traced monoidal, meaning that a partial function \(S+U\rightharpoonup T+U\) can be coherently transformed into a partial function \(S\rightharpoonup T\). This transformation is generally described in terms of an implicit procedure that must be run. We make this procedure explicit by enriching the traced category in \(\mathbf{Cat}^{\sharp}\), the symmetric monoidal category of categories and cofunctors: each hom-category has such procedures as objects, and advancement through the procedures as arrows. We also generalize to traced Kleisli categories beyond **Set**, providing a conjectural trace operator for the Kleisli category of any polynomial monad of the form \(t+1\). The main motivation for this work is to give a formal and graphical syntax for performing sophisticated computations powered by graph rewriting, which is itself a graphical language for data transformation. Double pushout rewriting category theory graph rewriting ## 1 Motivation Explicitly constructed programs are the standard means of specifying data transformation. However, by forgoing the full expressivity of a general programming language, one can work within a restricted syntax--a _domain-specific language_--that has desirable properties. For example, flowcharts are often used as an informal syntax for software projects, where each box is associated with a subroutine that transforms data that flows on input wires into data that flows on output wires. Category theory makes this assignment of semantics precise: the syntax of directed wiring diagrams can formally be given a semantics in any symmetric monoidal category. There are advantages of this two-dimensional syntax over one-dimensional expression trees in a general purpose language, including transparent visualization, operadic substitution, and algebraic manipulation. Furthermore, when represented as a combinatorial object, the diagram itself is a particularly efficient normal form for a large number of syntax trees which it is equivalent to [18]. The field of graph transformation uses the syntax of spans in an adhesive category, interpreting them as rewrite rules. The semantics of deletion, copying, merging, and adding data can be attributed to these spans in various formalisms, notably DPO [7], SPO [13], and SqPO [6] rewriting. If these sorts of operations are all one needs, it is advantageous to work with this syntax rather than general expression trees, as its rules are more easily visualized and subject to static analysis. Despite these virtues, there has been difficulty in applying graph transformation in engineering practice [2, 27]. The cited reviews discuss various strategies for computation via graph rewriting. A straightforward method is unordered graph rewriting, where rewrite rules are applied in an arbitrary order, possibly with constraints; however, many applications require the expressivity of executing sequences of atomic rewrites in a systematic, domain-specific way, e.g. looping over a set of matches. Some earlier diagrammatic languages for this are based on directed graphs [5, 8], where vertices are rewrite rules and edges are \(\mathbb{B}\)-valued, indicating where to go next if the current rule either does match or doesn't match. There are also more expressive languages which propose a BNF grammar for graph programs[20, 19]. 
The control flow of such programs is implicit in the semantics given to constructors like while rather than explicit (e.g. the looping of a diagram). Current abstractions for programming with graph rewriting make it difficult for engineers to collaborate due to incompatibilities between software implementations or between the ontologies presupposed by rewrite rules. Furthermore, fine-grained control over _which_ match is used in a rewrite is not emphasized. We will demonstrate how a rich class of data transformations can be given the structure of a symmetric monoidal category with a conjectural trace operator, which licenses the use of wiring diagrams with feedback loops as a graphical syntax. After describing this class abstractly, we demonstrate how this formalism guides the user interface and implementation of software designed to construct elaborate computations built up out of rewrite rules. In Section 2 we lead up to the construction of a category \(\mathbf{D}\mathbf{K}_{t}\), parameterized by a polynomial monad \(t\), that will be sufficient for our rewriting application. In Section 3, we show it is a cocartesian monoidal category enriched in \(\mathbf{Cat}^{\sharp}\). We also offer a conjectural trace operator for monads meeting a certain criterion. In Section 4, we produce a domain-specific language (DSL), with semantics in \(\mathbf{D}\mathbf{K}_{t}\), for constructing sophisticated programs that manipulate data via rewrite rules. We conclude with a summary and future work in Section 5. ## Notation We assume familiarity with categories, functors, and enriched category theory. Many technical details related to **Poly** and its many monoidal structures \((+,\times,\otimes,\triangleleft)\) can be referenced in [25]; we elide these to instead focus on providing intuitions for these constructions in Section 2. In contrast, Section 3 requires a strong technical understanding of **Poly**. We will use the notation \([-,-]\) to refer to the internal hom in a category, while \((-,-)\) will be used for (co)pairing morphisms in a category with (co)products. We use \(\sum\) to denote the disjoint union of sets. We often denote the identity on an object simply by the object name, e.g. we use \(A\) to denote \(\operatorname{id}_{A}\). ## 2 Background In this section, we incrementally improve frameworks for modeling dynamical systems of the sort we need for rewriting by proposing a sequence of categories: **Meal**, **Meal**/\(\sim\), **DynMeal**, and finally, \(\mathbf{D}\mathbf{K}_{t}\). Each formalism should allow us to view a large system as the composition of smaller ones, where at any level of granularity we can consider local systems as enclosed boxes, which can be entered and exited and which evolve as we interact with them. ### Mealy machines We first consider the category **Meal** of Mealy machines: \[\operatorname{Ob}(\mathbf{Meal}):=\operatorname{Ob}(\mathbf{Set})\] \[\operatorname{Hom}_{\mathbf{Meal}}(A,B):=\left\{(S:\mathbf{Set},\;s_{0}:S,\;\phi\colon A\times S\to S\times B)\right\}\] A map could be called a _dynamic function_. It includes a set \(S\) of states and a particular state \(s_{0}\). Further, given any state \(s:S\), it provides both a function \(A\to B\) and, for any input \(a:A\), an updated state; all this is encoded in \(\phi(-,s)\colon A\to S\times B\). Thus we can think of a morphism in **Meal** as a function \(A\to B\) that is updated every time it receives an input. An example morphism in a toy model of chess pieces evading traps is given in Figure 1a-b. 
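To make the definition concrete, here is a minimal sketch in plain Julia (the language of the implementation discussed in Section 4); all names below are our own choices, not part of any library. It encodes a morphism of **Meal** as an initial state together with a step function, and runs the _Trap_ machine of Figure 1 on a stream of inputs. The one transition the figure's caption leaves unspecified (an unaware piece meeting a set trap) is guessed to leave the trap set.

```julia
# A morphism in Meal from A to B: an initial state s0 drawn from a state set S
# (implicit in the type parameter) and a step function phi : A × S -> S × B.
struct Mealy{S,A,B}
    s0::S
    phi::Function   # (a::A, s::S) -> (new_state::S, b::B)
end

# Run a machine on a list of inputs, returning the outputs and the final state.
function run_mealy(m::Mealy{S,A,B}, inputs::Vector{A}) where {S,A,B}
    s, outs = m.s0, B[]
    for a in inputs
        s, b = m.phi(a, s)
        push!(outs, b)
    end
    return outs, s
end

# The Trap machine of Figure 1: states :T (trap set) and :N (no trap),
# inputs :U (unaware) and :A (alert), outputs :U, :A, :C (captured).
function trap_step(a, s)
    if s == :T
        # capture the unaware (post-capture state is unspecified in the caption;
        # we keep the trap set); an alert piece escapes and unsets the trap
        return a == :U ? (:T, :C) : (:N, :A)
    else
        # with no trap, everything exits unaware; a :U input re-sets the trap
        return a == :U ? (:T, :U) : (:N, :U)
    end
end

trap = Mealy{Symbol,Symbol,Symbol}(:T, trap_step)
run_mealy(trap, [:A, :U, :U])   # ([:A, :U, :C], :T)
```

Running the machine threads the state through the input list, which is exactly the sense in which a morphism of **Meal** is "a function \(A\to B\) that is updated every time it receives an input."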
We claim **Meal** is a symmetric monoidal category, with monoidal unit \(\varnothing\) and the object \(A\otimes B\) given by disjoint union of sets. For tensor and composition of morphisms, we define1 Footnote 1: We elide the symmetry \(B\times S\cong S\times B\) in our notation \(\phi\times\psi\). \[(S,s_{0},\phi)\otimes(T,t_{0},\psi)\coloneqq(S\times T,(s_{0},t_{0}),\phi\times\psi)\] \[(S,s_{0},\phi)\mathbin{;}(T,t_{0},\psi)\coloneqq(S\times T,(s_{0},t_{0}),(\phi\times T)\mathbin{;}(S\times\psi))\] Because **Meal** is a symmetric monoidal category, there is a ready-made graphical language for visualizing serial and parallel compositions of its morphisms. The icons of this language are wires, each labeled with a set, boxes with input and output ports, also each labeled with a set, and braidings; these are described in detail in [21, §§3.1, 3.5]. In brief, data flows from left to right on wires, which are objects. The vertical dimension represents the tensoring \(\otimes\) of objects and morphisms. We visualize an element of \(\operatorname{Hom}_{\mathbf{Meal}}(A,B)\) as a box with \(|A|\) input ports and \(|B|\) output ports. A diagrammatic example of a nontrivial composition of morphisms is provided in Figure 1e. We are ultimately working towards a traced monoidal category with coproducts; however, coproducts do not generally exist in \(\mathbf{Meal}\). The natural choice for a copairing of morphisms \((S,s_{0},\phi):\operatorname{Hom}_{\mathbf{Meal}}(A,Z)\) and \((S^{\prime},s^{\prime}_{0},\psi):\operatorname{Hom}_{\mathbf{Meal}}(B,Z)\) is to place the boxes in parallel and merge their outputs, i.e. \(\big((S,s_{0},\phi)\otimes(S^{\prime},s^{\prime}_{0},\psi)\big)\mathbin{;}\nabla_{Z}\), where \(\nabla_{Z}\colon Z+Z\to Z\) is the codiagonal. This would have the correct _behavior_, in a sense that will be made precise in the following section. However, the required coproduct equalities (e.g. \(\iota_{1}\mathbin{;}(\phi,\psi)=\phi\)) do not hold in general because the state spaces are not equal, i.e. \(S\times S^{\prime}\not\cong S\) in general. ### Quotienting by behavioral equivalence We now talk about behaviors as proper objects of study, rather than merely induced by Mealy machines. Although the set of behaviors of a Mealy machine with input \(A\) and output \(B\) can be succinctly characterized as the underlying set of the final coalgebra on \(X\mapsto B^{A}X^{A}\), a set we will eventually denote by \(\varsigma([Ay,By])(1)\), we will more naturally be able to generalize our results by viewing behavior through the lens of polynomial functors; indeed, this is where the above notation comes from: \([Ay,By]\) is an internal hom in \(\mathbf{Poly}\) and \(\varsigma(-)\) is the cofree comonoid construction on \(\mathbf{Poly}\). Polynomial functors are formally sums of representable functors \(\mathbf{Set}\to\mathbf{Set}\), i.e. functors of the form \(y^{A}:=\operatorname{Hom}_{\mathbf{Set}}(A,-):\mathbf{Set}\to\mathbf{Set}\), sending any set \(X\) to \(X^{A}\). The sum is indexed by a set \(I\) such that the polynomial can be denoted \(p:=\sum_{i:I}y^{A_{i}}\). Note that \(I\) is canonically isomorphic to \(p(1)\). The elements of \(p(1)\) are called positions, while for each position \(i:p(1)\) there is a set of directions \(A_{i}\). We denote \(A_{i}\) by \(p[i]\), so Figure 1: Suppose we play a game like chess where certain pieces may be in various conditions: \(\mathbf{Unaware}\), \(\mathbf{Alert}\), and \(\mathbf{Captured}\). \(\mathbf{a}\). A morphism \(\mathit{Trap}:\operatorname{Hom}_{\mathbf{Meal}}(\{U,A\},\{U,A,C\})\) represents a system where one's opponent may have set a trap to capture a piece: this is a dynamical system that accepts a piece (Unaware or Alert) and outputs an Unaware, Alert, or Captured piece. \(\mathit{Trap}\) has two possible states: either there is a trap (\(T\)) or no trap (\(N\)) set, with the system starting in state \(T\). \(\mathbf{b}\). \(\mathit{Trap}\) requires a function \(S\times\{U,A\}\to S\times\{U,A,C\}\), which is visualized by showing the dynamics in both possible states. In state \(T\), unaware pieces fall for the trap and are captured, whereas alert pieces stay alert (and the state changes to \(N\)). In state \(N\), all inputs exit as unaware, though a \(U\) input triggers the state to change to \(T\). \(\mathbf{c}\). A family of morphisms in \(\mathbf{Meal}\) of the form \(\operatorname{Hom}_{\mathbf{Meal}}(X+X,X)\), for any set \(X\). We will visually depict this as a dot with multiple incoming wires. \(\mathbf{d}\). A family of morphisms in \(\mathbf{Meal}\) of the form \(\operatorname{Hom}_{\mathbf{Meal}}(\varnothing,X)\), for any set \(X\). We will visually depict this as a dot with one outgoing wire. \(\mathbf{e}\). A composite dynamical system \(\mathit{Trap}_{2}\) depicting pieces passing through two \(\mathit{Trap}\) systems: Alert pieces exiting the first subsystem skip the second one. This diagram formally depicts the sequential composition of five morphisms, which are distinguished by the dotted vertical lines: \(\mathit{Trap}\mathbin{;}(U+\Box_{A}+A+\nabla_{C})\mathbin{;}(\mathit{Trap}+A+C)\mathbin{;}(U+A+\sigma_{C,A}+C)\mathbin{;}(U+\nabla_{A}+\nabla_{C})\). The state space of the composite system is \(\{T,N\}\times\{T,N\}\), with initial state \((T,T)\). 
We denote \(A_{i}\) by \(p[i]\), so Figure 1: Suppose we play a game like chess where certain pieces may be in various conditions: \(\mathbf{Unaware}\), \(\mathbf{Alert}\), and \(\mathbf{Captured}\). \(\mathbf{a}\). A morphism \(\mathit{Trap}:\operatorname{Hom}_{\mathbf{Real}}(\{U,A\},\{U,A,C\})\) represents a system where one’s opponent may have set a trap to capture a piece: this is a dynamical system that accepts a piece (Unaware or Alert) and outputs an Unaware, Alert, or Captured piece. \(\mathit{Trap}\) has two possible states: either there is a trap (\(T\)) or no trap(\(N\)) set, with the system starting in state \(T\). \(\mathbf{b}\). \(\mathit{Trap}\) requires a function \(S\times\{U,A\}\to S\times\{U,A,C\}\), which is visualized by showing the dynamics in both possible states. In state \(T\), unaware pieces fall for the trap and are captured, whereas alert pieces stay alert (and the state changes to \(N\)). In state \(N\), all inputs exit as unaware, though a \(U\) input triggers the state to change to \(T\). \(\mathbf{c}\). A family of morphisms in \(\mathbf{Meal}\) of the form \(\operatorname{Hom}_{\mathbf{Meal}}(X+X,X)\), for any set \(X\). We will visually depict this as a dot with multiple incoming wires. \(\mathbf{d}\). A family of morphisms in \(\mathbf{Meal}\) of the form \(\operatorname{Hom}_{\mathbf{Meal}}(\varnothing,X)\), for any set \(X\). We will visually depict this as a dot with one outgoing wire. \(\mathbf{e}\). A composite dynamical system \(\mathit{Trap}_{2}\) depicting pieces passing through two \(\mathit{Trap}\) systems: Alert pieces exiting the first subsystem skip the second one. This diagram formally depicts the sequential composition of five morphisms, which are distinguished by the dotted vertical lines: \(\mathit{Trap}\mathbin{\hat{\ast}}(U+\Box_{A}+A+\nabla_{C})\mathbin{\hat{\ast} }(\mathit{Trap}+A+C)\mathbin{\hat{\ast}}(U+A+\sigma_{C,A}+C)\mathbin{\hat{\ast} }(U+V_{A}+\nabla_{C})\). The state space of the composite system is \(\{T,N\}\times\{T,N\}\), with initial state \((T,T)\). we may write \(p\cong\sum_{i:p(1)}y^{p[i]}\). Polynomial functors can be thought of as system interfaces, and a few ways to represent them are shown below. Polynomial comonads can be identified with categories [1] and cofunctors, a category we denote \(\mathbf{Cat}^{\sharp}\). The cofree comonad functor \(\epsilon\langle-\rangle\colon\mathbf{Poly}\to\mathbf{Cat}^{\sharp}\) can be thought of as sending a polynomial \(p\) to a category whose objects are possible behavior trees of a system with interface \(p\). These are potentially infinite trees and are obtained by starting with a root and stacking one-step behaviors, as seen in the corolla forest representation of the polynomial. To provide intuition for what these behavior trees look like, Figure 3 shows a tree corresponding to the Mealy machine of Figure 1a-b as well as a tree for the above example interface. This allows us to define the category, \(\mathbf{Meal}/\sim\): \[\begin{split}\operatorname{Ob}(\mathbf{Meal}/\sim)& :=\operatorname{Ob}(\mathbf{Set})\\ \operatorname{Hom}_{\mathbf{Meal}/\sim}(A,B)&:= \operatorname{Ob}\epsilon\langle[Ay,By]\rangle\end{split} \tag{1}\] An important feature is not explicitly modeled. We have \(\operatorname{Hom}(A,B)\) being a mere _set_ of behaviors, but we need to make use of a rich structure that this set of behaviors has when evolving the system over time: the way the system changes as new inputs are received. Enriching our category, i.e. 
replacing each set of morphisms by a category, will allow us to characterize how the morphisms change as inputs are received. ### Modeling the system evolution in time Recall that \(\epsilon\langle p\rangle\) sends a polynomial \(p\) to a category. Until now we have only considered the objects of that category. The morphism \(\mathit{tree}_{1}\to\mathit{tree}_{2}\) is given by a path in \(\mathit{tree}_{1}\), starting from the root and ending at a copy of \(\mathit{tree}_{2}\) (see Figure 4). This is crucial for explicitly modeling how a sequence of inputs leads to a new dynamical system. We define \(\mathbf{DynMeal}\) to have the same objects as \(\mathbf{Meal}\) but to be enriched in \(\mathbf{Cat}^{\sharp}\) rather than \(\mathbf{Set}\), such that the hom-object \(\operatorname{Hom}_{\mathbf{DynMeal}}(A,B):=\epsilon\langle[Ay,By]\rangle\). The category \(\mathbf{Cat}^{\sharp}\) is that of \(\mathbin{\mathbin{\mathbin{\mathbin{\mathbin{\mathbin{\mathbin{\mathbin{ \mathbin{\mathbin{\mathbin{\mathbin{\mathbin{\mathbin{\mathbin{\mathbin{\mathbin \cdot}}}}}}}}}}}}}}\)-comonoids in \(\mathbf{Poly}\), which is equivalent to the category of categories and cofunctors as proven in [1, 14]. A cofunctor has a map in the forward direction on objects but a map on morphisms in the reverse direction. This means, for a composite system \(\phi\cdot\psi\), we can construct a composite Figure 3: **Left. An object of \(\epsilon\langle[\{U,A\}y,\{U,A,C\}y]\rangle\) which is the behavior of \(\mathit{Trap}\) from Figure 1b. Note that \([2y,3y]\cong 3^{2}y^{2}\), i.e. it is an interface that receives one of three possible inputs and can be configured in any possible function \(2\to 3\). We label two of these eight possible functions as \(T\) and \(N\). Right. One possible behavior for the interface of Figure 2, i.e. an object of the category \(\epsilon\langle y^{2}+2y+1\rangle\). The machine starts in listening state, and upon receiving the input \(L\) (respectively \(R\)) it moves left (resp. right). The machine repeats this once more and then stops.** Figure 2: Various representations of a polynomial functor that characterizes an interface which can do three things: it can listen (and receive a boolean value), it can move (left or right), or it can stop. behavior tree given behavior trees for \(\phi\) and \(\psi\), but the way \(\phi\) and \(\psi\) evolve over time is dictated by how \(\phi\cdot\psi\) evolves over time, for example, see Figure 5. Every comonoid in **Poly** has an underlying polynomial, and the associated functor \(U\) has a right adjoint \(\varsigma(-)\): There is a strong connection between this story and that of coalgebras. For any polynomial \(p\), the set of \(\varsigma(p)\) objects is isomorphic to the underlying set of the final coalgebra on \(X\mapsto B^{A}X^{A}\). The category of functors \(\varsigma(p)\rightarrow\textbf{Set}\) is equivalent to the category of \(p\)-coalgebras. Each behavior tree in \(\varsigma(p)\) is sent to the set of states with that behavior. The enriched structure of **DynMeal** now makes explicit how our behavior trees change in response to inputs. The last improvement to be made is one of expressivity, e.g. adding the possibility of entering an environment and failing to ever exit it, considering lists of possible outcomes, and considering probability distributions of possible outcomes. ### Adding monadic effects The expressivity captured so far can be vastly generalized by incorporating a polynomial monad \((t,\eta,\mu)\) on **Set** into our morphisms. 
Particular monads of interest are \[\text{Maybe}=y+1,\qquad\text{Writer}_{M}=My,\qquad\text{List}=\sum_{N:N}y^{N}, \qquad\text{and Dist}=\sum_{N:N}\Delta_{N}y^{N} \tag{3}\] where \(M\) is a monoid and \(\Delta_{N}=\{P:N\rightarrow[0,1]\,|\,1=\sum P(i)\}\). Our work so far has been general to polynomials, not merely interfaces of the form \(Ay\to By\), so monadic effects can be incorporated into our category by considering the Hom object of an interface \(A\to B\) to be \(\varsigma([Ay,t\ast By])\). Figure 4: A morphism between two objects in \(\varsigma(y^{2}+2y+1)\), with the data of the morphism represented by thick arrows, i.e. _Move_, then \(L\). Figure 5: Example of the change in behavior of the composite system \(\textit{Tmp}_{2}\) (Figure 1e) in response to an input \(A\). Just like in Figure 3, where \(T\) and \(N\) represented elements of \(3^{2}\), the labels \(TT\) and \(NT\) refer to elements of \(3^{3}\), induced by where \(\textit{Tmp}_{2}\) sends its inputs data of the wiring diagram of \(\textit{Tmp}_{2}\) instructs how to convert this \(A\)-interaction into interactions for its _Trap_ subcomponents. Note that the second _Trap_ subcomponent is not updated at all for an \(A\) input, i.e. its update is an identity morphism in \(\varsigma([\{U,A\}y,\{U,A,C\}y])\). **Definition 1** (Dynamic Kleisli category).: Given a polynomial monad \(t\), we define a category enriched in \(\mathbf{Cat}^{\sharp}\), denoted \(\mathbf{DK}_{t}\), as follows: \[\operatorname{Ob}(\mathbf{DK}_{t}) :=\operatorname{Ob}(\mathbf{Set})\] \[\operatorname{Hom}_{\mathbf{DK}_{t}}(A,B) :=\langle\{[Ay,t\mathbin{\preccurlyeq}By]\}\] ## 3 \(\mathbf{DK}_{t}\) as a traced monoidal category Throughout this section we assume that \((t,\eta,\mu)\) is a Cartesian polynomial monad. This is sufficient to show that \(\mathbf{DK}_{t}\) is a cocartesian monoidal \(\mathbf{Cat}^{\sharp}\)-enriched category. In Section 3.1 we will show it is a \(\mathbf{Cat}^{\sharp}\)-enriched category, and in Section 3.2 we will show it has coproducts. In fact, we would like \(\mathbf{DK}_{t}\) to be traced monoidal, meaning that there are morphisms \[\operatorname{Tr}_{A,B}^{U}\colon\operatorname{Hom}_{\mathbf{DK}_{t}}(A \otimes U,B\otimes U)\to\operatorname{Hom}_{\mathbf{DK}_{t}}(A,B) \tag{4}\] satisfying various compatibility conditions. Traced monoidal categories have a graphical syntax that includes loops (see also [21]). This notion was defined for \(\mathbf{Set}\)-enriched categories in [11], and the definition can be extended to the \(V\)-enriched setting by asking that the trace map (4) be a map in \(V\). In Section 3.3, we will propose a trace map for \(\mathbf{DK}_{t}\), whenever \(t\) is exceptional in the sense of Definition 2. ### \(\mathbf{Cat}^{\sharp}\)-enriched category structure on \(\mathbf{DK}_{t}\) In this section we suppose a good deal more familiarity with \(\mathbf{Poly}\). The proposed \(\mathbf{Cat}^{\sharp}\)-enriched category structure on \(\mathbf{DK}_{t}\) was given in Definition 1; our first goal in this section is to prove that it satisfies the correct properties. In Section 3.2 we will show that it is monoidal. For any lax monoidal functor \(\mathcal{V}\to\mathcal{W}\), there is an induced functor \(\mathcal{V}\)-\(\mathbf{Cat}\to\mathcal{W}\)-\(\mathbf{Cat}\). 
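To illustrate the simplest of these effects operationally, the following rough sketch (plain Julia again, building on the Mealy sketch above; all names are ours) shows a machine whose one-step behavior is Kleisli in the Maybe monad \(t=y+1\): a step may simply fail, as for the partial functions in the abstract.

```julia
# A morphism A -> B in the dynamic Kleisli setting for t = y + 1 (Maybe):
# the step returns either (new_state, output) or nothing, i.e. it may fail.
struct EffMealy{S,A,B}
    s0::S
    phi::Function   # (a::A, s::S) -> Union{Nothing, Tuple{S,B}}
end

# Running such a machine: the first failing step aborts the whole run,
# the behavior an exceptional monad (Definition 2 below) captures.
function run_eff(m::EffMealy{S,A,B}, inputs::Vector{A}) where {S,A,B}
    s, outs = m.s0, B[]
    for a in inputs
        step = m.phi(a, s)
        step === nothing && return nothing
        s, b = step
        push!(outs, b)
    end
    return (outs, s)
end
```

For \(t=\text{List}\) or \(t=\text{Dist}\) one would instead return a list, or a distribution, of (state, output) pairs.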
For all monoidal categories \((\mathcal{V},I,\otimes)\), the functor \(\mathcal{V}(I,-)\colon\mathcal{V}\to\mathbf{Set}\) is lax monoidal, and we call the induced functor \(\mathcal{V}\)-\(\mathbf{Cat}\to\mathbf{Cat}\) as the underlying category functor. We say that a category \(\mathcal{C}\) is enriched in \(\mathcal{V}\) when there is a \(\mathcal{V}\)-category for which \(\mathcal{C}\) is the underlying category. Our second goal in this section is to show that the usual Kleisli category \(\mathbf{Set}_{t}\) is enriched in \((\mathbf{Cat}^{\sharp},y,\otimes)\). Our strategy for the first goal is to show that \(\mathbf{Set}_{t}\) is enriched in \((\mathbf{Poly},y,\otimes)\). **Theorem 1**.: _Let \((t,\eta,\mu)\) be a polynomial monad. The Kleisli category \(\mathbf{Set}_{t}\) is enriched in \((\mathbf{Poly},y,\otimes)\)._ Proof.: For sets \(A,B:\mathbf{Set}\), define the polynomial \[h^{t}_{A,B}\coloneqq[Ay,t\mathbin{\preccurlyeq}By]. \tag{5}\] Maps of the form \(y\to h^{t}_{A,B}\) are in bijection with polymomial maps \(Ay\to t\mathbin{\preccurlyeq}By\). Note that these are in bijection with functions \(A\to t\mathbin{\preccurlyeq}B\), which are exactly the Kleisli morphisms, i.e. we have \[\mathbf{Poly}(y,h^{t}A,B)\cong\mathbf{Set}_{t}(A,B). \tag{6}\] So it suffices to define an identity and a unital and associative composition law for the hom-objects \(h^{t}_{A,B}:\mathbf{Poly}\). For the identity on \(A\) we use \[Ay\cong y\mathbin{\preccurlyeq}Ay\xrightarrow{\eta\mathbin{\preccurlyeq}dy}t \mathbin{\preccurlyeq}Ay.\] Maps of the form \(h^{t}_{A,B}\otimes h^{t}_{B,C}\to h^{t}_{A,C}\) are in bijection with maps \(Ay\otimes[Ay,t\triangleleft By]\otimes[By,t\triangleleft Cy]\to Cy\), so beginning with the evaluation map \(Ay\triangleleft[Ay,t\triangleleft By]\to t\triangleleft By\), it suffices to find a map \((t\triangleleft By)\otimes[By,t\triangleleft Cy]\to Cy\). Since **Poly** is duoidal, we have the desired map: \[(t\triangleleft By)\otimes(y\triangleleft[By,t\triangleleft Cy])\xrightarrow{ \text{duoid}}(t\otimes y)\triangleleft(By\otimes[By,t\triangleleft Cy]) \xrightarrow{\text{eval}}t\triangleleft t+\triangleleft Cy\xrightarrow{ \mu}t\triangleleft C.\] It is easy to check that these definitions are associative and unital. **Corollary 1**.: _Let \((t,\eta,\mu)\) be a polynomial monad. The Kleisli category \(\mathbf{Set}_{t}\) is enriched in \((\mathbf{Cat}^{\sharp},y,\otimes)\)._ Proof.: For sets \(A,B\), let \(h^{t}_{A,B}:\textbf{Poly}\) be as in (5). Since the functor \(\varsigma(-)\colon\textbf{Poly}\to\mathbf{Cat}^{\sharp}\) is lax monoidal, we can define hom-objects \[\mathbf{DK}_{t}(A,B)\coloneqq\varsigma(h^{t}_{A,B})=\varsigma([Ay,t \triangleleft By])\] giving rise to a \(\mathbf{Cat}^{\sharp}\)-category. To see that its underlying category is \(\mathbf{Set}_{t}\), we use Eqs. (2) and (6) \[\mathbf{Cat}^{\sharp}(y,\varsigma(h^{t}_{A,B}))\cong\textbf{Poly}(y,h^{t}_{A,B })\cong\textbf{Set}_{t}(A,B),\] completing the proof. As a \(\mathbf{Cat}^{\sharp}\)-category, \(\mathbf{DK}_{t}\) not only has an underlying ordinary category, induced by the lax monoidal functor \(\mathbf{Cat}^{\sharp}(y,-)\colon\mathbf{Cat}^{\sharp}\to\mathbf{Set}\), but another corresponding ordinary category as well, induced by the lax monoidal functor \(\text{Ob}\colon\mathbf{Cat}^{\sharp}\to\mathbf{Set}\). 
There is a natural transformation between these two: \[(\mathbf{Cat}^{\sharp},y,\otimes)\xrightarrow{\text{Cat}^{\sharp}(y,-)} \xrightarrow{\text{Cat}^ ### Proposed traced structure on \(\mathbf{DK}_{t}\) In this section we propose a trace structure on \(\mathbf{DK}_{t}\) for certain polynomial monads \(t\), which we call _exceptional_ because they are equipped with an element \(\xi\colon 1\to t\) that acts like an exception: if any branches of a syntax tree throw an exception then so does the whole tree. Defining the trace map requires even more background on **Poly** than the previous sections. Whereas the \(\mathbf{Cat}^{\sharp}\)-category structure on \(\mathbf{DK}_{t}\), as defined in Sections 3.1 and 3.2 was induced by a simpler, **Poly**-category structure on \(\mathbf{DK}_{t}\), the application to rewriting and the trace structure both make use of the full \(\mathbf{Cat}^{\sharp}\)-enrichment. Our first goal is to define exceptional monads. **Lemma 1**.: For any cartesian polynomial monad \((t,\eta,\mu)\), the polynomial \(t+1\) also carries the structure of a polynomial monad, and the coproduct inclusion \(t\to t+1\) is a morphism of monads. Proof.: It suffices to show that for any \(t\) there is a distributive law \(t\triangleleft(y+1)\rightarrow(y+1)\triangleleft t\) that commutes with the inclusion \(y\to y+1\) on both sides. In any category with a terminal object, a coproduct inclusion \(A\to B\), i.e. an isomorphism \(A+A^{\prime}\cong B\) for some \(A^{\prime}\), induces a map \[B\xrightarrow{\cong}A+A^{\prime}\xrightarrow{A+1}A+1\] such that \(A\to B\to A+1\) is the coproduct inclusion. It is easy to show that every cartesian monomorphism in **Poly** is a coproduct inclusion. Moreover, for any polynomial \(p\), the map \(p\cong p\triangleleft y\to p\triangleleft(y+1)\) is a cartesian monomorphism because \(y\to y+1\) is, and \(\triangleleft\) preserves monomorphisms and cartesian maps in both variables. Thus we have a map \(p\triangleleft(y+1)\to p+1\cong(y+1)\triangleleft p\), natural with respect to cartesian maps \(p\to p^{\prime}\). One can check that when \(t\) is a cartesian monad, this map is always a distributive law, completing the proof. **Definition 2** (Exceptional monad).: Let \((t,\eta,\mu)\) be a cartesian polynomial monad. An _exception_ structure on \(t\) is a retraction \(\xi\colon t+1\to t\) of the monad inclusion \(t\to t+1\) from Lemma 1. By Lemma 1, \(t+1\) is an exceptional monad for any cartesian polynomial monad \(t\). _Remark 1_.: In the multiplication \(\mu\colon t\triangleleft t\to t\) of an exceptional monad, a position of the left-hand side consists of a position \(I:t(1)\) and, for every direction \(i:t[I]\), a position \(J_{i}:t(1)\). If either \(I\) or any of the the \(J_{i}\) is the exceptional element \(\xi\), then its image under \(\mu\) must be the exceptional element: an exception anywhere causes an exception in the whole computation. The notion of exceptional monad differs from that of _monad with zero_[28] even though in each case the monad \(t\) is equipped with a constant \(1\to t\). For example, the List monad has a zero, namely the empty list, but it is not exceptional because a list of lists can contain the empty list without its concatenation being empty. We next propose our trace map \(\operatorname{Tr}_{A,B}^{IJ}\colon\langle\{[A+U)y,t\triangleleft(B+U)y\} \rangle\rightarrow\epsilon\langle[Ay,t\triangleleft By]\rangle\rangle\). 
In fact, it is more straightforward to define an _iteration_ map as in [21] of the form \[iter_{A}^{A}\colon\langle\{[Ay,t\triangleleft(A+B)y]\}\rangle\rightarrow \epsilon\langle[Ay,t\triangleleft By]\rangle\rangle \tag{8}\] at which point we can define \(\operatorname{Tr}_{A,B}^{IJ}\) to be given by the composite \[\begin{split}\epsilon\langle\{[(A+U)y,t\triangleleft(B+U)y]\} \rangle&\rightarrow\epsilon\langle\{[(A+U)y,t\triangleleft(A+U+B)y ]\}\rangle\\ &\xrightarrow{iter_{B}^{A}}\epsilon\langle\{[A+U)y,t\triangleleft By ]\}\rangle\\ &\rightarrow\epsilon\langle\{[Ay,t\triangleleft By]\}\rangle. \end{split}\] To get there, we need to be more explicit about the cofree comonad construction and the free monad construction on a polynomial \(p\). The free monad \(\mathfrak{m}\langle p\rangle\) on a pointed polynomial \(y\to p\) is constructed in two steps. For any finitary pointed polynomial \(q\)--one for which each \(q[J]\) is a finite set--the free monad on \(q\) can be constructed in one step, namely as the colimit: \[\mathfrak{m}\langle q\rangle\coloneqq\operatorname{colim}(\cdots\gets q ^{\mathfrak{m}+1}\stackrel{{ g_{\mathfrak{m}}}}{{\longleftarrow}}q^{ \mathfrak{m}}\leftarrow\cdots\gets q)\] where the maps \(g_{n}\colon g^{\mathfrak{m}}\to q^{\mathfrak{m}+1}\) are defined inductively by \(p^{\mathfrak{m}}\cong y\ast p^{\mathfrak{m}}\to p\ast p^{\mathfrak{m}}=p^{ \mathfrak{m}+1}\). An arbitrary polynomial \(p\) can be written as the filtered limit of its vertical projections \(p\to p^{j}\) onto finitary polynomials: that is, for each sum-component \(I:p(1)\), just replace \(p[I]\) by an arbitrary finite subset of it, and take the limit of all such things under component-wise projection. That limit is isomorphic to \(p\), and we write \(p\cong\lim_{j:j_{p}}p^{(j)}\). By construction, each of these \(p^{(j)}\) is finitary, so let \(\mathfrak{m}(p^{(j)})\coloneqq\mathfrak{m}(p^{(j)})\) denote the free monad on it, constructed as above. Then finally we construct the free monad \(\mathfrak{m}(p)\) on \(p\) as their filtered limit: \[\mathfrak{m}(p)\coloneqq\lim_{j:j_{p}}\mathfrak{m}(p^{(j)}).\] The cofree comonoid \(\mathfrak{\epsilon}(p)\) on \(p:\mathbf{Poly}\) is constructed in just one step. It is carried by the limit \[\mathfrak{\epsilon}(p)\coloneqq\lim(\cdots\to p^{(n+1)}\xrightarrow{f^{(n)}}p ^{(n)}\to\cdots\to p^{(1)}\xrightarrow{f^{(n)}}p^{(0)})\] where the \(p^{(k)}\) are defined inductively as follows: \[p^{(0)}\coloneqq y\hskip 113.811024ptp^{(k+1)}\coloneqq(p\ast p^{(k)})\times y\] and the maps \(f^{(k)}\colon p^{(k+1)}\to p^{(k)}\) are defined inductively as follows: \[p^{(0)}=p\times y\xrightarrow{f^{(0)}\coloneqq\mathsf{proj}}y=p^{(0)}\hskip 28.452756ptp ^{(k+1)}=(p\ast p^{(k+1)})\times y\xrightarrow{f^{(k+1)}\coloneqq(p\ast p^{(k )})\times y}(p\ast p^{(k)})\times y=p^{(k+1)}\] **Proposition 2**.: Let \(y\to p\) be a cartesian map of polynomials, and let \(\mathfrak{m}(p)\) be the free monad on it; let \(\mathfrak{\epsilon}(p)\) be the cofree comonad on \(p\). There is a natural cartesian monomorphism \[\mathfrak{m}(p)\to\mathfrak{\epsilon}(p)\ast(y+1).\] Proof.: For any \(q:\mathbf{Poly}\), the functor \(-\ast q\) commutes with limits. Since \(\mathfrak{\epsilon}(-)\) is a right adjoint, it also commutes with limits. Since the limit of cartesian monomorphisms is a cartesian monomorphism, we may assume \(p\) is finitary and it suffices to produce compatible cartesian monomorphism \(\varphi_{l}\colon p^{\ast i}\to p^{(i)}\ast(y+1)\) for each \(i:\mathbb{N}\). 
We take \(\varphi_{0}\) to be the cartesian monomorphism \(y\to y+1\). Suppose given \(\varphi_{i}\). To define \(p^{\ast i+1}\to(p\ast p^{(i)})y\ast(y+1)\), it suffices to give two maps, \(p^{\ast i+1}\to p\ast p^{(i)}\ast(y+1)\) and \(p^{\ast i+1}\to y+1\). For the latter, use \(p_{(i+1)}\to 1\to y+1\). It remains to give a cartesian monomorphism \(p^{\ast i+1}\to p\ast p^{(i)}\ast(y+1)\), which we obtain by induction \(p\ast p^{\ast i}\xrightarrow{\varphi_{i}}p\ast p^{(i)}\to p\ast p^{(i)}\ast(y+1)\). **Lemma 2**.: In \(\mathbf{Poly}\), any cartesian monomorphism \(p\to q\) is a coproduct inclusion. Proof.: Suppose \(\varphi\colon p\to q\) is a cartesian monomorphism. Since \(p\mapsto p(1)\) is a right adjoint, \(\varphi(1)\colon p(1)\to q(1)\) is an injection; let \(J^{\prime}\coloneqq q(1)-p(1)\) be its compliment. Then we have the desired isomorphism \[q\cong p+\sum_{j:J^{\prime}}y^{\sigma[j]}.\] **Proposition 3**.: For any pointed polynomial \(y\to p\) there is a map \(\mathfrak{\epsilon}(p)\ast(y+1)\to\mathfrak{m}(p)+1\). Proof.: By Proposition 2 we have a cartesian monomorphism \(\mathfrak{m}(p)\to\mathfrak{\epsilon}(p)\ast y+1\), which is a coproduct inclusion by Lemma 2. For any coproduct \(p\to q\) inclusion in a category with terminal object, we have a map \(q\cong p+p^{\prime}\to p+1\), completing the proof. Let's return to our goal of producing an _iter_ map as in (8). By (2), it suffices to define a polynomial map of the form \(\mathfrak{\epsilon}([Ay,t\ast(A+B)y])\to[Ay,t\ast By]\), or equivalently one of the form \(Ay\otimes\mathfrak{\epsilon}([Ay,t\ast(A+B)y])\to t\ast By\). Given the exceptional structure \(\xi\colon t+1\to t\) on \(t\) and the fact that any monad \(t\) carries an algebra structure \(\mathfrak{m}(t)\to t\), it suffices by Proposition 3 to find a map \(Ay\otimes\mathfrak{\epsilon}([Ay,t\ast(A+B)y])\to\mathfrak{\epsilon}(t)\ast(y+ 1)\ast By\). We produce the desired map again by induction. The right-hand side is the limit \[\mathfrak{\epsilon}(t)\ast(y+1)\ast By\cong\lim(t^{(i)}\ast(y+1)\ast By).\] When \(i=0\), we use \(Ay\otimes\zeta([Ay,t\ast(A+B)y])\to 1\to t^{(0)}\triangleleft(y+1)\triangleleft By\). Suppose given a map \(\varphi_{i}\colon Ay\otimes\epsilon([Ay,t\ast(A+B)y])\to t^{(i)}\ast(y+1) \triangleleft By\). Define \[Ay\otimes\epsilon([Ay,t\ast(A+B)y]) \to y\times(Ay\otimes([Ay,t\ast(A+B)y]\ast\epsilon([Ay,t\ast(A+B)y]))\] \[\to y\times(t\ast(A+B)y\ast\epsilon([Ay,t\ast(A+B)y]))\] \[\to y\times(t\ast((Ay\triangleleft\{Ay,t\ast(A+B)y])\})+By))\] \[\stackrel{{\cong}}{{\rightarrow}}y\times(t\ast((Ay \otimes\{([Ay,t\ast(A+B)y]))+By))\] \[\to y\times(t\ast((t^{(i)}\ast(y+1)\triangleleft By)+By))\] \[\to y\times(t\ast t^{(i)}\ast(y+1)\triangleleft By)\] \[\to(y\times(t\ast t^{(i)})\ast(y+1)\triangleleft By)\] \[\stackrel{{\cong}}{{\rightarrow}}t^{(i+1)}\ast(y+1) \triangleleft By\] We have now defined the purported _iter_ map and hence trace map, as explained above. **Conjecture 1**.: The purported trace on \(\mathbf{D}\mathbf{K}_{t}\) defined above satisfies the axioms of a traced monoidal category. While this conjecture has not been proven, it has influenced the development of a working implementation in the open source AlgebraicRewriting.jl. This is the subject of Section 4. ## 4 Application to rewriting and agent-based modeling ### Attributed C-Sets Our case study uses the AlgebraicJulia ecosystem [16] due to its support for wiring diagram manipulation (Catlab.jl) as well as its graph rewriting library, AlgebraicRewriting.jl [4]. 
The core data structure of AlgebraicJulia is the ACSet, i.e. attributed \(C\)-Set for some finitely-presented category \(C\). ACSets offer a category-theoretic model of databases which extends \(C\)-Sets (i.e. copresheaves) to include noncombinatorial data [17]. The database schema is given by a profunctor, i.e. a functor \(S:|S|\to 2\), which distinguishes objects in \(|S|\) as representing either tables or attribute types. Given an assignment \(K:S_{1}\rightarrow\mathbf{Set}\) which provides concrete Julia types for attributes, the category \(\mathrm{ACSet}^{S}_{K}\) is bicomplete and a topos, and thus is an appropriate setting for applying graph rewriting rules. Consider an ACSet \(X\), equipped with a distinguished 'focus', i.e. morphism \(A\to X\). We will soon see applications where it makes sense to think of \(A\) as the _shape_ of a particular agent in the state of the world \(X\), where the agent is picked out by the chosen morphism. Note that considering ACSets without any agent is tantamount to picking the agent shape to be an empty ACSet, \(0\), which is the initial object of \(\mathrm{ACSet}^{S}_{K}\). For example, consider the task of modeling wolves, sheep, and grass distributed on a directed graph, where the wolves and sheep are facing particular directions and have some integer number of energy units. Grass grows on vertices and has an integer number of days until it is grown. The schema in Figure 6 shows one way to model this. ### A DSL for graph rewriting programs We use the theory of \(\mathbf{D}\mathbf{K}_{t}\) to implement a graph rewriting programming language, with data manipulation specified by rewrite rules acting upon ACSets (possibly with agents). These programs can be assembled from a small number of primitive generators. In conjunction with Catlab's general infrastructure for manipulating wiring diagrams, these primitives function as a powerful domain-specific language for agent-based modeling and programming via graph rewriting. The domain and codomain of our morphisms of interest consist of sets of diagrams of the form in Figure 7, and coproducts thereof. Each such diagram, which we will call a _trajectory_, is a sequence of 'world states' \(X_{i}\) with distinguished focuses \(A_{i}\to X_{i}\). If the trajectory is represented by a variable t:Traj in pseudocode, then let last(t) return \(X_{n}\), length(t) return \(n\), and t[i] return \(A_{i}\to X_{i}\). Let postcompose(t:Traj,f:Hom(X,Xi),i::Int) compute the composite of f with the partial maps from \(i\) to \(n\). This returns either a total morphism or nothing. Let (t:Traj)+(b:Hom(A,Xn)) extend the trajectory with an identity partial map. Because there are now infinite sets as the domains and codomains of our morphisms, we adopt the following shorthand: when visualizing wiring diagrams, a wire labeled by an ACSet \(A_{n}\) represents the set of all trajectories whose current agent is \(A_{n}\). Stating a morphism is of type \(A+B\to C\) for ACSets \(A,B,C\) indicates the domain is the coproduct of the set of trajectories with current agent \(A\) and the trajectories with current agent \(B\) and that the codomain is the set of trajectories with current agent \(C\). Some generating morphisms for rewriting programs are shown in Figures 8 and 9. The semantics of the stateless generators is visually represented in Figure 8 and described here: Rewrite extends a trajectory with a partial map induced by applying the rewrite rule (DPO, SPO, SqPO, and PBPO+ [15] semantics supported). 
Rewrite rules must also have their pattern \(L\) and replacement \(R\) related to a specific input agent shape \(A\) and output agent shape \(B\), respectively. The input agent shape imposes a strong constraint on valid matches via a triangle which must commute. If, nevertheless, multiple matches are valid, an arbitrary one is selected. If it is successfully rewritten, the \(B\) output is exited, otherwise the \(A\) output is exited. Weaken extends a trajectory without changing the state of the world \(X_{n}\) by precomposing the agent morphism. Strengthen extends a trajectory via pushout, which simultaneously changes the agent shape and the state of the world. Init switches the trajectory to a particular world state and agent, with no relation to the previous world state. Fail can be given the semantics of raising an exception. Alternatively, if the context is the List \(+1\) monad, it could silently produce an empty list. The semantics of the ControlFlow box is to redirect an input to one of its outputs, possibly nondeterministically and possibly as a function of its trajectory data. These morphisms are of the form \(\operatorname{Hom}_{\mathbf{D}\mathbf{K}_{i}}(A,n\times A)\) for some set \(n\) and have a Mealy transition function which is _pure_, by which we mean it can be factored into a map \(\phi:S\times A\to S\times(n\times A)\) (such that \(\phi\upharpoonright\pi_{3}=\pi_{2}\)) followed by the monad unit \(\eta\). Figure 6: **Left** An ACSet schema with six objects and morphisms as well as two attribute types and six attributes (dotted edges). **Center** An example ACSet on this schema **Right** The example, informally visualized with energy as integers, sheep as blue boxes, and wolves as red boxes. If we wished to distinguish the world as the ‘agent’, we would use an ACSet morphism into this example from an ACSet with one wolf, zero edges, and one vertex, i.e. the _shape_ of a wolf. Figure 7: A trajectory in the space of ACSets. \(X_{1}\) and \(X_{n}\) respectively represent the initial (resp. current) state of the world during the simulation, and each successive world state is related to the previous via a partial map (indicated by a ticked arrow). Each world state \(X_{i}\) also has a distinguished focus \(A_{i}\to X_{i}\). The semantics of the query box is a Mealy machine with state space \(\text{List}(\text{Hom}_{\text{ACSet}})\times\mathbb{N}\). The first element is the list of queued 'agents' we have yet to process, and the second element keeps track of what time step (in the trajectory) the query box was originally entered. The dynamics are given by two functions, an update function \(S\times A+C\to S\) and a readout function \(S\times A+C\to A+B+0\). Entering through the \(A\) port at step \(n\) has the significance of starting the query process anew; the internal state is overwritten to store \(n\) and a list of morphisms \(B\to X_{n}\). Entering the \(C\) port communicates that we have finished one of the subagent's subroutines - the agent is removed from the box's state and all other agents are pushed forward to the current timestep. How the query box is exited is firstly determined by whether or not there are any remaining agents to process; if there are any, the \(B\) port is exited with a \(B\) agent. If there are none, we try to exit with original \(A\) agent. If this agent is total when brought to the current state, we exit through the \(A\) port, otherwise the \(0\) port. 
In pseudocode, these functions are characterized below:

```
# Entering through the A port: queue every B-shaped agent in the current world
# and record the time step at which the query began.
updateA(_, traj::Traj) = (homomorphisms(B, last(traj)), length(traj))

# Entering through the C port: drop the agent just processed and keep only the
# queued agents that remain total when pushed forward to the current time step.
updateC((bnext:bs, i), traj::Traj) =
    ([b for b in bs if !isnothing(postcompose(traj, b, i))], i)
updateC(([], i::Int), _) = error

readout((bnext:bs, _), traj::Traj) = return (B, traj + bnext)
readout(([], i::Int), traj::Traj)  = case postcompose(traj, traj[i], i) of
    nothing => return (0, traj + initial(last(traj)))
    new_a_x => return (A, traj + new_a_x)
```

Figure 8: Primitive generating morphisms for graph rewriting programs which have trivial dynamics, i.e. they are pure functions from input ports to output ports, with the exception of Fail, which can raise an exception.

Although the 'standard' means of using a query box is to connect a subroutine from the \(B\) output to the \(C\) in-port, yielding an \(A\to A+0\) interface, it can be used in more flexible ways. For example, a procedure which applies the rule \(rw\) to a single (arbitrary) \(A\) in the world state which satisfies property \(\phi\) is visualized in Figure 10.

The above primitives were designed to be both easily interpretable (e.g. control flow, focus shifting, and state changing are all separated concerns) and expressive enough to reproduce popular agent-based models. However, these primitives can be extended in a principled rather than ad-hoc manner, due to the specification that new primitives must contain the data of a morphism in \(\mathbf{DK}_{t}\). Furthermore, these primitives can be composed into libraries of operations at a higher level of abstraction for use by a domain expert.

### Implementation

Our implementation is general over a finitary exceptional polynomial monad as in Definition 2, using a codata structure for representing (potentially-infinite) behavior trees. These trees can be provided directly for each generator, although it is more convenient to programmatically generate them from a Mealy machine representation. By switching from \(t=\mathrm{Maybe}\) to \(\mathrm{List}+1\) or \(\mathrm{Dist}+1\), we obtain nondeterministic or probabilistic simulations.

For many graph rewriting examples, we wish to consider all possible matches, not merely an arbitrary match. A principled way to do this is to use a Rewrite which produces a list of outputs, corresponding to all possible matches. This is particularly important if a program is being constructed to empirically test which graphs are reachable via a collection of rewrite rules. Furthermore, if certain matches are more likely than others, then a distribution on this output list can be incorporated into our programs.

Figure 10: A program \(0\to 0\), in the context of the Maybe monad. Its input is a trajectory with an empty agent in its current time step. We then loop over possible agents of shape \(A\). As soon as one is found which both satisfies property \(\phi\) _and_ is successfully rewritten, we replace our focus \(A\)-shaped agent with the unique \(0\)-shaped agent and exit.

Figure 9: Primitive generating morphisms for graph rewriting programs which have nontrivial dynamics. **Left.** A generator for pure control flow. The value \(\mathbb{R}^{\geq 0}\) assigned to each outgoing wire is a weight to bias the probability of exiting through that port. **Right.** A generator for running a subroutine \(B\to C\) for each \(B\) agent in the current world, \(X_{i}\).

One way in which we can take advantage of our formalism for rewriting programs is functorial data migration [24].
Given a functor \(S\to T\) between ACSet schemas, a \(\Sigma\) migration pushes \(S\) ACSets forward to \(T\) ACSets in a universal way, while \(\Delta\) migration migrates data the other direction. As this is functorial, it is possible to apply these migrations to ACSet morphisms, rewrite rules, and entire graph rewriting programs. Another key advantage of working graphically is the various forms of composition of wiring diagrams, which allow for concise operations for organization of morphisms with \(\otimes\) and \(\dagger\); as well as hierarchically constructing programs via operadic substitution. ### Example: discrete Lotka Volterra model The first example agent-based model showcased by Netlogo [26] on their website is a model of wolf-sheep predation. An analogue of this model using the present framework is presented in Figure 11. Its construction leverages \(\Sigma\) and \(\Delta\) data migrations, operadic substitution, \(t=\) Maybe, and the primitives Rewrite, Weaken, ControlFlow, and Query. A pedagogical walkthrough of the model's construction and running the model is provided in an accompanying notebook. An example construction in \(\mathbf{D}\mathbf{K}_{t}\) using the Dist monad can also be found in the accompanying notebook. ## 5 Conclusion and Future work A general theory of dynamical tracing guided the development of a graph rewriting DSL. Because this language is understood mathematically and expressed as combinatorial data, rather than programming syntax, high level rewrite procedures can be understood and serialized, independently from any particular implementation (although an implementation in Julia was developed). Abstract operations like data migration and various forms of composition become natural to perform on these procedures due to their interpretation as morphisms in \(\mathbf{D}\mathbf{K}_{t}\). Furthermore, the assimilation of Mealy machines, system interfaces and behaviors, as well as monadic effects via polynomial functors inspired an implementation that is both concise and general. There are dimensions along which this work can be extended. The formalism we presented focuses on a dynamical system as something that is interacted with by a single agent. It remains frozen as the agent interacts with other systems, but this assumption could be relaxed as we consider multiple simultaneous agents--i.e. parallel programming with graph transformation. The notion of a trajectory presented here requires all ACSets involved to share the same schema (as there are no morphisms between ACSets of different schemas); this could be generalized to allow for multiscale modeling (data transformation in both a high level,'macroscopic' schema as well as a low level'microscopic' schema). We plan to represent more existing agent-based models as well as develop new ones in this formalism. Agent-based models are a preferred style of modeling in any situation with emergent effects, such as Figure 11: A program performing a wolf-sheep simulation. _Swap_ here represents a \(\Delta\) migration which swaps wolves and sheep in the schema of Figure 6, allowing for a shared implementation of actions that are common to wolves and sheep. The agent shape 0 is depicted as an unlabeled wire, although wires sharing the same target port need not all be labeled, as they must share the same agent for the program to be well-typed. physical phenomena (e.g. flow and diffusion simulations) [3], human transportation networks [22], and epidemiology [10]. 
Adding Catlab support for incremental graph matching [9] will be important for rewriting programs to be competitive in performance with established software like Netlogo and Kappa.
2307.12871
Topology-aware Piecewise Linearization of the AC Power Flow through Generative Modeling
Effective power flow modeling critically affects the ability to efficiently solve large-scale grid optimization problems, especially those with topology-related decision variables. In this work, we put forth a generative modeling approach to obtain a piecewise linear (PWL) approximation of AC power flow by training a simple neural network model from actual data samples. By using the ReLU activation, the NN models can produce a PWL mapping from the input voltage magnitudes and angles to the output power flow and injection. Our proposed generative PWL model uniquely accounts for the nonlinear and topology-related couplings of power flow models, and thus it can greatly improve the accuracy and consistency of output power variables. Most importantly, it enables to reformulate the nonlinear power flow and line status-related constraints into mixed-integer linear ones, such that one can efficiently solve grid topology optimization tasks like the AC optimal transmission switching (OTS) problem. Numerical tests using the IEEE 14- and 118-bus test systems have demonstrated the modeling accuracy of the proposed PWL approximation using a generative approach, as well as its ability in enabling competitive OTS solutions at very low computation order.
Young-ho Cho, Hao Zhu
2023-07-24T15:09:40Z
http://arxiv.org/abs/2307.12871v1
# Topology-aware Piecewise Linearization of the AC Power Flow through Generative Modeling ###### Abstract Effective power flow modeling critically affects the ability to efficiently solve large-scale grid optimization problems, especially those with topology-related decision variables. In this work, we put forth a generative modeling approach to obtain a piecewise linear (PWL) approximation of AC power flow by training a simple neural network model from actual data samples. By using the ReLU activation, the NN models can produce a PWL mapping from the input voltage magnitudes and angles to the output power flow and injection. Our proposed generative PWL model uniquely accounts for the nonlinear and topology-related couplings of power flow models, and thus it can greatly improve the accuracy and consistency of output power variables. Most importantly, it enables to reformulate the nonlinear power flow and line status-related constraints into mixed-integer linear ones, such that one can efficiently solve grid topology optimization tasks like the AC optimal transmission switching (OTS) problem. Numerical tests using the IEEE 14- and 118-bus test systems have demonstrated the modeling accuracy of the proposed PWL approximation using a generative approach, as well as its ability in enabling competitive OTS solutions at very low computation order. Generative modeling, piecewise linear approximation, nonlinear AC power flow, grid topology optimization. ## I Introduction Effective power flow modeling is critical for analyzing and optimizing large-scale power systems for efficient and reliable grid operations. With worldwide energy transitions and decarbonization, grid optimization tasks are increasingly challenged by e.g., uncertainty factors, extreme conditions, and fast computation needs. Meanwhile, optimizing the grid topology for increased flexibility has been advocated in problems like optimal transmission switching (OTS) [1], adaptive islanding [2], and post-disaster restoration [3]. Hence, it is important to develop effective power flow models that can facilitate the accurate and fast solutions for grid optimization problems, especially those with the combinatorial topology variables. There exist significant efforts in developing approximation models for the nonlinear AC power flow. Notably, linearized power flow models have been advocated due to their simplicity, such as the well-known DC model [4], or the first-order approximation at an operating point for better accuracy [5]. As linear models are very limited by their generalizability across all possible operation region [2], one straightforward extension is the piecewise linear (PWL) approximation approach by using multiple operating points; see e.g., [6]. By and large, there is a trade-off between accuracy and complexity for these model-based approaches, while the number and location of operating points could be difficult to select. To tackle these issues, data-driven approaches have been recently advocated as an alternative for PWL modeling. Trained from realistic power flow scenarios, machine learning models such as \(K\)-plane regression [7] and neural networks (NNs) [8, 9] have shown good power flow approximation capabilities. In particular, ReLU-based NNs can be used to construct simple yet accurate PWL power flow models by incorporating the power flow Jacobian information [8]. Similar ideas have been explored in the constraint learning framework [9] but for general grid operational constraints. 
Interestingly, these PWL models under the ReLU activation allow to reformulate grid optimization problems with nonlinear power flow as mixed-integer linear programs (MILPs), for which there exist efficient off-the-shelf solvers. For example, successful applications to unit commitment and distribution management have been considered. Albeit the success, these existing approaches mainly build on an end-to-end learning framework that does not consider the underlying physical models of power flow. In addition, none of them has yet considered a flexible grid topology. In this paper, we put forth a generative modeling approach to obtain the PWL approximation of AC power flow that is directly applicable to complex grid optimization problems. With the ReLU activation, we design the NN architecture to uniquely explore the generative structure of AC power flow. With the voltage and angle inputs, our proposed NN model first predicts the nonlinear terms that are common to all power variables in the first two layers, and then transforms these common terms to all line flows and power injections with two more layers. All the layers will be jointly trained to ensure an excellent consistency among all variables. This way, the proposed PWL model is generated in accordance with the power flow physics, while able to incorporate flexible topology connectivity. Thanks to our proposed NN design, we can cast grid optimization tasks like OTS into an MILP form for efficient solutions. We use the IEEE 14- and 118-bus test systems to validate the proposed PWL approximation in terms of improving power flow modeling accuracy over non-generative, as well as an excellent optimality/feasibility performance in solving AC-OTS. The rest of the paper is organized as follows. Section II provides the nonlinear AC power flow modeling. In Section III, we develop the PWL model that uses a neural network and discuss the formulation steps to attain a mixed-integer program. Section IV provides the simulation set-up for the IEEE 14- and 118-bus test systems and presents the numerical comparisons and validations for the proposed scheme, along with some concluding remarks. ## II Nonlinear AC Power Flow Modeling We first present the nonlinear AC power flow modeling for transmission systems, while introducing the relevant topology status variables and coupling terms useful for the discussions later on. Consider a transmission system consisting of \(n\) buses collected in the set \(\mathcal{N}:=\{1,\ldots,n\}\) and \(\ell\) lines (including transformers) in \(\mathcal{L}:=\{(i,j)\}\subset\mathcal{N}\times\mathcal{N}\). For each bus \(i\in\mathcal{N}\), let \(V_{i}\angle\theta_{i}\) denote the complex nodal voltage phasor, and \(\{P_{i},Q_{i}\}\) denote the active and reactive power injections, respectively. For each line \((i,j)\in\mathcal{L}\), let \(\theta_{ij}:=\theta_{i}-\theta_{j}\) denote the angle difference between bus \(i\) and \(j\), and \(\{P_{ij},Q_{ij}\}\) denote the active and reactive power flows from bus \(i\) to \(j\); and similarly for \(\{P_{ji},Q_{ji}\}\) from bus \(j\) to \(i\). In addition, the line's series and shunt admittance values are respectively denoted by \(y_{ij}=g_{ij}+\text{j}b_{ij}\) and \(y_{ij}^{sh}=g_{ij}^{sh}+\text{j}b_{i}^{sh}\). 
By defining a binary variable \(\epsilon_{ij}\in\{0,1\}\) to indicate the status for each line \((i,j)\in\mathcal{L}\) (01: off/on), the nodal power balance at bus \(i\) per the Kirchhoff's law becomes \[P_{i}=\sum_{(i,j)\in\mathcal{L}}\ \epsilon_{ij}P_{ij}, \tag{1a}\] \[Q_{i}=\sum_{(i,j)\in\mathcal{L}}\ \epsilon_{ij}Q_{ij}. \tag{1b}\] Note that these binary variables \(\{\epsilon_{ij}\}\) will be important for formulating topology-related grid optimization tasks such as the optimal transmission switching problem as detailed later on. Without any topology changes, they can be fixed at \(\epsilon_{ij}=1\). For each line \((i,j)\in\mathcal{L}\), the power flows relate to the angle difference \(\theta_{ij}\) and nodal voltages \(\{V_{i},V_{j}\}\), and in the case of transformer, its tap ratio \(a_{ij}\), as given by \[P_{ij}=V_{i}^{2}(\frac{g_{ij}}{a_{ij}^{2}}+g_{i}^{sh})-\frac{V_{ i}V_{j}}{a_{ij}}(g_{ij}\cos\theta_{ij}+b_{ij}\sin\theta_{ij}), \tag{2a}\] \[Q_{ij}= -V_{i}^{2}(\frac{b_{ij}}{a_{ij}^{2}}+b_{i}^{sh})-\frac{V_{i}V_{j }}{a_{ij}}(g_{ij}\sin\theta_{ij}-b_{ij}\cos\theta_{ij}). \tag{2b}\] For the transmission lines, we can simply set \(a_{ij}=1\). As for a transformer, the tap ratio is typically set within the range of \([0.9,1.1]\) and it only affects the primary-to-secondary direction. Thus, for the power flows in the secondary-to-primary direction, one can use \(a_{ij}=1\) in (2). One advantage of our proposed piecewise linear (PWL) approximation is to leverage the underlying coupling among active and reactive power flows. To this end, let us denote the three nonlinear terms in (2) by \[\gamma_{i}:=V_{i}^{2},\ \rho_{ij}:=V_{i}V_{j}\cos\theta_{ij},\ \text{and}\ \pi_{ij}:=V_{i}V_{j}\sin\theta_{ij}.\] To form the bi-directional power flows \(\{P_{ij},Q_{ij},P_{ji},Q_{ji}\}\) per line \((i,j)\), we only need the nonlinear terms \(\{\gamma_{i},\gamma_{j},\rho_{ij},\pi_{ij}\}\), and the mapping between the two groups of variables is simply linear. To represent the resultant linear relation of (2) in a matrix-vector form, let us concatenate all the power flow variables in \(\mathbf{z}^{pf}\in\mathbb{R}^{4\ell}\), and all the injection ones in \(\mathbf{z}^{inj}\in\mathbb{R}^{2n}\). In addition, let \(\mathbf{\gamma}\in\mathbb{R}^{n}\), \(\mathbf{\rho}\in\mathbb{R}^{\ell}\), and \(\mathbf{\pi}\in\mathbb{R}^{\ell}\) denote the respective vectors for the three groups of common terms. This way, we have \[\mathbf{z}^{pf}=\mathbf{W}^{\gamma}\mathbf{\gamma}+\mathbf{W}^{\rho}\mathbf{\rho}+ \mathbf{W}^{\pi}\mathbf{\pi}, \tag{3a}\] \[\mathbf{z}^{inj}=\mathbf{W}^{\psi}\mathbf{z}^{pf} \tag{3b}\] where the weight matrices \(\{\mathbf{W}^{\gamma},\mathbf{W}^{\rho},\mathbf{W}^{\pi},\mathbf{W}^{\psi}\}\) are of appropriate dimension given by the known line parameters and line status variables in (2) and (1), respectively. Clearly, the three groups of common terms are sufficient for fully generating all power flow and injection quantities, and our proposed generative modeling will work by predicting these terms as the first step. ### _Linear Approximation_ We discuss the linear approximation for the common terms, which will be used by the proposed PWL models. Linear approximation is a basic approach to deal with power flow nonlinearity, thanks to its simplicity and reasonable accuracy within a small region of the operating point. We will consider the first-order approximation method to attain a linearized modeling, while there also exist other popular methods such as fixed-point method [10]. 
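Before turning to the linearization, the nonlinear line-flow expressions in (2) and the common terms \(\gamma_{i}\), \(\rho_{ij}\), \(\pi_{ij}\) can be illustrated with a minimal numerical sketch. The snippet below is our own illustration rather than the authors' code; the admittance, shunt, and voltage values are arbitrary placeholders, not data from any test system.

```
import numpy as np

def line_flows(Vi, Vj, theta_ij, g, b, g_sh=0.0, b_sh=0.0, a=1.0):
    """Evaluate P_ij, Q_ij of (2) from the common terms gamma, rho, pi."""
    gamma_i = Vi**2                        # squared-voltage term
    rho_ij = Vi * Vj * np.cos(theta_ij)    # cosine coupling term
    pi_ij = Vi * Vj * np.sin(theta_ij)     # sine coupling term
    P_ij = gamma_i * (g / a**2 + g_sh) - (g * rho_ij + b * pi_ij) / a
    Q_ij = -gamma_i * (b / a**2 + b_sh) - (g * pi_ij - b * rho_ij) / a
    return P_ij, Q_ij

# placeholder per-unit line data (a transmission line, so a = 1)
print(line_flows(Vi=1.02, Vj=0.99, theta_ij=0.05, g=1.5, b=-12.0))
```

Stacking these terms over all lines and applying the fixed matrices in (3) then yields every flow and injection, which is the structure the generative model below exploits.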
For the squared voltage term, it can be approximated by \(\hat{\gamma}_{i}=2V_{i}-1\), \(\forall i\in\mathcal{N}\), based on a flat-voltage value of \(V_{o}\). Of course, this linearized model can be improved by using the exact operating point if different from the flat-voltage profile. As shown in [2], the former already attains a very high accuracy for power systems with well-regulated bus voltages within the p.u. range of [0.94, 1.06]. Thus, for simplicity, this linear representation of \(\hat{\mathbf{\gamma}}\) will be adopted in this work. Nonetheless, the other two terms \(\mathbf{\rho}\) and \(\mathbf{\pi}\) are more complicated to approximate than \(\mathbf{\gamma}\) due to the presence of angle differences. To this end, we consider the first-order approximation for the former at the operating point. To simplify the notation, let us use \([\mathbf{\rho};\ \mathbf{\pi}]=\mathbf{f}(\mathbf{x})\in\mathbb{R}^{2\ell}\) to represent the nonlinear mapping from the input \(\mathbf{x}\), which consists of the voltage magnitude \(\mathbf{V}=\{V_{i}\}_{i\in\mathcal{N}}\in\mathbb{R}^{n}\) and the angle difference \(\mathbf{\theta}=\{\theta_{ij}\}_{(i,j)\in\mathcal{L}}\in\mathbb{R}^{\ell}\). With a fixed operating point denoted by \(\mathbf{x}_{o}\), the first-order approximation becomes \[[\hat{\mathbf{\rho}};\hat{\mathbf{\pi}}] =\mathbf{f}(\mathbf{x}_{o})+\mathbf{J}(\mathbf{x}_{o})(\mathbf{x}-\mathbf{x}_{o})\] \[=\mathbf{f}(\mathbf{x}_{o})+\mathbf{J}(\mathbf{x}_{o})\Delta\mathbf{x} \tag{4}\] where \(\mathbf{J}(\mathbf{x}_{o})\) denotes the Jacobian matrix of \(\mathbf{f}(\mathbf{x})\) evaluated at \(\mathbf{x}_{o}\), while we use \(\Delta\mathbf{x}:=\mathbf{x}-\mathbf{x}_{o}\) for simplicity. The ensuing section will build upon the linear model in (4) by using a data-driven approach to improve the approximation accuracy. ## III PWL Approximation via Generative Modeling Our proposed PWL model uses a two-layer neural network (NN) to first approximate the common nonlinear terms in \([\mathbf{\rho};\mathbf{\pi}]\), followed by two additional linear layers to generate the power flow and injection variables. As illustrated in Fig. 1, using the input voltage and angle difference in \(\mathbf{x}=[\mathbf{V};\mathbf{\theta}]\), the first two layers will rely on the ReLU activation functions to form the best PWL model for \([\hat{\mathbf{\rho}};\mathbf{\bar{\pi}}]\) by adjusting the NN parameters. In addition, the last two layers use the fixed weight parameters from (3a) to generate the power flows for all lines, and accordingly, use (3b) to generate the power injection at all nodes. Note that the squared voltage \(\hat{\mathbf{\gamma}}\) used by the third layer of generating power flow is based on the simple linearized model as described in Sec. II-A. Thus, the proposed approximation fully matches the power flow relations and coupling among different terms, in an efficient and generative fashion. Using the ReLU activation, the first two layers can effectively produce a PWL mapping that can improve the accuracy of the linearized model in (4). The ReLU function is defined by \(\sigma(\cdot)\) where outputs the entry-wise maximum between the input value and 0. Intuitively, when the activation status of ReLU function stays unchanged within a certain region of input values, its functional output enjoys the same linear relation with the input in that region. 
Therefore, one can view that the combination of ReLU activation status would divide the whole input space into multiple smaller regions, within each it boils down to a purely linear function. Thus, the overall function over the whole input space becomes a PWL one. In this sense, the number of linear regions will grow exponentially with the number of activation functions, and it would be challenging to search for all possible combinations. Therefore, we will train the NN parameters within the first two layers from generated data samples that can best select the activation status and linear regions from the data. To concretely connect the NN model with PWL functions, we first consider a simple case of two linear regions obtained by using two different operating points, namely \(\mathbf{x}_{o}\) and \(\mathbf{x}_{1}\), as \[[\hat{\mathbf{\rho}};\hat{\mathbf{\pi}}] =\mathbf{f}(\mathbf{x}_{o})+\Delta\mathbf{y},\ \mathrm{with} \tag{5}\] \[\Delta\mathbf{y} =\begin{cases}\mathbf{J}(\mathbf{x}_{o})\Delta\mathbf{x},&\mathbf{x}\in\mathcal{ R}_{o}\\ \mathbf{J}(\mathbf{x}_{1})(\mathbf{x}-\mathbf{x}_{1})+\mathbf{r},&\mathbf{x}\in\mathcal{R}_{1}\end{cases}\] where \(\mathcal{R}_{q}\) represents the linear region corresponding to \(\mathbf{x}_{q}\), and the residue in \(\mathcal{R}_{1}\) is given by \(\mathbf{r}:=\mathbf{f}(\mathbf{x}_{1})-\mathbf{f}(\mathbf{x}_{o})\). To recover the NN structure for (5), we follow [11] to assume that the two Jacobian matrices therein are different by a low-rank component. Specifically, we assume that \[\mathbf{J}(\mathbf{x}_{1})\approxeq\mathbf{J}(\mathbf{x}_{o})+\mathbf{w}_{2}\mathbf{w}_{1}^{\top} \tag{6}\] where both \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) consist of the NN weight parameters, that can be of much lower dimension than the size of Jacobian matrix. For simplicity, we consider both of them to be vectors with \(\mathbf{w}_{1}\in\mathbb{R}^{n+\ell}\) and \(\mathbf{w}_{2}\in\mathbb{R}^{2\ell}\), and thus the difference term in (6) becomes a rank-one matrix. This will be expanded to a higher-rank case later by using weight matrices. Interestingly, the simplification in (6) allows to unify the two scenarios in (5) by using one ReLU activation function, as given by \[\Delta\mathbf{y}\approxeq\mathbf{J}(\mathbf{x}_{o})\Delta\mathbf{x}+\mathbf{w}_{2}\sigma(\mathbf{w}_ {1}^{\top}\Delta\mathbf{x}+b) \tag{7}\] where \(b\) is a scalar bias parameter. When the ReLU function is not activated, it becomes the linear model in \(\mathcal{R}_{o}\) of (5). Otherwise, upon the activation of \(\sigma(\cdot)\) the resultant linear model should approach the one in \(\mathcal{R}_{1}\) by recognizing the relation between the two Jacobian matrices in (6), as given by \[\Delta\mathbf{y}\approxeq\Big{(}\mathbf{J}(\mathbf{x}_{o})+\mathbf{w}_{2}\mathbf{w}_{1}^{\top} \Big{)}\Delta\mathbf{x}+\mathbf{w}_{2}b. \tag{8}\] In addition, to match the offset term in \(\mathcal{R}_{1}\), we would need to have \(\mathbf{w}_{2}b\approxeq\mathbf{J}(\mathbf{x}_{1})(\mathbf{x}_{o}-\mathbf{x}_{1})+\mathbf{r}\). In general, the two-layer form in (7) may not fully express or match the two-region linearized model at the two operating points as in (5). Nonetheless, (7) definitely constitutes as a PWL approximation for the underlying \(\mathbf{f}(\mathbf{x})\) function. In particular, the single ReLU activation in (7) has led to 2 linear regions for the resultant PWL model. 
The simple case of two linear regions can be expanded to encompass more complex PWL models by increasing the number of ReLU activation functions. If the first layer has \(q\) ReLU functions with different linear transformations as the input, it is possible to generate a PWL model with up to \(2^{q}\) linear regions. This way, the weight parameters form the two matrices \(\mathbf{W}_{1}\in\mathbb{R}^{(n+\ell)\times q}\) and \(\mathbf{W}_{2}\in\mathbb{R}^{2\ell\times q}\), as well as the bias vector \(\mathbf{b}\in\mathbb{R}^{q}\). The number of linear regions is related to the combination of activation statuses for all \(q\) ReLU functions. The larger \(q\) is, the more expressive the corresponding PWL model becomes, at the price of more model parameters to consider. This makes it difficult to determine the weight parameters using model-based linearization as in (5), motivating us to train these parameters from generated power flow samples.

Fig. 1: The structure of the proposed neural network that can generate the common terms on the second layer through the trainable weight matrix and generate power flows and injections on the third and fourth layers through the fixed weight matrices.

In general, the latter can be designed to reflect the realistic operating points and the statistical variability around them, and thus the resultant PWL model could outperform a model-based approach that pre-selects the points for linearization. Before presenting the training loss, recall that the full generative model in Fig. 1 includes the first two layers for obtaining the nonlinear terms and two fixed-weight layers for the power variables, as given by \[\begin{aligned}\mathbf{z}^{(1)}&=\sigma(\mathbf{W}_{1}^{\top}\Delta\mathbf{x}+\mathbf{b}),&\text{(9a)}\\ \mathbf{z}^{(2)}&=\mathbf{f}(\mathbf{x}_{o})+\big(\mathbf{J}(\mathbf{x}_{o})\Delta\mathbf{x}+\mathbf{W}_{2}\mathbf{z}^{(1)}\big),&\text{(9b)}\\ \mathbf{z}^{(3)}&=\mathbf{W}^{\gamma}\hat{\boldsymbol{\gamma}}+[\mathbf{W}^{\rho},\,\mathbf{W}^{\pi}]\,\mathbf{z}^{(2)},&\text{(9c)}\\ \mathbf{z}^{(4)}&=\mathbf{W}^{\psi}\mathbf{z}^{(3)}&\text{(9d)}\end{aligned}\] where the first two layers generalize the simple case of (7) to \(q\) ReLU functions, and the last two layers follow from (3). When we generate random power flow data, the actual values of both \(\mathbf{f}(\mathbf{x})=[\boldsymbol{\rho};\boldsymbol{\pi}]\) and \([\mathbf{z}^{pf};\mathbf{z}^{inj}]\) can be obtained, and using all of them in the loss function effectively maintains the relations among the corresponding predicted values in (9). Specifically, we can use the Euclidean distance to form the following loss function \[\mathcal{L}(\mathbf{W}_{1},\mathbf{W}_{2},\mathbf{b})=\|\mathbf{f}(\mathbf{x})-\mathbf{z}^{(2)}\|_{2}^{2}+\lambda\left\|[\mathbf{z}^{pf};\mathbf{z}^{inj}]-[\mathbf{z}^{(3)};\mathbf{z}^{(4)}]\right\|_{2}^{2} \tag{10}\] where \(\lambda>0\) denotes a regularization hyperparameter to balance the error terms among the layers. The average loss will be used to aggregate different samples, yielding the total training loss objective to minimize. After training, the proposed PWL model can fully generate the power flows and injections with linear transformations. On the other hand, (9a) is not a linear transformation due to the ReLU function. We also face nonlinear constraints when considering the binary line status variables with power flows. Hence, we will work to reformulate the ReLU function and the line status-related constraints into mixed-integer linear forms.
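As a rough sketch of how the four layers in (9) and the loss (10) might be coded, consider the following PyTorch fragment. All names (Wg, Wr, Wp, Wpsi standing for \(\mathbf{W}^{\gamma}\), \(\mathbf{W}^{\rho}\), \(\mathbf{W}^{\pi}\), \(\mathbf{W}^{\psi}\)), shapes, and initializations are our own illustrative assumptions, not the authors' implementation.

```
import torch
import torch.nn as nn

class GenerativePWL(nn.Module):
    """Sketch of the generative PWL model (9a)-(9d); fixed matrices are placeholders."""
    def __init__(self, n, l, q, f_xo, J_xo, Wg, Wr, Wp, Wpsi):
        super().__init__()
        self.W1 = nn.Parameter(0.01 * torch.randn(n + l, q))   # trainable, (n+l) x q
        self.W2 = nn.Parameter(0.01 * torch.randn(2 * l, q))   # trainable, 2l x q
        self.b = nn.Parameter(torch.zeros(q))                  # trainable bias
        # fixed quantities: f(x_o), Jacobian J(x_o), and the matrices from (3)
        for name, M in dict(f_xo=f_xo, J_xo=J_xo, Wg=Wg, Wr=Wr, Wp=Wp, Wpsi=Wpsi).items():
            self.register_buffer(name, M)

    def forward(self, dx, gamma_hat):
        z1 = torch.relu(dx @ self.W1 + self.b)                  # (9a)
        z2 = self.f_xo + dx @ self.J_xo.T + z1 @ self.W2.T      # (9b): predicted [rho; pi]
        ell = z2.shape[-1] // 2
        z3 = gamma_hat @ self.Wg.T + z2[..., :ell] @ self.Wr.T + z2[..., ell:] @ self.Wp.T  # (9c)
        z4 = z3 @ self.Wpsi.T                                   # (9d)
        return z2, z3, z4

def pwl_loss(f_x, z_pf, z_inj, z2, z3, z4, lam=1.0):
    """Per-sample Euclidean loss (10), averaged over the batch."""
    return ((f_x - z2) ** 2).sum(-1).mean() + lam * (
        ((z_pf - z3) ** 2).sum(-1) + ((z_inj - z4) ** 2).sum(-1)).mean()
```

Only the first two layers carry trainable parameters; the last two layers are frozen at the values implied by the line data and topology, which is what keeps the predicted flows and injections mutually consistent.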
### _Mixed-integer Linear Formulation for the PWL Model_ The proposed PWL models allow for formulating the nonlinear power flow equations into mixed-integer linear forms and thus enable efficient solutions for grid optimization problems involving topology variables. We will present the formulation steps to attain a mixed-integer linear program (MILP) for the PWL models, and also for dealing with the binary line status relation in (1). To adopt the PWL model in (9) into an MILP, the main issue lies in the ReLU function of (9a), as all other transformations are just linear ones. To tackle the ReLU function, we will use a relaxation technique based on the big-M tightening method [12]. For the \(k\)-th entry \(z_{k}^{(1)}\) in (9a), we will approximate it by introducing a binary variable \(\beta_{k}\), and its upper/lower bounds \(\{\bar{M}_{k},\underline{M}_{k}\}\). The two bounds can be determined through an off-line optimization procedure [13]. After determining these bounds and denoting the input in (9a) by \(\hat{\mathbf{z}}^{(1)}=\mathbf{W}_{1}^{\top}\Delta\mathbf{x}+\mathbf{b}\), the big-M method asserts that each \(z_{k}^{(1)}\) can be reformulated by using four linear inequality constraints, as given by \[0 \leq z_{k}^{(1)}\leq\bar{M}_{k}\beta_{k}, \tag{11a}\] \[\hat{z}_{k}^{(1)} \leq z_{k}^{(1)}\leq\hat{z}_{k}^{(1)}-\underline{M}_{k}(1-\beta_{k }), \tag{11b}\] The binary variable \(\beta_{k}\) critically relates to the ReLU activation status based on the input \(\hat{z}_{k}^{(1)}\). If the input \(\hat{z}_{k}^{(1)}>0\), then the constraints in (11b) enforce \(\beta_{k}\) to be one such that \(z_{k}^{(1)}=\hat{z}_{k}^{(1)}\) holds exactly. Otherwise, if \(\hat{z}_{k}^{(1)}\leq 0\), the constraints in (11a) enforce \(\beta_{k}\) to be zero to yield \(z_{k}^{(1)}=0\). This way, the output \(z_{k}^{(1)}\) from (11) exactly attains the ReLU-based output in (9a). Thus, with accurate upper/lower bounds, the big-M method allows for an equivalent reformulation of (9) into an MILP form. Similarly, we formulate the line status-related constraints into an MILP form. We face the multiplication of continuous variables \(\{P_{ij}\), \(Q_{ij}\}\) and binary variables \(\epsilon_{ij}\) that are denoted as \(\hat{P}_{ij}:=\epsilon_{ij}P_{ij}\) and \(\hat{Q}_{ij}:=\epsilon_{ij}Q_{ij}\). To tackle this multiplication term, we will use the McCormick relaxation technique [14] derived from the big-M tightening method. After attaining the upper/lower bounds of active and reactive power flows \(\{\bar{P}_{ij},\underline{P}_{ij}\}\) and \(\{\bar{Q}_{ij},\underline{Q}_{ij}\}\), each \(\{\hat{P}_{ij},\hat{Q}_{ij}\}\) can be reformulated by using four linear inequality constraints, as given by \[\underline{P}_{ij}\epsilon_{ij} \leq\hat{P}_{ij}\leq\bar{P}_{ij}\epsilon_{ij}, \tag{12a}\] \[\underline{Q}_{ij}\epsilon_{ij} \leq\hat{Q}_{ij}\leq\bar{Q}_{ij}\epsilon_{ij},\] (12b) \[P_{ij}+\bar{P}_{ij}(\epsilon_{ij}-1) \leq\hat{P}_{ij}\leq P_{ij}+\underline{P}_{ij}(\epsilon_{ij}-1),\] (12c) \[Q_{ij}+\bar{Q}_{ij}(\epsilon_{ij}-1) \leq\hat{Q}_{ij}\leq Q_{ij}+\underline{Q}_{ij}(\epsilon_{ij}-1). \tag{12d}\] The outputs \(\{\hat{P}_{ij},\hat{Q}_{ij}\}\) critically relate to \(\epsilon_{ij}\). If \(\epsilon_{ij}\) is equal to zero, then the constraints in (12a) and (12b) enforce \(\hat{P}_{ij}\) and \(\hat{Q}_{ij}\) to be zero. Otherwise, if \(\epsilon_{ij}\) is equal to one, then the constraints in (12c) and (12d) enforce \(\hat{P}_{ij}\) and \(\hat{Q}_{ij}\) to be \(P_{ij}\) and \(Q_{ij}\), respectively. 
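To make the reformulations in (11)-(12) concrete, the fragment below sketches the constraints for a single ReLU unit and a single line in Pyomo, the modeling tool used later in Section IV. The bound values are illustrative placeholders (the paper computes them offline), and this sketch omits the rest of the OTS formulation.

```
from pyomo.environ import ConcreteModel, Var, Constraint, Binary, Reals

m = ConcreteModel()

# --- big-M reformulation (11) for one ReLU unit k ---
m.zhat = Var(domain=Reals)        # pre-activation entry of W1' * dx + b
m.z = Var(domain=Reals)           # post-activation z_k^(1)
m.beta = Var(domain=Binary)       # activation indicator
M_up, M_lo = 5.0, -5.0            # illustrative bounds Mbar_k, Munder_k
m.relu1 = Constraint(expr=m.z >= 0)                               # (11a)
m.relu2 = Constraint(expr=m.z <= M_up * m.beta)                   # (11a)
m.relu3 = Constraint(expr=m.z >= m.zhat)                          # (11b)
m.relu4 = Constraint(expr=m.z <= m.zhat - M_lo * (1 - m.beta))    # (11b)

# --- McCormick reformulation (12) for Phat_ij = eps_ij * P_ij ---
P_lo, P_up = -2.0, 2.0            # illustrative flow bounds
m.P = Var(bounds=(P_lo, P_up))
m.Phat = Var(domain=Reals)
m.eps = Var(domain=Binary)        # line status
m.mc1 = Constraint(expr=m.Phat >= P_lo * m.eps)                   # (12a)
m.mc2 = Constraint(expr=m.Phat <= P_up * m.eps)                   # (12a)
m.mc3 = Constraint(expr=m.Phat >= m.P + P_up * (m.eps - 1))       # (12c)
m.mc4 = Constraint(expr=m.Phat <= m.P + P_lo * (m.eps - 1))       # (12c)
```

The reactive-flow constraints (12b) and (12d) take exactly the same form with \(Q_{ij}\) in place of \(P_{ij}\).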
This way, the outputs \(\hat{P}_{ij}\) and \(\hat{Q}_{ij}\) from (12) exactly attain the power flows based on the binary line status. Thus, the McCormick relaxation allows for an equivalent reformulation of the multiplication terms of power flows and line status variables into an MILP form.

## IV Numerical Studies

We have implemented the proposed generative modeling approach on the IEEE 14-bus and 118-bus test cases [15], to compare its performance in power flow modeling and grid topology optimization. The NN training has been performed in PyTorch with the Adam optimizer on a regular laptop with Intel(r) CPU @ 2.70 GHz, 32 GB RAM, and NVIDIA(r) RTX 3070 Ti GPU @ 8GB VRAM. We have formulated the OTS problem through Pyomo [16] and used the Gurobi optimization solver [17] for the resultant MILPs. To train the proposed NN-based PWL models in Fig. 1, we generate 10,000 samples from the actual power flow model, with the outputs of common nonlinear terms \(\{\boldsymbol{\gamma},\boldsymbol{\rho},\boldsymbol{\pi}\}\), as well as line flows and nodal injections. For each sample, we generate uniformly distributed voltage magnitudes within the range of [0.94, 1.06] p.u., and similarly for the angle, which randomly varies within \([-\pi/6,\pi/6]\) radians around the initial operating point. For the reference bus, namely Bus 1 in the 14-bus or Bus 69 in the 118-bus system, we fix its voltage magnitude and angle at default values. For the first two trainable layers in Fig. 1, we use \(q=25\) and \(q=75\) ReLU activation functions, respectively, for the two systems. The parameters are trained through backpropagation using the loss function in (10) with \(20\times 10^{3}\) epochs and a learning rate of \(2.5\times 10^{-3}\). For the training of the PWL model, we separate 90% of the data set as training and 10% as testing. The PWL model is trained only on the training dataset, and the loss is validated on the testing dataset.
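The sampling scheme just described might look as follows in a small numpy sketch; the reference-bus defaults of 1 p.u. and 0 rad are our assumption, and forming the per-line angle differences from the line list is omitted.

```
import numpy as np

rng = np.random.default_rng(0)
n_bus, n_samples = 14, 10_000
ref = 0                                   # Bus 1 as the reference bus in the 14-bus case

V = rng.uniform(0.94, 1.06, size=(n_samples, n_bus))                 # p.u. magnitudes
theta = rng.uniform(-np.pi / 6, np.pi / 6, size=(n_samples, n_bus))  # angles around x_o
V[:, ref], theta[:, ref] = 1.0, 0.0       # assumed default values for the reference bus
# Each sample is then run through the actual power flow equations to record
# {gamma, rho, pi}, line flows, and injections as training targets.
```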
These results have verified the benefits of incorporating the underlying coupling between active and reactive power flows considered by our proposed NN design. Furthermore, we also compare the error performance in predicting the active and reactive power injections, which can be formed directly from the line flows using (3b). Without any normalization basis, Fig. 4 instead shows the box plots for the root mean square error (RMSE) in predicting the injected power vectors in both test systems. Similar results have been observed for predicting the injections, with even more noticeable improvements in both active and reactive power values. This is because our proposed NN model in Fig. 1 has directly accounted for the power flow coupling, and thus its joint training process would achieve high consistency with nodal power balance. Thanks to the generative structure of the underlying NN design, our proposed PWL models can improve the accuracy and consistency in the resultant power flow approximation. ### _OTS Applications_ We adopt the proposed PWL models in solving the OTS problem using the 118-bus system. The objective function and operational constraints are set up similarly to the typical optimal power flow (OPF) problem. Additionally, OTS allows for line switching under the constraint of a total switching budget of \(\alpha\) lines, given by \[\sum_{(i,j)\in\mathcal{L}}\ \epsilon_{ij}\geq\ell-\alpha.\] The number \(\alpha\) is typically no greater than 5-10. Hence, the computation complexity of OTS is much higher than that of OPF, due to the integer line status variables. We introduce the proposed PWL models into the AC-OTS formulation by replacing the power flow constraints by (11), as well as the Fig. 4: Comparisons of the root mean square error (RMSE) in predicting the injected power vectors for both the proposed Gen and Direct methods in the (a) IEEE 14-bus and (b) 118-bus systems. Fig. 3: Comparisons of the (a) average error and (b) maximum error in approximating the line power flows for both the proposed Gen and Direct methods using the IEEE 118-bus system. Fig. 2: Comparisons of the (a) average error and (b) maximum error in approximating the line power flows for both the proposed Gen and Direct methods using the IEEE 14-bus system. line status constraints by (12). This way, the resultant MILP problem can be efficiently solved with solvers like Gurobi. We test the performance of the proposed PWL model-based OTS solutions. We compare it with the OTS solutions using the DC- and AC- power flow models, both provided by the open-source platform [18]. To compare across different OTS methods, we re-run the AC-OPF problem after fixing the topology with their line-switching decision outputs, using the MATPOWER [19] solver. This way, we can compare the metrics in terms of the objective costs (for optimality), as well as the percentage rates of infeasibility and constraint violations (for feasibility), using the corresponding AC-OPF outputs. For the two feasibility measures, the infeasible solution rates measure the percentage of infeasible solutions over all solutions, while the constraint violation rates are based on the percentage of over-limit voltage magnitude and angle over the infeasible solutions. Table I lists these metrics and also the computation time for each of the three OTS methods with a switching budget \(\alpha\) equal to 1 or 3. Note that the computation time corresponds to solving the OTS optimization problem, not the follow-up AC-OPF one. 
A total of 1,000 power flow scenarios by having nodal demand uniformly distributed within [50%, 200%] of the initial demands [15] have been used to compute the average of all these four metrics. For the objective cost, the AC-OTS method has been used as a baseline (normalized to be 100%), and thus the other two OTS methods using approximate models attain higher percentage values. Nonetheless, the proposed PWL model only slightly increases the objective cost by less than 2%, while attaining exactly the same infeasiblity metrics as the AC-based OTS solutions. Notably, our model achieves a highly competitive performance and also great efficiency, as its computation time is almost a tenth of the AC-OTS one. In particular, the PWL model has allowed for a very low computation complexity in the order of DC-OTS one. But the latter leads to significantly worse feasibility performance, with an almost order of magnitude higher of infeasiblity rates than PWL-based OTS. Thanks to its high modeling accuracy, our proposed PWL model can greatly simplify the computation for grid topology optimization tasks by using the MILP reformulation trick, while approaching the ideal optimality/performance performance. To sum up, we have designed a NN-based PWL approximation model for AC power flow with a good balance between model complexity and accuracy. Through its generative design, the proposed PWL models not only account for the underlying power flow coupling, but also allow for highly competitive solutions for topology-aware grid optimization problems. Our future research directions include improving the scalability of our proposed PWL models in large-scale power systems, as well as considering more generalized topology-aware grid optimization tasks like restoration and adaptive islanding.
2303.09018
A Spatially Varying Hierarchical Random Effects Model for Longitudinal Macular Structural Data in Glaucoma Patients
We model longitudinal macular thickness measurements to monitor the course of glaucoma and prevent vision loss due to disease progression. The macular thickness varies over a 6$\times$6 grid of locations on the retina with additional variability arising from the imaging process at each visit. Currently, ophthalmologists estimate slopes using repeated simple linear regression for each subject and location. To estimate slopes more precisely, we develop a novel Bayesian hierarchical model for multiple subjects with spatially varying population-level and subject-level coefficients, borrowing information over subjects and measurement locations. We augment the model with visit effects to account for observed spatially correlated visit-specific errors. We model spatially varying (a) intercepts, (b) slopes, and (c) log residual standard deviations (SD) with multivariate Gaussian process priors with Mat\'ern cross-covariance functions. Each marginal process assumes an exponential kernel with its own SD and spatial correlation matrix. We develop our models for and apply them to data from the Advanced Glaucoma Progression Study. We show that including visit effects in the model reduces error in predicting future thickness measurements and greatly improves model fit.
Erica Su, Robert E. Weiss, Kouros Nouri-Mahdavi, Andrew J. Holbrook
2023-03-16T01:12:14Z
http://arxiv.org/abs/2303.09018v2
A spatially varying hierarchical random effects model for longitudinal macular structural data in glaucoma patients ###### Abstract We model longitudinal macular thickness measurements to monitor the course of glaucoma and prevent vision loss due to disease progression. The macular thickness varies over a 6\(\times\)6 grid of locations on the retina with additional variability arising from the imaging process at each visit. Currently, ophthalmologists estimate slopes using repeated simple linear regression for each subject and location. To estimate slopes more precisely, we develop a novel Bayesian hierarchical model for multiple subjects with spatially varying population-level and subject-level coefficients, borrowing information over subjects and measurement locations. We augment the model with visit effects to account for observed spatially correlated visit-specific errors. We model spatially varying (a) intercepts, (b) slopes, and (c) log residual standard deviations (SD) with multivariate Gaussian process priors with Matern cross-covariance functions. Each marginal process assumes an exponential kernel with its own SD and spatial correlation matrix. We develop our models for and apply them to data from the Advanced Glaucoma Progression Study. We show that including visit effects in the model reduces error in predicting future thickness measurements and greatly improves model fit. \({}^{1}\)Department of Biostatistics, Fielding School of Public Health, University of California Los Angeles, \({}^{*}\)[email protected]; \({}^{\dagger}\)[email protected]; \({}^{\ddagger}\)[email protected] \({}^{2}\)Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, \({}^{\lx@sectionsign}\)[email protected] Bayesian modeling, ganglion cell complex, glaucoma, multivariate Gaussian processes, optical coherence tomography, random effects, spatially varying coefficients. ## 1 Introduction Glaucoma damages the optic nerve and is the second leading cause of blindness worldwide (Kingman, 2004). As there is no cure, timely detection of disease progression is imperative to identify eyes at high risk of or demonstrating early progression so that timely treatment can be provided and further visual loss prevented. Ophthalmologists assess glaucomatous progression by monitoring functional changes in visual fields or structural changes in the retina over time. Visual field (VF) measurements assess functional changes by measuring how well eyes are able to detect light. Repeatedly measuring the thickness of retinal layers, such as macular ganglion cell complex (GCC), with optical coherence tomography (OCT) allows ophthalmologists to evaluate central retinal (macular) structural change over time. Both VF and OCT obtain data from multiple locations across the retina. In current practice, clinicians detect progression by modeling functional or structural changes over time using simple linear regression (SLR) for each subject-location combination (Gardiner and Crabb, 2002; Nouri-Mahdavi et al., 2007; Tatham and Medeiros, 2017; Thompson et al., 2020). SLR does not accommodate the hierarchical structure that patients are members of a population and ignores the spatial arrangement of the data. For analyzing VF data at individual locations, Montesano et al. (2021) introduce a hierarchical model accounting for location and cluster levels fit to data from a single eye, Betz-Stablein et al. 
(2013) and Berchuck, Mwanza and Warren (2019) present models accounting for spatial correlation fit to data from a single eye, and Bryan et al. (2017) describe a two-stage approach to fit a hierarchical model taking subject, eye, hemifield (one half of the VF), and location into account. While these methods exist for VF data, they cannot be directly applied to structural macular data as the measurement processes are markedly different. Key features of VF data that differ from structural data include censoring, heteroskedasticity, and a different underlying spatial structure. We analyze data from the Advanced Glaucoma Progression Study (AGPS), a cohort of eyes with moderate to severe glaucoma. To monitor glaucoma progression, we model longitudinal macular GCC thickness measurements over a square \(6\times 6\) grid of 36 superpixels (roughly a \(20^{\circ}\times 20^{\circ}\) area) for all subjects. For a single subject, the intercepts, slopes, and residual standard deviations (SD) vary spatially across superpixel locations. Mohammadzadeh et al. (2021) model GCC data from each superpixel separately and compare different Bayesian hierarchical models, preferring a model with random intercepts, random slopes, and random residual SDs. Our desired model needs to account for both the hierarchical structure of the data and the spatial correlations in both the population- and subject-level intercepts, slopes, and residual SDs and in the residuals. The parameters at the population level summarize information from the whole cohort at each superpixel location. Additional difficulties in modeling GCC data arise from the amount and sources of measurement error. Thickness measurements are reliant on automated segmentation algorithms, which may introduce spatially correlated errors unique to each imaging scan. We show that including visit effects to account for visit-specific errors reduces error in predicting future thickness measurements and greatly improves model fit. In this study, we motivate and develop the Spatially varying Hierarchical Random Effects with Visit Effects (SHREVE) model, a novel Bayesian hierarchical model with spatially varying population- and subject-level coefficients and SDs, accounting for spatial and within-subject correlation, between-subject variation, and spatially correlated visit-specific errors. For the AGPS data, we allow the intercepts, slopes, and residual SDs to vary over space. Varying coefficient models are natural extensions to classical linear regression and extensively used in imaging studies and the analysis of spatial data (Hastie and Tibshirani, 1993; Ge et al., 2014; Zhu, Fan and Kong, 2014; Liu et al., 2019), where regression coefficients are allowed to vary smoothly as a function of one or more variables, and in our case, over spatial locations. Regression coefficients may vary over space in a discrete fashion as with areal units or in a continuous manner as with point-referenced data (Gelfand et al., 2010). In the context of imaging studies with grid data, a conditional autogressive (CAR) model (Gossl, Auer and Fahrmeir, 2001; Penny, Trujillo-Barreto and Friston, 2005; Ge et al., 2014) or a Gaussian process (GP) model (Zhang et al., 2016; Castruccio, Ombao and Genton, 2018) may be assumed for discrete or continuous spatial variation, respectively. 
In a GP model, coefficients from any finite set of locations has a multivariate normal distribution with a mean function and valid covariance function specifying the expected value at each location and covariance between coefficients at any two locations, respectively (Gelfand et al., 2010). Gelfand et al. (2003) first proposed the use of GPs to model spatially varying regression coefficients and multivariate Gaussian processes (MGP) for multiple spatially varying regression coefficients in a hierarchical Bayesian framework. We can assign GP priors at different levels in the hierarchy, which allows for flexible specification in hierarchical models (Gelfand and Schliep, 2016; Kim and Lee, 2017). In our case with three components, spatially varying intercepts, slopes, and residual SDs, we employ MGPs to model the correlations between components within a location and across locations at both the subject and population level. MGPs are specified with a multivariate mean function and cross-covariance function, defining the covariance between any two coefficients at any two locations (Banerjee, Carlin and Gelfand, 2014). For simplicity and computational convenience, separable cross-covariance functions are often used where components share the same spatial correlation and components within a location share a common covariance matrix, and the resulting covariance matrix is the Kronecker product of a covariance matrix between components and a spatial correlation matrix (Banerjee, Carlin and Gelfand, 2014). Assuming all components share a common spatial correlation structure is likely inadequate in practice, as processes may be very different from each other in nature. Instead, we propose a nonseparable cross-covariance function to allow each process to have its own spatial correlation function. Constructing valid cross-covariance models is a challenging task for nonseparable MGPs. Genton and Kleiber (2015) review approaches to construct valid cross-covariance functions for MGPs including the linear model of coregionalization (Wackernagel, 2013; Schmidt and Gelfand, 2003) and kernel and covariance convolution methods (Ver Hoef and Barry, 1998; Gaspari and Cohn, 1999). For univariate GPs, the Matern class of covariance models is widely used, featuring a smoothness parameter that defines the level of mean square differentiability and a lengthscale parameter that defines the rate of correlation decay (Guttorp and Gneiting, 2006). Gneiting, Kleiber and Schlather (2010) and Apanasovich, Genton and Sun (2012) introduce multivariate Matern models and provide necessary and sufficient conditions to allow the cross-covariance functions to have any number of components (processes) while allowing for different smoothnesses and rates of correlation decay for each component. We propose such a multivariate Matern construction to model our spatially varying intercepts, slopes, and residual SDs, so that each component is allowed its own spatial correlation structure. In Section 2, we describe the motivating data. In Section 3, we briefly review GPs and develop the SHREVE model. In Section 4, we apply the SHREVE model to GCC data and compare its performance to several nested models lacking visit effects or other model components. We give a concluding discussion in Section 5. ## 2 Ganglion cell complex data This section highlights data characteristics that motivate model development. We provide details on the imaging procedure and study subjects. 
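For intuition about the marginal processes used below, the following numpy sketch draws one realization of a spatially varying coefficient (for instance, a slope surface) over the \(6\times 6\) superpixel grid from a GP with an exponential kernel, i.e. the Matérn kernel with smoothness 1/2. The SD, lengthscale, and mean values are placeholders, not estimates from the AGPS data.

```
import numpy as np

# coordinates of the 36 superpixel centers on the 6 x 6 grid (unit spacing)
rows, cols = np.meshgrid(np.arange(6), np.arange(6), indexing="ij")
coords = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def exp_corr(d, lengthscale):
    """Exponential correlation, i.e. the Matern correlation with smoothness 1/2."""
    return np.exp(-d / lengthscale)

sd, lengthscale, mean = 0.4, 2.0, -0.5        # placeholder hyperparameters
K = sd**2 * exp_corr(dist, lengthscale) + 1e-8 * np.eye(36)

rng = np.random.default_rng(1)
slope_field = rng.multivariate_normal(np.full(36, mean), K)   # one draw of a slope surface
print(np.round(slope_field.reshape(6, 6), 2))
```

In the full model each component (intercepts, slopes, log residual SDs) would have its own SD and lengthscale, with cross-covariances tying the components together.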
### Macular optical coherence tomography Macular OCT has emerged as a standard imaging modality to assess changes in retinal ganglion cells (RGCs) (Mohammadzadeh et al., 2020). As glaucoma is characterized by progressive loss of RGCs, clinicians use macular OCT as a means to monitor changes in retinal thickness over time (Weinreb and Khaw, 2004). Macular GCC thickness, measured in microns (\(\mu\)m), has been shown to be more efficient for detecting structural loss regardless of glaucoma severity compared to measures of other macular layers (Mohammadzadeh et al., 2022). Glaucomatous damage to the macular area, reflected in thinning of GCC, has been associated with VF loss (Mohammadzadeh et al., 2020). Visual field loss occurs when part(s) of the peripheral vision is (are) lost. ### Advanced Glaucoma Progression Study We analyze data from the AGPS (Mohammadzadeh et al., 2021, 2022, 2022), an ongoing longitudinal study at the University of California, Los Angeles. The study adhered to the tenets of the Declaration of Helsinki and conformed to Health Insurance Portability and Accountability Act policies. All patients provided written informed consent at the time of enrollment in the study. The data include GCC thickness measurements from 111 eyes with at least 4 OCT scans and a minimum of approximately 2 years of observed follow-up time, up to 4.25 years from baseline. Subjects returned approximately every 6 months for imaging using Spectralis OCT (Heidelberg Engineering, Heidelberg, Germany). This device acquires \(30^{\circ}\times 25^{\circ}\) volume scans centered on the fovea, the center of the macula represented as a white dot in Figure 1 and as a black dot in subsequent figures (Mohammadzadeh et al., 2020). We used built-in software, the Glaucoma Module Premium Edition, to automatically segment macular layers of interest. GCC thickness is calculated by summing the thicknesses of the retinal nerve fiber layer, inner plexiform layer, and ganglion cell layer. The posterior pole algorithm of the Spectralis reports layer thickness averaged over pixels within a _superpixel_ with superpixels forming an 8 \(\times\) 8 grid of locations, as shown in Figure 1. We display superpixels in right eye orientation with superpixels labeled as row number 1-8, a dot, then column number 1-8. Superpixels in rows 1-4 are located in the _superior hemiretina_ and rows 5-8 are located in the _inferior hemiretina_; the temple and nose are to the left and right, respectively. Left eyes are mirror images of right eyes and are flipped left-right for presentation and analysis. Because there is substantial measurement noise in the outer ring of superpixels, rows 1 and 8 and columns 1 and 8 (Miraftabi et al., 2016), we analyze only the central 6 \(\times\) 6 superpixels as shown in Figure 1. ### Data exploration Let observation \(y_{ijk}\) be the GCC thickness measure in \(\mu\)m of subject \(i\!=\!1,\ldots,\!n\) at visit \(j\!=\!1,\ldots,\!J_{i}\), where \(J_{i}\) is the number of visits for subject \(i\), in superpixel \(k\!=\!1,\ldots,\!K\) observed at time \(t_{ij}\), with \(t_{i1}\!=\!0\) for all subjects. Location \(\mathbf{s}_{k}\!=\!(\text{row}_{k},\text{column}_{k})\) denotes the spatial coordinates of superpixel \(k\) in two-dimensional space. Initially, we remove any zero thickness values \(y_{ijk}\!=\!0\), which indicate errors of measurement. 
We define a profile for subject \(i\) in superpixel \(k\) as the sequence of observations (\(t_{ij},y_{ijk}\)) from visits \(j=1,\ldots,J_{i}\) and plot profiles of GCC thickness against time by connecting consecutive observations with line segments. For all subjects and superpixels, we plotted data in profile plots, which identified a number of outliers. We applied a semi-automated algorithm to identify pairs of consecutive points that have large differences in GCC thicknesses between the consecutive visits. For each subject and superpixel, we calculated the consecutive-visit absolute differences \(|y_{ijk}-y_{i(j-1)k}|\) and the consecutive-visit centered slopes \(|(y_{ijk}-y_{i(j-1)k})/(t_{ij}-t_{i(j-1)})+0.5|\), which were centered around \(-0.5\)\(\mu\)m/year, the mean of slopes across all pairs of consecutive visits for all subjects and superpixels. We flagged pairs of observations (\(y_{ijk},y_{i(j-1)k}\)) with absolute centered slopes greater than 24 \(\mu\)m/year and absolute differences greater than 5 \(\mu\)m as candidates for removal. We calculated the sum of absolute visit differences for each profile, \(\sum_{j=2}^{J_{i}}|y_{ijk}-y_{i(j-1)k}|\), and removed the point that resulted in the largest reduction in the sum of absolute visit differences. For each profile, if two or more observations were identified as outliers, we removed all remaining observations as well. Figure 1: Visualization of the \(8\times 8\) grid of superpixels and labels from the Spectralis posterior pole algorithm. The inner 36 superpixels included in the analysis are shaded in gray and delineated with thicker lines. Superpixels are shown in right eye orientation where rows 1-4 are located in the superior hemiretina and rows 5-8 are located in the inferior hemiretina; the temple and nose are to the left and right, respectively. Superpixel labels are row number 1-8, a dot, then column number 1-8. The black dot indicates the foveal center for visual orientation. Eyes enrolled in the AGPS had moderate to severe glaucoma and thus exhibit a range of glaucomatous damage. Figure 2 shows profile plots, after outlier removal, of GCC thickness in \(\mu\)m against time in years since baseline visit for 10 subjects at all 36 superpixels. Baseline GCC varies across subjects within superpixels, with maximum differences in thicknesses between any of the AGPS subjects ranging from 40 to 100 \(\mu\)m across superpixels. From Figure 2, we note that intercepts are spatially correlated and repeated thickness measurements for each subject at each superpixel are highly correlated. The leftmost, temporal superpixels tend to have lower baseline thicknesses and smaller spread than the rightmost, nasal superpixels, and the nasal superpixels show more variability both within and between subjects. Figure 3 shows heatmaps of GCC measurements over time for four subjects. Each row represents a different subject and each block of \(6\times 6\) superpixels displays the GCC thicknesses observed in rows 2-7 and columns 2-7 at the labeled follow-up time above the block. The range of baseline thicknesses across superpixels varies across subjects, with the first subject's baseline values ranging between 53 and 82 \(\mu\)m, while the third subject's baseline values range between 59 and 115 \(\mu\)m. Changes in GCC thickness over time also differ between Subject 1 and Subject 3. Subject 3 has a noticeable decrease in thickness, thinning over time in many superpixels (e.g., 2.7, 3.3, and 4.3), while Subject 1 is more stable over time. 
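To make the outlier screen concrete, the following is a minimal sketch of the consecutive-visit flagging rule for a single subject-superpixel profile; it is illustrative only and not the original cleaning code. The thresholds (24 \(\mu\)m/year, 5 \(\mu\)m) and the centering at \(-0.5\)\(\mu\)m/year are taken from the description above, while the function names and the way a single point is chosen from a flagged pair are assumptions.

```python
import numpy as np

def flag_outlier_pairs(t, y, center=0.5, slope_thresh=24.0, diff_thresh=5.0):
    """Flag consecutive-visit pairs (j-1, j) in one subject-superpixel profile.

    t, y: 1-D arrays of visit times (years) and GCC thicknesses (microns), in visit order.
    Returns the indices j whose pair (j-1, j) exceeds both thresholds."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    diffs = np.abs(np.diff(y))                      # |y_j - y_{j-1}|
    slopes = np.diff(y) / np.diff(t)                # (y_j - y_{j-1}) / (t_j - t_{j-1})
    centered = np.abs(slopes + center)              # slopes are centered around -0.5
    return np.where((centered > slope_thresh) & (diffs > diff_thresh))[0] + 1

def point_to_drop(y, j):
    """From a flagged pair, drop the point whose removal most reduces the profile's
    sum of absolute consecutive-visit differences (one reading of the rule above)."""
    sad = lambda a: np.sum(np.abs(np.diff(a)))
    candidates = [j - 1, j]
    reductions = [sad(y) - sad(np.delete(y, c)) for c in candidates]
    return candidates[int(np.argmax(reductions))]
```

For example, a drop from 80 to 60 \(\mu\)m over half a year has centered slope \(|-40+0.5|=39.5\)\(\mu\)m/year and absolute difference 20 \(\mu\)m, so the pair would be flagged.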
Within subjects, there is a range of baseline thicknesses and changes over time across superpixels. These data characteristics motivate the need to model spatially varying random intercepts and slopes. Analyzing longitudinal GCC data separately in each superpixel, Mohammadzadeh et al. (2021) show that models with subject-specific residual SDs perform better than models with fixed residual SDs. Figure 4 shows heatmaps of estimated slopes (top) and residual SDs (bottom) from SLR of GCC thickness on time since baseline in each superpixel for the same four subjects as in Figure 3, where each column is a different subject. Estimated slopes and residual SDs appear spatially correlated. Figure 2: Profile plots of ganglion cell complex (GCC) thickness measurements for 10 subjects across 36 superpixels against follow-up time in years since baseline visit. Each color represents a different subject. These profiles illustrate the variability in baseline GCC thickness across the 10 subjects within superpixels, with a range within a superpixel of up to 84 \(\mu\)m. The average baseline thicknesses over subjects vary across superpixels, generally increasing from the temporal to nasal regions (left to right). Figure 4: Heatmaps of (a) estimated slopes (\(\mu\)m/year) and (b) residual standard deviations (SD) (\(\mu\)m) for the same 4 subjects as in Figure 3 using simple linear regressions of ganglion cell complex (GCC) thicknesses on time since baseline in each superpixel. Each column is a different subject. Estimated slopes appear spatially correlated within subjects. Subject 3 has particularly steep negative slopes in the upper half of the eye, while Subjects 1 and 2 have more stable slopes across superpixels. The estimated residual SDs vary within subject by superpixel location. Subjects 1 and 4 have more uniform residual SDs across locations while Subjects 2 and 3 have some superpixels with much higher residual SDs. Figure 3: Heatmaps of ganglion cell complex (GCC) thickness measurements (\(\mu\)m) across 8 visits for 4 subjects for all 36 superpixels (top left 2.2 to bottom right 7.7). Each row is a different subject. The follow-up time of each visit is labeled at the top of each block. All maps share a common color scale for comparison. GCC measurements are highly correlated within subjects over time, illustrated by similar color patterns over time. The color patterns also highlight the spatial correlation between locations. GCC measurements are highly variable across subjects, as seen by the difference in color shades. Over time, the third row subject has noticeable thinning in many superpixels while the other subjects are more stable in comparison. Bryan et al. (2015) model errors that affect all locations at a visit in glaucomatous VFs as global visit effects. Similar to VF data, we suspect there are spatially correlated errors in GCC measurements. We speculate these effects arise from the imaging process and segmentation errors that affect multiple locations. To better visualize these effects, we plot empirical residuals \(y_{ijk}-\overline{y}_{ik}\), where \(\overline{y}_{ik}=\sum y_{ijk}/J_{i}\). Empirical residual profile plots allow us to better see time trends within and across superpixels. Figure 5 provides an example of correlated errors across superpixels, where there is a noticeable increase at four years of follow-up. It is unlikely that such an increase is due to thickening of GCC, but rather due to errors in the imaging process or layer segmentation. 
Figure 5 shows spatially correlated slopes noticeable in the region from superpixels 3.4 to 3.7 down to 6.4 to 6.7. ### Modeling goals We are interested in estimating individual rates of change at the superpixel level and predicting future GCC observations. To this end, we explicitly model the correlations between intercepts, slopes, and residual SDs at both the population and subject level. The intercepts are correlated with the magnitude of the slopes; as the baseline thickness increases, rates of change are faster (Rabiolo et al., 2020). Healthier eyes tend to have more thickness at baseline, with more potential for progression but also more opportunities for clinicians to intervene and prevent vision loss. Accounting for the relationships between measurement variability and either baseline thickness or slopes may help to better estimate the rates of progression and elucidate whether increased noise is associated with worsening disease. As glaucoma progresses, the ganglion cell and inner plexiform layers, two sublayers of GCC, show increased measurement variability especially as measures tend towards their floor (Miraftabi et al., 2016). ## 3 Methods This section reviews the MGP priors we use to model the spatially varying visit effects and coefficients, constructs the SHREVE model, defines the priors, and introduces model comparison metrics. Figure 5: Empirical residual profile plots (superpixel mean subtracted from ganglion cell complex (GCC) thickness) for a single subject across 36 superpixels. There is an increase at four years for many superpixel locations suggesting visit-specific spatially correlated errors. ### Gaussian processes A Gaussian spatial process (Williams and Rasmussen, 2006; Bogachev, 1998; Banerjee, Carlin and Gelfand, 2014) is a stochastic process \(\{z(\mathbf{s}):\mathbf{s}\in\mathbb{R}^{d}\}\) in which any finite collection of real-valued random variables \(\{z(\mathbf{s}_{1}),\ldots,z(\mathbf{s}_{K})\}\) is distributed as multivariate normal for every set of \(K\geq 1\) spatial locations \(\mathbf{s}_{1},\ldots,\mathbf{s}_{K}\in\mathbb{R}^{d}\), for dimension \(d\geq 1\); we work only with \(d=2\). We denote a GP as \[z(\mathbf{s})\sim\text{GP}(m(\mathbf{s}),C(\mathbf{s},\mathbf{s}^{\prime})),\] with mean function \(m(\mathbf{s})=\mathbb{E}[z(\mathbf{s})]\) and covariance function \(C(\mathbf{s},\mathbf{s}^{\prime})=\text{cov}[z(\mathbf{s}),z(\mathbf{s}^{ \prime})]\) for two locations \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\), which may be the same or distinct. The covariance function \(C(\mathbf{s},\mathbf{s}^{\prime})\) models how similar outcomes \(z(\mathbf{s})\) and \(z(\mathbf{s}^{\prime})\) are. We assume stationary and isotropic covariance functions \(C(\mathbf{s},\mathbf{s}^{\prime})\). Stationarity means \(C(\mathbf{s},\mathbf{s}^{\prime})\) depends only on the spatial separation vector \(\mathbf{s}-\mathbf{s}^{\prime}\) between points, and isotropy means \(C(\mathbf{s},\mathbf{s}^{\prime})\) depends only on the distance between locations \(h=\|\mathbf{s}-\mathbf{s}^{\prime}\|\), where \(\|\cdot\|\) is the Euclidean norm, i.e., \(C(\mathbf{s},\mathbf{s}^{\prime})\equiv C(h)\). 
We use Matern covariance functions of the form \(\sigma^{2}M(h|\nu,\ell)\), where \(\sigma^{2}>0\) is the variance and \(M(h|\nu,\ell)\) is the Matern correlation function (Matern, 1986) \[M(h|\nu,\ell)=\frac{2^{1-\nu}}{\Gamma(\nu)}(\sqrt{2\nu}h/\ell)^{\nu}K_{\nu}( \sqrt{2\nu}h/\ell),\] where \(\nu>0\) is the smoothness parameter, \(\ell>0\) is the lengthscale, and \(K_{\nu}\) is the modified Bessel function of the second kind of order \(\nu\)(Abramowitz and Stegun, 1964). In general, the process is \(m\) times mean square differentiable if and only if \(\nu>m\)(Williams and Rasmussen, 2006). The lengthscale parameter \(\ell\) controls how quickly the correlation decays as a function of distance with larger \(\ell\) indicating slower correlation decay. ### Multivariate Gaussian processes Let \(\mathbf{z}(\mathbf{s})=(z_{1}(\mathbf{s}),\ldots,z_{P}(\mathbf{s}))^{T}\) be a \(P\times 1\) stochastic process where each component \(z_{p}(\mathbf{s})\) for \(p=1,\ldots,P\) is a scalar random variable at location \(\mathbf{s}\). Then \(\mathbf{z}(\mathbf{s})\) is an MGP if any random vector \((\mathbf{z}(\mathbf{s}_{1})^{T},\ldots,\mathbf{z}(\mathbf{s}_{K})^{T})^{T}\) from any set of \(K\geq 1\) locations \(\mathbf{s}_{1},\ldots,\mathbf{s}_{K}\) has a multivariate normal distribution. The MGP is an extension of the univariate GP where the random variables \(\mathbf{z}(\mathbf{s})\) are vector-valued. We denote an MGP as \[\mathbf{z}(\mathbf{s})\sim\text{MGP}(\mathbf{m}(\mathbf{s}),\mathbf{C}( \mathbf{s},\mathbf{s}^{\prime})),\] with \(P\times 1\) mean vector \(\mathbf{m}(\mathbf{s})\) and \(P\times P\) cross-covariance matrix function \(\mathbf{C}(\mathbf{s},\mathbf{s}^{\prime})=\text{cov}[\mathbf{z}(\mathbf{s}),\mathbf{z}(\mathbf{s}^{\prime})]=\{C_{pq}(\mathbf{s},\mathbf{s}^{\prime})\}_ {p,q=1}^{P}\). Functions \(C_{pq}(\mathbf{s},\mathbf{s}^{\prime})=\text{cov}[z_{p}(\mathbf{s}),z_{q}( \mathbf{s}^{\prime})]\), for \(p,q=1,\ldots,P\), are called marginal covariance functions when \(p=q\) and cross-covariance functions when \(p\neq q\). We want to allow each marginal process to have its own spatial correlation function. Each marginal covariance function \(C_{pp}\) is modeled with a Matern correlation function, \(C_{pp}(h)=\sigma_{pp}^{2}M(h|\nu_{pp},\ell_{pp})\), for \(p=1,\ldots,P\), with variance parameter \(\sigma_{pp}^{2}>0\), smoothness parameter \(\nu_{pp}\), and lengthscale parameter \(\ell_{pp}\). We model each cross-covariance function \(C_{pq}\) with a Matern correlation function, \(C_{pq}(h)=\sigma_{pq}M(h|\nu_{pq},\ell_{pq})\), for \(1\leq p\neq q\leq P\), with covariance parameter \(\sigma_{pq}\), smoothness parameter \(\nu_{pq}\), and lengthscale parameter \(\ell_{pq}\). We assume marginal covariance \(C_{pp}\) and cross-covariance \(C_{pq}\) functions to be Matern following sufficient conditions on parameters \(\nu_{pp}\), \(\nu_{pq}\), \(\ell_{p}\), \(\ell_{pq}\), \(\sigma_{pp}\), and \(\sigma_{pq}\) that result in a nonnegative definite cross-covariance function (Apanasovich, Genton and Sun, 2012). We use the simplest parameterization, where no additional parameters beyond \(\sigma_{pp}^{2}\), \(\nu_{pp}\), and \(\ell_{pp}\) are required to model the smoothness and lengthscale parameters for the cross-covariances. 
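For illustration (this is a sketch, not code from the analysis), the Matern correlation \(M(h|\nu,\ell)\) defined above can be evaluated directly and assembled into a correlation matrix over the \(6\times 6\) grid of superpixel centers; the lengthscale value below is an arbitrary placeholder. At \(\nu=1/2\) the function reduces to the exponential kernel \(\exp(-h/\ell)\), the special case adopted later in Section 3.4.

```python
import numpy as np
from scipy.special import gamma, kv   # kv: modified Bessel function of the second kind

def matern(h, nu, ell):
    """Matern correlation M(h | nu, ell) for an array of distances h; equals 1 at h = 0."""
    h = np.asarray(h, dtype=float)
    out = np.ones_like(h)
    pos = h > 0
    u = np.sqrt(2.0 * nu) * h[pos] / ell
    out[pos] = (2.0 ** (1.0 - nu) / gamma(nu)) * (u ** nu) * kv(nu, u)
    return out

# 36 superpixel centers (rows and columns 2..7) and their pairwise distances
rows, cols = np.meshgrid(np.arange(2, 8), np.arange(2, 8), indexing="ij")
S = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
H = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)

R = matern(H, nu=0.5, ell=2.0)        # with nu = 1/2 this equals exp(-H / 2.0)
assert np.allclose(R, np.exp(-H / 2.0))

# A GP restricted to a finite set of locations is just a multivariate normal:
# one mean-zero draw on the grid with marginal variance 4
draw = np.random.default_rng(1).multivariate_normal(np.zeros(len(S)), 4.0 * R)
```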
The cross-covariance function \(\mathbf{C}(\mathbf{s},\mathbf{s}^{\prime})\) is nonnegative definite when \[\nu_{pq}(\nu_{pp},\nu_{qq})=\frac{\nu_{pp}+\nu_{qq}}{2},\qquad\ell_{pq}(\ell_{p},\ell_{q})=\sqrt{\frac{2}{\ell_{p}^{-2}+\ell_{q}^{-2}}}, \tag{1}\] \[\sigma_{pq}(\nu_{pp},\nu_{qq},\ell_{p},\ell_{q},\sigma_{pp},\sigma_{qq},R_{pq})=\sigma_{pp}\sigma_{qq}\,\frac{\ell_{pq}(\ell_{p},\ell_{q})}{\sqrt{\ell_{p}\ell_{q}}}\,\frac{\Gamma(\nu_{pq}(\nu_{pp},\nu_{qq}))}{\Gamma^{1/2}(\nu_{pp})\,\Gamma^{1/2}(\nu_{qq})}\,R_{pq}, \tag{2}\] where \(\mathbf{R}=\{R_{pq}\}\) is a nonnegative definite \(P\times P\) correlation matrix with diagonal elements equal to 1 and nondiagonal elements in the closed interval [-1, 1]. The cross-correlation \(\rho_{pq}=\sigma_{pq}/(\sigma_{pp}\sigma_{qq})=\text{corr}(z_{p}(\mathbf{s}),z_{q}(\mathbf{s}))\) is the correlation between \(z_{p}(\mathbf{s})\) and \(z_{q}(\mathbf{s})\). ### Model specification for a spatially varying hierarchical random effects with visit effects model The proposed SHREVE model allows random intercepts, slopes, and log residual SDs to be correlated within and across locations while accounting for within-subject variability and spatially correlated visit-specific errors. For ease of notation, we specify the model assuming no missing data but note that complete data is not a requirement. We model \(y_{ijk}\) as \[y_{ijk} =\alpha_{0k}+\alpha_{1k}t_{ij}+\beta_{0ik}+\beta_{1ik}t_{ij}+\gamma_{ijk}+\epsilon_{ijk},\] \[\epsilon_{ijk}|\tau_{ik}^{2} \sim\text{N}(0,\tau_{ik}^{2}),\] \[\log\tau_{ik} =\phi_{k}+\sigma_{ik},\] where \(\alpha_{0k}\), \(\alpha_{1k}\), and \(\phi_{k}\) are the superpixel \(k\) population-level intercept, slope, and log residual SD processes, respectively, \(\beta_{0ik}\), \(\beta_{1ik}\), and \(\sigma_{ik}\) are subject-specific intercept, slope, and log residual SD processes, respectively, in superpixel \(k\), and \(\gamma_{ijk}\) is the visit effect process at location \(\mathbf{s}_{k}\) for subject \(i\), visit \(j\). Figure 6 presents the model graphically. Figure 6: Plate diagram of the proposed model. Blue nodes are latent variables, red nodes are observed variables, gray nodes are deterministic nodes, GP stands for Gaussian process, and MGP stands for multivariate Gaussian process. Plates are used to group variables repeated together over subjects, time, and space, where \(i=1,\ldots,N\) indexes subjects, \(j=1,\ldots,J_{i}\) indexes subject \(i\)'s visits, and \(k=1,\ldots,K\) indexes superpixel locations. Let \(\mathbf{\alpha}_{k}=(\alpha_{0k},\alpha_{1k},\phi_{k})^{T}\) denote the population-level (PL) multivariate spatial process, which we model with MGP \(\mathbf{\alpha}_{k}|\mathbf{\mu},\mathbf{\theta}_{\alpha}\sim\text{MGP}(\mathbf{\mu},\mathbf{C}_{\alpha}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}}))\), with mean vector \(\mathbf{\mu}=(\mu_{0},\mu_{1},\mu_{\phi})^{T}\) and PL cross-covariance matrix function \(\mathbf{C}_{\alpha}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}})\) with hyperparameters \(\mathbf{\theta}_{\alpha}=\{\sigma_{\alpha,pp},\nu_{\alpha,p},\ell_{\alpha,p},\mathbf{R}_{\alpha},p\in\{1,2,3\}\}\). The parameters \(\mu_{0}\), \(\mu_{1}\), and \(\mu_{\phi}\) are the global grand mean intercept, slope, and log residual SD, respectively. 
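Read concretely, conditions (1) and (2) map the marginal parameters and a correlation entry \(R_{pq}\) to the cross-covariance parameters. The helper below is an illustrative transcription of those conditions (not the fitting code, and the names are placeholders); it includes a check that the construction returns the marginal variance when \(p=q\).

```python
import numpy as np
from scipy.special import gamma

def cross_matern_params(nu_p, nu_q, ell_p, ell_q, sigma_p, sigma_q, R_pq):
    """Cross-covariance parameters implied by (1)-(2); sigma_p, sigma_q are marginal SDs."""
    nu_pq = 0.5 * (nu_p + nu_q)                                  # (1)
    ell_pq = np.sqrt(2.0 / (ell_p ** -2 + ell_q ** -2))          # (1)
    sigma_pq = (sigma_p * sigma_q                                # (2)
                * ell_pq / np.sqrt(ell_p * ell_q)
                * gamma(nu_pq) / np.sqrt(gamma(nu_p) * gamma(nu_q))
                * R_pq)
    return nu_pq, ell_pq, sigma_pq

# With p = q and R_pp = 1, the cross scale reduces to the marginal variance sigma_p**2.
nu_pq, ell_pq, sigma_pq = cross_matern_params(0.5, 0.5, 2.0, 2.0, 3.0, 3.0, 1.0)
assert np.isclose(sigma_pq, 9.0) and np.isclose(ell_pq, 2.0) and np.isclose(nu_pq, 0.5)
```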
PL marginal covariance functions \(C_{\alpha,pp}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}})=\sigma_{\alpha,pp}^{2}M(h|\nu_{\alpha,p},\ell_{\alpha,p})\), for \(p=1,\ldots,3\), have PL marginal variances \(\sigma_{\alpha,pp}^{2}\), PL smoothness parameters \(\nu_{\alpha,p}\), and PL lengthscales \(\ell_{\alpha,p}\). PL cross-covariance functions \(C_{\alpha,pq}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}})=\sigma_{\alpha,pq}M(h|\nu_{\alpha,pq},\ell_{\alpha,pq})\) have covariance parameters \(\sigma_{\alpha,pq}\) between processes \(p\) and \(q\), smoothness parameters \(\nu_{\alpha,pq}\), and lengthscales \(\ell_{\alpha,pq}\). Here \(h=\|\mathbf{s}_{k}-\mathbf{s}_{k^{\prime}}\|\) is the distance between two superpixel locations, \(\sigma_{\alpha,pq}\equiv\sigma_{pq}(\nu_{\alpha,p},\nu_{\alpha,q},\ell_{\alpha,p},\ell_{\alpha,q},\sigma_{\alpha,pp},\sigma_{\alpha,qq},R_{\alpha,pq})\) is a function of \(\sigma_{\alpha,pp}\) and \(\sigma_{\alpha,qq}\) as defined in (2), and \(\ell_{\alpha,pq}\equiv\ell_{pq}(\ell_{\alpha,p},\ell_{\alpha,q})\) is a function of \(\ell_{\alpha,p}\) and \(\ell_{\alpha,q}\) as in (1). The \(3\times 3\) cross-correlation matrix \(\mathbf{R}_{\alpha}\) is an unknown symmetric matrix with 1's on the diagonal and with \((p,q)\)th element the correlation parameter \(R_{\alpha,pq}=R_{\alpha,qp}\). Similarly, we model random effects (RE) \(\mathbf{\beta}_{ik}=(\beta_{0ik},\beta_{1ik},\sigma_{ik})^{T}\) as \(\mathbf{\beta}_{ik}|\mathbf{\theta}_{\beta}\sim\text{MGP}(\mathbf{0},\mathbf{C}_{\beta}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}}))\), with mean vector \(\mathbf{0}\) and cross-covariance matrix function \(\mathbf{C}_{\beta}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}})\) with hyperparameters \(\mathbf{\theta}_{\beta}=\{\sigma_{\beta,pp},\nu_{\beta,p},\ell_{\beta,p},\mathbf{R}_{\beta},p\in\{1,2,3\}\}\). RE marginal covariance functions \(C_{\beta,pp}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}})=\sigma_{\beta,pp}^{2}M(h|\nu_{\beta,p},\ell_{\beta,p})\), for \(p=1,\ldots,3\), have RE marginal variances \(\sigma_{\beta,pp}^{2}\), smoothness parameters \(\nu_{\beta,p}\), and lengthscales \(\ell_{\beta,p}\). RE cross-covariance functions \(C_{\beta,pq}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}})=\sigma_{\beta,pq}M(h|\nu_{\beta,pq},\ell_{\beta,pq})\) have RE covariance parameters \(\sigma_{\beta,pq}\equiv\sigma_{pq}(\nu_{\beta,p},\nu_{\beta,q},\ell_{\beta,p},\ell_{\beta,q},\sigma_{\beta,pp},\sigma_{\beta,qq},R_{\beta,pq})\), lengthscales \(\ell_{\beta,pq}\equiv\ell_{pq}(\ell_{\beta,p},\ell_{\beta,q})\), and unknown cross-correlation matrix \(\mathbf{R}_{\beta}\), as defined in (1) and (2). We model the spatially varying visit effects \(\gamma_{ijk}\) with mean 0 GPs \(\gamma_{ijk}|\sigma_{v},\nu_{v},\ell_{v}\sim\text{GP}(0,C_{v}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}}))\), with visit effects covariance function \(C_{v}(\mathbf{s}_{k},\mathbf{s}_{k^{\prime}})=\sigma_{v}^{2}M(h|\nu_{v},\ell_{v})\). ### Priors We use weakly informative priors to keep inferences within a reasonable range and allow computations to proceed satisfactorily. The closest two superpixels can be is 1 unit apart, and the largest separation is \(\sqrt{(7-2)^{2}+(7-2)^{2}}\approx 7\) units. We expect lengthscales to plausibly fall in this range. At the same time, we wish to avoid infinitesimal lengthscales. 
We assign independent and identical inverse gamma priors on all MGP lengthscale parameters \(\ell_{\alpha,1}\), \(\ell_{\alpha,2}\), \(\ell_{\alpha,3}\), \(\ell_{\beta,1}\), \(\ell_{\beta,2}\), \(\ell_{\beta,3}\), \(\ell_{v}\sim IG(2.25,2.5)\) with mean 2 and SD 4. For the MGP SD parameters, we wish to avoid flat priors that could pull the posterior towards extreme values. We assign truncated-normal priors on all MGP SD parameters \(\sigma_{\alpha,11},\sigma_{\beta,11}\sim N^{+}(0,10^{2})\), \(\sigma_{\alpha,22},\sigma_{\alpha,33},\sigma_{\beta,22},\sigma_{\beta,33}, \sigma_{v}\sim N^{+}(0,2.5^{2})\), where \(N^{+}(a,b)\) is a normal distribution restricted to the positive real line with mean \(a\) and variance \(b\). We assign independent normal priors on the global effects \(\mu_{0}\sim N(73,15^{2})\), \(\mu_{1}\sim N(-0.3,0.3^{2})\), \(\mu_{\phi}\sim N(0.7,0.3^{2})\). For the correlation matrices \(\mathbf{R}_{\alpha}\) and \(\mathbf{R}_{\beta}\), we assign marginally uniform priors on the individual correlations derived from the inverse Wishart distribution with \(3\times 3\) identity matrix scale matrix parameter and four degrees of freedom \(IW(\mathbf{I}_{3},4)\)(Barnard, McCulloch and Meng, 2000). When \(\mathbf{\Sigma}\) has a standard inverse-Wishart distribution, we can decompose \(\mathbf{\Sigma}=\mathbf{SRS}\) in terms of the diagonal standard deviation matrix \(\mathbf{S}\) and correlation matrix \(\mathbf{R}\) to obtain the prior for the correlation matrices. We set all MGP smoothness parameters \(\nu_{\alpha,1}\), \(\nu_{\alpha,2}\), \(\nu_{\alpha,3}\), \(\nu_{\beta,1}\), \(\nu_{\beta,2}\), \(\nu_{\beta,3}\), \(\nu_{v}=\frac{1}{2}\) since we obtain measurements from a coarse grid of superpixel locations and expect the processes to be rough. When \(\nu=\frac{1}{2}\), the Matern correlation function reduces to the popular exponential kernel \(M(\mathbf{h}|\frac{1}{2},\ell)=\exp(-\|\mathbf{h}\|/\ell)\). ### Computation and inference For data analysis and visualization, we use the R programming language (R Core Team, 2021) and ggplot2(Wickham, 2016). We use Markov Chain Monte Carlo (MCMC) methods (Metropolis et al., 1953; Robert and Casella, 2005) implemented in nimble v0.13.0 (de Valpine et al., 2017). We specify the model at the observation level and omit observations removed in the data cleaning step. To sample from the posteriors, we use Gibbs sampling and update specific parameters using the automated factor slice sampler or Metropolis-Hastings sampler within Gibbs. We update the global effects \(\mu_{0}\), \(\mu_{1}\) and \(\mu_{\phi}\) using scalar Metropolis-Hastings random walk samplers; the visit effect GP lengthscale \(\ell_{\nu}\) and subject-level residual SD GP SD parameter \(\sigma_{\beta,33}\) together using the automated factor slice sampler (Tibbits et al., 2014); the subject-level random effects \(\beta_{0ik}\), \(\beta_{1ik}\), and \(\sigma_{ik}\) and visit effects \(\gamma_{ijk}\) using multivariate Metropolis-Hastings random walk samplers in sub-blocks. We tested various schemes for sampling sub-blocks of the subject-level random effects and visit effects to improve sampling efficiency (Risser and Turek, 2020). We jointly sample subject-level intercepts, slopes, and the first visit effect in sub-blocks of size 3. We separately sample the subject-level residual SDs in sub-blocks of size 6 and the remaining visit effects in sub-blocks of size 3. 
Each pair of SD and lengthscale parameters from MGPs and GPs was sampled together (e.g., \((\sigma_{\alpha,11},\ell_{\alpha,1})\)), except for the subject-level residual SDs and visit effects, where opposites were paired together, \((\sigma_{\beta,33},\ell_{v})\) and \((\sigma_{v},\ell_{\beta,3})\). We run all models with 9 chains of 250,000 iterations after a burn-in of 30,000 and a thin of 100, for a total of 19,800 posterior samples. Following the recommendation of Vehtari et al. (2021) for assessing convergence, the bulk and tail effective sample sizes were all greater than 100 per chain and the potential scale reduction factors \(\widehat{R}\) were all less than 1.01. Visual assessment of model convergence shows satisfactory results. We show efficiency per iteration plots of the 7 parameters with the largest \(\widehat{R}\) in Appendix Figure A1 and summarize convergence diagnostics in Appendix Table A1. ### Model comparison We fit the SHREVE model to the AGPS data and compare model fit of the SHREVE model to 7 nested models and to SLR fit separately for each subject and superpixel location. The 7 submodels were SHREVE omitting (a) the population-level residual SD process \(\phi_{k}\), (b) the subject-specific residual SD process \(\sigma_{ik}\), (c) the spatially varying visit effects \(\gamma_{ijk}\), and all combinations (ab), (ac), (bc), and (abc). We call the SHREVE model without visit effects the spatially varying hierarchical random effects (SHRE) model. For SLR, we run a separate model for each eye and superpixel using flat priors with results equivalent to classical least squares. We compare models with the Watanabe-Akaike (or widely applicable) information criterion (WAIC) (Watanabe and Opper, 2010; Gelman et al., 2013) and approximate leave-one-out cross-validation (LOO) using Pareto Smoothed Importance Sampling (Vehtari, Gelman and Gabry, 2017). We report WAIC \[\text{WAIC}=-2\left[\sum_{i=1}^{n}\sum_{j=1}^{J_{i}}\sum_{k=1}^{K}\log\left(\frac{1}{S}\sum_{s=1}^{S}p(y_{ijk}|\theta^{s})\right)-\sum_{i=1}^{n}\sum_{j=1}^{J_{i}}\sum_{k=1}^{K}V_{s=1}^{S}\left(\log p(y_{ijk}|\theta^{s})\right)\right]\] summing over all data points \(y_{ijk}\), where \(p(y_{ijk}|\theta)\) is the pointwise predictive density, \(\theta\) are the model parameters, superscript \(s\) denotes parameters drawn at the \(s\)th iteration for \(s=1,\ldots,S\) posterior samples, and \(V_{s=1}^{S}\) denotes the sample variance over \(S\) posterior samples. We report approximate LOO \[\text{LOO}=-2\sum_{i=1}^{n}\sum_{j=1}^{J_{i}}\sum_{k=1}^{K}\log\left(\frac{\sum_{s=1}^{S}w_{ijk}^{s}p(y_{ijk}|\theta^{s})}{\sum_{s=1}^{S}w_{ijk}^{s}}\right)\] where \(w_{ijk}^{s}\), \(s=1,\ldots,S\), is a vector of importance weights for data point \(y_{ijk}\) at iteration \(s\), and \(w_{ijk}^{s}=\left(p(y_{ijk}|\theta^{s})\right)^{-1}\) except for extreme weights. Approximate LOO estimates the out-of-sample predictive accuracy of the model (Stone, 1977). Lower WAIC and LOO indicate better fit. To assess predictive accuracy of the proposed model, we compare models on mean squared prediction error \[\text{MSPE}=\frac{\sum_{s=1}^{S}\sum_{i=1}^{n}\sum_{k\in\mathcal{K}_{i}}(y_{iJ_{i}k}-\hat{y}_{iJ_{i}k}^{s})^{2}}{SN_{pred}}\] for \(s=1,\ldots,S\) posterior MCMC samples, \(i=1,\ldots,n\) subjects, \(k\in\mathcal{K}_{i}\) held-out superpixels for subject \(i\), held-out observations \(y_{iJ_{i}k}\), and predicted observations for each posterior sample \(\hat{y}^{s}_{iJ_{i}k}\), of \(N_{pred}\) total held-out observations after fitting the models. 
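For concreteness, below is a minimal sketch (illustrative only, not the analysis code) of computing WAIC from an \(S\times N\) matrix of pointwise log-densities \(\log p(y_{ijk}|\theta^{s})\), with one row per posterior draw and one column per observation; the LOO and MSPE computations follow the same pattern.

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """WAIC from an (S, N) array of pointwise log-likelihoods: S draws by N observations."""
    S = log_lik.shape[0]
    # log pointwise predictive density: log( (1/S) * sum_s p(y | theta^s) )
    lppd = logsumexp(log_lik, axis=0) - np.log(S)
    # penalty term: sample variance of log p(y | theta^s) over draws
    p_waic = np.var(log_lik, axis=0, ddof=1)
    return -2.0 * np.sum(lppd - p_waic)

# Toy usage with simulated draws; in the analysis each column corresponds to one y_ijk.
rng = np.random.default_rng(0)
print(waic(rng.normal(loc=-3.0, scale=0.1, size=(200, 500))))
```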
We randomly sample and hold out 7 observations \(y_{iJ_{i}k}\), or approximately 20%, at the last visit for each of 110 subjects and 6 observations for one subject because they only had 32 observations available at the last visit, for a total of \(N_{pred}=111\times 7-1=776\) observations, and fit models with the remaining observations. Not all observations are available at all superpixels because we remove some observations in the data cleaning step. For the SHREVE models, we define a predicted observation at each posterior sample \(s\) as \[\hat{y}^{s}_{iJ_{i}k}=\alpha^{s}_{0k}+\alpha^{s}_{1k}t_{iJ_{i}}+\beta^{s}_{0ik} +\beta^{s}_{1ik}t_{iJ_{i}}+\gamma_{iJ_{i}k}, \tag{3}\] where \(t_{iJ_{i}}\) is the time observed and \(\gamma_{iJ_{i}k}\) is the visit effect for the held out observation at the \(i\)th subject's last visit. For the SHRE models, there is no \(\gamma_{iJ_{i}k}\) visit effect term in (3). ## 4 Advanced Glaucoma Progression Study After identifying and removing approximately 0.5% of the data as outliers, we analyze 29,179 observations from 111 subjects over 36 superpixels. Table 1 gives the WAIC, LOO, and MSPE of models considered. The SHREVE model has the lowest WAIC and LOO. Comparing pairs of SHREVE and SHRE models with and without the (a) population-level residual SD process and (b) subject-level residual SD process, omitting (a) increases WAIC (LOO) by up to 421 (238) while omitting (b) increases WAIC (LOO) by up to 4,964 (4,573). Omitting visit effects increases WAIC (LOO) by up to 18,361 (12,899). SLR has lower WAIC than the two SHRE models without (b), but SLR still has higher LOO. Having subject-specific residual SDs is more important for models without a visit effect component, as the difference in WAIC (LOO) between SHRE and SHRE-(b) is larger by 1553 (918) than the difference between SHREVE and SHREVE-(b). For predictions, the MSPE for SLR is 6.0 times that of the SHREVE model (39.7 vs. 6.6 \(\mu\)m\({}^{2}\)) and 5.5 times that of the SHRE model (39.7 vs. 7.2 \(\mu\)m\({}^{2}\)). Among the hierarchical models, the biggest distinction in MSPE is between models with and without visit effects. Comparing pairs of SHREVE and SHRE models, omitting the subject-level residual SD process consistently increases the MSPE, while omitting the population-level residual SD process has a negligible effect on MSPE. 
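As a sketch of the prediction step, a held-out observation can be predicted per posterior draw via (3) and the predictions scored by MSPE roughly as follows; the array names are placeholders, and for the SHRE variants the visit effect term is simply dropped.

```python
import numpy as np

def predict_heldout(alpha0, alpha1, beta0, beta1, gamma, t):
    """Equation (3) for one held-out (subject, superpixel) pair: each argument is a
    length-S vector of posterior draws, and t is the held-out visit time in years."""
    return alpha0 + alpha1 * t + beta0 + beta1 * t + gamma

def mspe(y_heldout, y_pred):
    """MSPE over S draws and N held-out points: y_heldout is length N, y_pred is (S, N)."""
    return np.mean((y_pred - y_heldout[None, :]) ** 2)
```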
Figure 7 plots profiles and posterior mean fitted lines from the SHREVE model and SLR for one subject for superpixels that had the last (7th) observation \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Model** & **Joint** & **Visit** & **Superpixel** & **Subject** & **WAIC** & **LOO** & **MSPE** \\ & **Model** & **Effects** & **Residual SD** & **Residual SD** & & & (\(\mu\)m\({}^{2}\)) \\ \hline SHREVE & ✓ & ✓ & ✓ & ✓ & ✓ & **107,581.6** & **113,323.1** & 6.6 \\ SHREVE-(a) & ✓ & ✓ & ✗ & ✓ & 108,002.2 & 113,560.7 & **6.5** \\ SHREVE-(b) & ✓ & ✓ & ✓ & ✗ & 110,992.3 & 116,978.1 & 6.8 \\ SHREVE-(ab) & ✓ & ✓ & ✗ & ✗ & 113,238.3 & 118,647.5 & 6.9 \\ SHRE & ✓ & ✗ & ✓ & ✓ & 124,389.5 & 125,304.7 & 7.2 \\ SHRE-(a) & ✓ & ✗ & ✗ & ✓ & 124,468.8 & 125,461.2 & 7.1 \\ SHRE-(b) & ✓ & ✗ & ✓ & ✗ & 129,353.2 & 129,877.3 & 7.5 \\ SHRE-(ab) & ✓ & ✗ & ✗ & ✗ & 130,188.4 & 130,732.1 & 7.5 \\ SLR & ✗ & ✗ & ✗ & ✓ & 128,870.2 & 132,916.3 & 39.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Model fit comparison with widely applicable information criterion (WAIC), approximate leave-one-out cross-validation with Pareto Smoothed Importance Sampling (LOO), and mean squared prediction error (MSPE) of predictions. For predictions, we hold out 7 randomly sampled observations \(y_{iJ_{i}k}\) at the last visit of each of 110 AGPS subjects and 6 observations from one subject. Models with visit effects perform better than models without visit effects. SLR performs noticeably worse compared to the hierarchical models. The smallest WAIC, LOO, and MSPE values are bolded. held out. The SHREVE model better estimates slopes for noisy superpixels like 2.3 and 5.6. All predictions of the last visit in the 6 superpixels by the SHREVE model are closer to the GCC observed at \(t_{ij}=3.6\) than those by SLR. Table 2 gives posterior means and 95% credible intervals (CrI) for parameters of interest from the SHREVE and SHRE models. The SHREVE global log residual SD parameter has a smaller posterior mean than SHRE (0.35 vs 0.66 \(\mu\)m), although CrIs overlap; global intercepts and slopes have similar posterior means and CrIs. The SHREVE subject-level slopes and log residual SDs MGP lengthscales are shorter than for the SHRE model, implying that the spatial correlation of subject-level slopes and log residual SDs decays faster after including visit effects, allowing random effects to vary more across the macula. The SHREVE subject-level MGP SD parameter is larger than from SHRE, meaning the variability of subject-specific residual SDs is higher within a superpixel for the SHREVE model. All other subject-level MGP parameters are similar between the models. Appendix Table 2 gives posterior means and 95% CrIs for the population-level MGP parameters. The population-level MGP parameters are similar between the two models. Figure 8 plots spatial correlations \(M(h)\) as a function of distance \(h\) between superpixels for the SHREVE and SHRE models. At 4.2 units distance, the spatial correlation of subject-specific slopes drops to \(\exp(-1)\approx 0.37\) for the SHREVE model but is \(\exp(-0.62)\approx 0.54\) for the SHRE model. At 1.9 units distance, the spatial correlation of subject-specific log residual SDs is 0.37 for the SHREVE model but around 0.60 for the SHRE model. The shorter lengthscales in the SHREVE model result in reduced correlation at similar distances between superpixels. Figure 9 presents heatmaps of the posterior means and SDs of the log residual SDs from the SHREVE and SHRE models. 
For most superpixels, the SHREVE model uniformly reduces log residual SDs by approximately 0.5 compared to the SHRE model. The four central superpixels (4.4, 4.5, 5.4, and 5.5) and superpixels in the 7th column have higher log residual Figure 7: Comparison of predicted observations and model fit from the SHREVE model and simple linear regression (SLR) after holding out the last observation at 3.6 years follow-up of this subject. The gray line plots the raw data, the red line is the posterior mean fitted line from the SHREVE model without adding in the visit effects, and the blue line shows the fitted line from SLR. The SHREVE model is able to better estimate slopes and predict the last observation in noisy superpixels like 2.3 and 5.6 than SLR. SDs and have smaller differences in log residual SDs between the models. SHREVE breaks down measurement error into two components, spatially correlated errors due to the imaging process and general measurement noise. By accounting for visit effects, we reduce residual variance, leading to substantial improvement in model fit. We compare subject-specific slopes estimated from the SHREVE model to those estimated using SLR. We declare a slope to be significantly negative or positive when the upper bound or lower bound of the 95% CrI is less than or greater than 0, respectively. Across the 3,990 \begin{table} \begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{SHREVE Model} & \multicolumn{2}{c}{SHRE Model} \\ \cline{2-6} Parameters & Symbols & Mean & 95\% CrI & Mean & 95\% CrI \\ \hline \multicolumn{6}{c}{Global Parameters} \\ Intercept & \(\mu_{0}\) & 70.02 & (54.47, 84.21) & 71.22 & (56.83, 84.80) \\ Slope & \(\mu_{1}\) & -0.30 & (-0.59, 0.02) & -0.30 & (-0.60, 0.04) \\ Log Residual SD & \(\mu_{\phi}\) & 0.35 & (0.05, 0.86) & 0.66 & (0.39, 0.97) \\ \hline \multicolumn{6}{c}{Subject-Level MGP SD Parameters} \\ Intercept & \(\sigma_{\beta,11}\) & 16.17 & (15.11, 17.39) & 16.33 & (15.24, 17.57) \\ Slope & \(\sigma_{\beta,22}\) & 0.94 & (0.87, 1.03) & 1.00 & (0.92, 1.09) \\ Log Residual SD & \(\sigma_{\beta,33}\) & 0.45 & (0.42, 0.49) & 0.34 & (0.32, 0.37) \\ \hline \multicolumn{6}{c}{Subject-Level MGP Lengthscale Parameters} \\ Intercept & \(\ell_{\beta,1}\) & 5.42 & (4.67, 6.32) & 5.58 & (4.80, 6.51) \\ Slope & \(\ell_{\beta,2}\) & 4.20 & (3.41, 5.16) & 6.79 & (5.48, 8.46) \\ Log Residual SDs & \(\ell_{\beta,3}\) & 1.87 & (1.57, 2.24) & 3.71 & (3.02, 4.61) \\ \hline \multicolumn{6}{c}{Subject-Level MGP Correlation Parameters} \\ Intercepts/Slopes & \(\rho_{\beta,12}\) & -0.14 & (-0.19, -0.10) & -0.13 & (-0.18, -0.08) \\ Intercepts/Log Residual SDs & \(\rho_{\beta,13}\) & 0.12 & (0.08, 0.16) & 0.17 & (0.11, 0.22) \\ Slopes/Log Residual SDs & \(\rho_{\beta,23}\) & -0.21 & (-0.28, -0.14) & -0.24 & (-0.31, -0.17) \\ \hline \multicolumn{6}{c}{Visit Effect Parameters} \\ Lengthscale & \(\ell_{v}\) & 3.54 & (3.07, 4.10) & & \\ SD & \(\sigma_{v}\) & 1.42 & (1.37, 1.48) & & \\ \hline \hline \end{tabular} \end{table} Table 2: Posterior mean and 95% credible interval (CrI) for global parameters and subject-level multivariate Gaussian process (MGP) parameters comparing the SHREVE and SHRE models. Figure 8: Posterior mean (line) and 95% pointwise credible intervals (colored bands) of correlation as a function of distance \(h\) between superpixels for subject-specific intercepts, slopes, and log residual SDs from the SHREVE (Visit Effects) and SHRE (No Visit Effects) models. 
The correlations decay faster in the SHREVE model with shorter lengthscales for slopes and log residual SDs. The dashed line indicates where the correlation is \(\exp(-1)\) and the distance between superpixels is equal to the lengthscale in the exponential kernel. subject-superpixel profiles, the SHREVE model detects a higher proportion of significant negative slopes (21.4% vs 18.0%) and lower proportion of significant positive slopes (3.1% vs 4.3%) as compared to SLR. Figure 10 shows the proportion of significant negative slopes by superpixel, and Appendix Figure B1 shows the proportion of significant positive slopes by superpixel. The SHREVE model detects 10% more significant negative slopes in 6 of 36 superpixels and 5% less significant positive slopes in 5 of 36 superpixels. Because glaucoma is an irreversible disease, GCC thicknesses are not expected to increase over time. These findings indicate SHREVE is more sensitive in detecting worsening slopes and possibly reduces false positive rates as compared to SLR. ## 5 Discussion We motivate and develop a Bayesian hierarchical model with population- and subject-level spatially varying coefficients and show that including visit effects reduces error in predicting future observations and greatly improves model fit. In current practice, ophthalmologists use SLR to assess slopes for individual subject-superpixel profiles, using information from only a single subject and location at a time. To better estimate subject-specific slopes, we include information from the whole cohort; explicitly model the correlations between subject-specific intercepts, slopes, and log residual SDs; allow population parameters and random effects to be spatially correlated; and account for visit-specific spatially correlated errors. Using information from the entire cohort, our proposed model leads to decreased noise in estimating subject-specific slopes, having smaller posterior SDs in 79% of subject-superpixel slopes as compared to SLR. There are many sources of error in obtaining the GCC thickness measurements from OCT scans. By separating measurement errors into visit-specific spatially correlated errors and other measurement noise, we are better able to detect eye-superpixels where GCC thicknesses are progressing most rapidly. Our approach will help identify progression of glaucoma for more individualized treatment plans. Other methods for modeling spatial variation over discrete locations include CAR models, where random effect distributions are conditional on some neighboring values (Betz-Stablein Figure 9: Heatmap of the log residual standard deviations (SD) comparing the SHREVE (Visit Effects) and SHRE (No Visit Effects) models. The values shown are the posterior mean (posterior SD) across the 36 superpixels. The log residual SDs from the SHREVE model are uniformly reduced across all superpixels compared to those from the SHRE model. The white dot is the fovea. et al., 2013; Berchuck, Mwanza and Warren, 2019). Instead, we model spatial correlation between all locations with GPs, where the spatial correlation depends only on the distance between any two locations. In addition to our a priori specification of \(\nu=\frac{1}{2}\), we fit our model using Matern correlation functions with \(\nu=\frac{3}{2}\), \(\nu=\frac{5}{2}\), and \(\nu=\infty\)(squared exponential kernel, Williams and Rasmussen, 2006). These early exploratory analyses had difficulty in MCMC convergence. 
One limitation of using GPs is the increasing difficulty in fitting when the number of locations is large. Fitting GP models involves matrix inversion which increases computational complexity in cubic order with the number of locations. When the number of locations is too large, approximations for the processes could be considered (Banerjee et al., 2008). Nonetheless, we expect these model developments will benefit ophthalmologists as they seek to better estimate subject-specific slopes from structural thickness measurements. We developed the current model specifically for GCC macular thickness measurements. Of further interest is to simultaneously model all the inner retinal layers that make up GCC to identify which sublayers may be worsening faster than others while accounting for between-layer correlations. Future extensions of the SHREVE model could include working with multivariate outcomes, which may pose additional computational challenges. ## Appendix A Convergence Assessment of the Shreve Model We provide more details on convergence of the SHREVE model as mentioned in Section 3.5. Following Vehtari et al. (2021)'s recommendation on assessing convergence, we monitored the potential scale reduction factor \(\widehat{R}\) and the bulk and tail effective sample sizes (ESS) for all model parameters. We found \(\widehat{R}\) was less than 1.01 and bulk and tail ESS were all greater than 100 per chain for all parameters, leading us to believe that our MCMC has Figure 10: Bar charts of the proportion of significant negative slopes detected by the SHREVE model and simple linear regression (SLR) across the 36 superpixels. The difference (\(\Delta=\text{SHREVE}-\text{SLR}\)) in proportion is labeled at the top of each subplot. Across all locations, the SHREVE model detects a higher proportion of significant negative slopes (21.4% vs 18.0%) than SLR. converged satisfactorily. Figure A1 shows the efficiency per iteration of the bulk ESS and potential reduction factor \(\widehat{R}\) of 7 model parameters with the largest \(\widehat{R}\) in the SHREVE model. The bulk ESS increases linearly with increasing iterations indicating that the relative efficiency is constant over different numbers of draws. \(\widehat{R}\) decreases exponentially with increasing iterations and are all less than 1.01. Table A1 gives the sampling efficiency of all model parameters in terms of bulk and tail ESS and \(\widehat{R}\). ## Appendix B Additional results for AGPS analysis We provide additional results mentioned in Section 4. Table B1 presents the posterior mean and 95% CrIs for population-level MGP parameters for the SHREVE and SHRE models. All population-level MGP parameter posterior means and 95% CrIs are similar between the models. Figure B1 plots the proportions of significant positive slopes for the SHREVE model and SLR in each of the 36 superpixels. Across all locations, the SHREVE model detects a lower proportion of significant positive slopes (3.1% vs 4.3%) than SLR. **Funding.** This work was supported by an NIH R01 grant (R01-EY029792), an unrestricted Departmental Grant from Research to Prevent Blindness, and an unrestricted grant from Heidelberg Engineering. AJH was supported by NIH grant K25 AI153816, NSF grant DMS 2152774, and a generous gift from the Karen Toffler Charitable Trust.
2301.10330
Off-Policy Evaluation for Action-Dependent Non-Stationary Environments
Methods for sequential decision-making are often built upon a foundational assumption that the underlying decision process is stationary. This limits the application of such methods because real-world problems are often subject to changes due to external factors (passive non-stationarity), changes induced by interactions with the system itself (active non-stationarity), or both (hybrid non-stationarity). In this work, we take the first steps towards the fundamental challenge of on-policy and off-policy evaluation amidst structured changes due to active, passive, or hybrid non-stationarity. Towards this goal, we make a higher-order stationarity assumption such that non-stationarity results in changes over time, but the way changes happen is fixed. We propose, OPEN, an algorithm that uses a double application of counterfactual reasoning and a novel importance-weighted instrument-variable regression to obtain both a lower bias and a lower variance estimate of the structure in the changes of a policy's past performances. Finally, we show promising results on how OPEN can be used to predict future performances for several domains inspired by real-world applications that exhibit non-stationarity.
Yash Chandak, Shiv Shankar, Nathaniel D. Bastian, Bruno Castro da Silva, Emma Brunskil, Philip S. Thomas
2023-01-24T22:21:16Z
http://arxiv.org/abs/2301.10330v1
# Off-Policy Evaluation for Action-Dependent Non-Stationary Environments ###### Abstract Methods for sequential decision-making are often built upon a foundational assumption that the underlying decision process is stationary. This limits the application of such methods because real-world problems are often subject to changes due to external factors (_passive_ non-stationarity), changes induced by interactions with the system itself (_active_ non-stationarity), or both (_hybrid_ non-stationarity). In this work, we take the first steps towards the fundamental challenge of on-policy and off-policy evaluation amidst structured changes due to active, passive, or hybrid non-stationarity. Towards this goal, we make a _higher-order stationarity_ assumption such that non-stationarity results in changes over time, but the way changes happen is fixed. We propose, OPEN, an algorithm that uses a double application of counterfactual reasoning and a novel importance-weighted instrument-variable regression to obtain both a lower bias and a lower variance estimate of the structure in the changes of a policy's past performances. Finally, we show promising results on how OPEN can be used to predict future performances for several domains inspired by real-world applications that exhibit non-stationarity. ## 1 Introduction Methods for sequential decision making are often built upon a foundational assumption that the underlying decision process is stationary (Sutton and Barto, 2018). While this assumption was a cornerstone when laying the theoretical foundations of the field, and while is often reasonable, it is seldom true in practice and can be unreasonable (Dulac-Arnold et al., 2019). Instead, real-world problems are subject to non-stationarity that can be broadly classified as (a) _Passive:_ where the changes to the system are induced only by external (exogenous) factors, (b) _Active:_ where the changes result due to the agent's past interactions with the system, and (c) _Hybrid:_ where both passive and active changes can occur together (Khetarpal et al., 2020). There are many applications that are subject to active, passive, or hybrid non-stationarity, and where the stationarity assumption may be unreasonable. Consider methods for automated healthcare where we would like to use the data collected over past decades to find better treatment policies. In such cases, not only might there have been passive changes due to healthcare infrastructure changing over time, but active changes might also occur because of public health continuously evolving based on the treatments made available in the past, thereby resulting in hybrid non-stationarity. Similar to automated healthcare, other applications like online education, product recommendations, and in fact almost all human-computer interaction systems need to not only account for the continually drifting behavior of the user demographic, but also how the preferences of users may change due to interactions with the system (Theocharous et al., 2020). Even social media platforms need to account for the partisan bias of their users that change due to both external political developments and increased self-validation resulting from previous posts/ads suggested by the recommender system itself (Cinelli et al., 2021; Gillani et al., 2018). Similarly, motors in a robot suffer wear and tear over time not only based on natural corrosion but also on how vigorous past actions were. 
However, conventional off-policy evaluation methods (Precup, 2000; Jiang and Li, 2015; Xie et al., 2019) predominantly focus on the stationary setting. These methods assume availability of either (a) _resetting assumption_ to sample multiple sequences of interactions from a stationary environment with a fixed starting state distribution (i.e., episodic setting), or (b) _ergodicity assumption_ such that interactions can be sampled from a steady-state/stationary distribution (i.e., continuing setting). For the problems of our interest, methods based on these assumptions may not be viable. For e.g., in automated healthcare, we have a single long history for the evolution of public health, which is neither in a steady state distribution nor can we reset and go back in time to sample another history of interactions. As discussed earlier, because of non-stationarity the transition dynamics and reward function in the future can be different from the ones in the past, and these changes might also be dependent on past interactions. In such cases, how do we even address the fundamental challenge of _off-policy evaluation_, i.e., using data from past interactions to estimate the performance of a new policy in the future? Unfortunately, if the underlying changes are arbitrary, even amidst only passive non-stationarity it may not be possible to provide non-vacuous predictions of a policy's future performance (Chandak et al., 2020). Thankfully, for many real-world applications there might be (unknown) structure in the underlying changes. In such cases, can the _effect_ of the underlying changes on a policy's performance be inferred, _without_ requiring estimation of the underlying model/process? Prior work has only shown that this is possible in the passive setting. This raises the question that we aim to answer: _How can one provide a unified procedure for (off) policy evaluation amidst active, passive, or hybrid non-stationarity, when the underlying changes are structured?_ **Contributions:** To the best of our knowledge, our work presents the first steps towards addressing the fundamental challenge of off-policy evaluation amidst structured changes due to active or hybrid non-stationarity. Towards this goal, we make a _higher-order stationarity_ assumption, under which the non-stationarity can result in changes over time, but the way changes happen is fixed. Under this assumption, we propose a model-free method that can infer the _effect_ of the underlying non-stationarity on the past performances and use that to predict the future performances for a given policy. We call the proposed method OPEN: off-policy evaluation for non-stationary domains. On domains inspired by real-world applications, we show that OPEN often provides significantly better results not only in the presence of active and hybrid non-stationarity, but also for the passive setting where it even outperforms previous methods designed to handle only passive non-stationarity. OPEN primarily relies upon two key insights: **(a)** For active/hybrid non-stationarity, as the underlying changes may depend on past interactions, the structure in the changes observed when executing the data collection policy can be different than if one were to execute the evaluation policy. To address this challenge, OPEN makes uses counterfactual reasoning twice and permits reduction of this off-policy evaluation problem to an auto-regression based forecasting problem. 
**(b)** Despite reduction to a more familiar auto-regression problem, in this setting naive least-squares based estimates of parameters for auto-regression suffers from high variance and can even be asymptotically biased. Finally, to address this challenge, OPEN uses a novel importance-weighted instrument-variable (auto-)regression technique to obtain asymptotically consistent and lower variance parameter estimates. ## 2 Related Work Off-policy evaluation (OPE) is an important aspect of reinforcement learning (Precup, 2000; Thomas et al., 2015; Sutton and Barto, 2018) and various techniques have been developed to construct efficient estimators for OPE (Jiang and Li, 2015; Thomas and Brunskill, 2016; Munos et al., 2016; Harutyunyan et al., 2016; Espeholt et al., 2018; Xie et al., 2019). However, these work focus on the stationary setting. Similarly, there are various methods for tackling non-stationarity in the bandit setting (Moulines, 2008; Besbes et al., 2014; Seznec et al., 2018; Wang et al., 2019). In contrast, the proposed work focuses on methods for sequential decision making. Literature on off-policy evaluation amidst non-stationarity for sequential decision making is sparse. Perhaps the most closely related works are by Thomas et al. (2017); Chandak et al. (2020); Xie et al. (2020); Poiani et al. (2021); Liotet et al. (2021). While these methods present an important stepping stone, such methods are for passive non-stationarity and, as we discuss using the toy example in Figure 1, may result in undesired outcomes if used as-is in real-world settings that are subject to active or hybrid non-stationarity. Here, methods for tackling _passive_ non-stationarity will track the best policy under the assumption that the changes due to damages are because of external factors and would fail to attribute the cause of damage to the agent's decisions. Therefore, as on any given day 'running' will always be better, every day these methods will prefer 'running' over 'walking' and thus _aggravate_ the damage. Since the outcome on each day is dependent on decisions made during previous days this leads to active non-stationarity, where 'walking' is better in the long run. Finding a better policy first requires a method to evaluate a policy's (future) performance, which is the focus of this work. Notice that the above problem can also be viewed as a task with effectively a _single_ lifelong episode. However, as we discuss later in Section 4, approaches such as modeling the problem as a large stationary POMDP or as a continuing average-reward MDP with a single episode may not be viable. Further, non-stationarity can also be observed in multi-agent systems and games due to different agents/players interacting with the system. However, often the goal in these other areas is to search for (Nash) equilibria, which may not even exist under hybrid non-stationarity. Non-stationarity may also result due to artifacts of the learning algorithm even when the problem is stationary. While relevant, these other research areas are distinct from our setting of interest and we discuss them and others in more detail in Appendix B. ## 3 Non-Stationary Decision Processes We build upon the formulation used by past work (Xie et al., 2020; Chandak et al., 2020) and consider that the agent interacts with a lifelong sequence of partially observable Markov decision processes (POMDPs), \((M_{i})_{i=1}^{\infty}\). 
However, unlike prior problem formulations, we account for active and hybrid non-stationarity by considering POMDP \(M_{i+1}\) to be dependent on _both_ on the POMDP \(M_{i}\) and the decisions made by the agent while interacting with \(M_{i}\). We provide a control graph for this setup in Figure 2. For simplicity of presentation, we will often ignore the dependency of \(M_{i+1}\) on \(M_{i-k}\) for \(k>0\), although our results can be extended for settings with \(k>0\). **Notation:** Let \(\mathcal{M}\) be a finite set of POMDPs. Each POMDP \(M_{i}\in\mathcal{M}\) is a tuple \((\mathcal{O},\mathcal{S},\mathcal{A},\Omega_{i},P_{i},R_{i},\mu_{i})\), where \(\mathcal{O}\) is the set of observations, \(\mathcal{S}\) is the set of states, and \(\mathcal{A}\) is the set of actions, which are the same for all the POMDPs in \(\mathcal{M}\). For simplicity of notation, we assume \(\mathcal{M},\mathcal{S},\mathcal{O},\mathcal{A}\) are finite sets, although our results can be extended to settings where these sets are infinite or continuous. Let \(\Omega_{i}:\mathcal{S}\times\mathcal{O}\rightarrow[0,1]\) be the observation function, \(P_{i}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) be the transition function, \(\mu_{i}:\mathcal{S}\rightarrow[0,1]\) be the starting state distribution, and \(R_{i}:\mathcal{S}\times\mathcal{A}\rightarrow[-R_{\max},R_{\max}]\) be the reward function with \(0\leq R_{\max}<\infty\). Let \(\pi:\mathcal{O}\times\mathcal{A}\rightarrow[0,1]\) be any policy and \(\Pi\) be the set of all policies. Let \(H_{i}\coloneqq(O_{i}^{t},A_{i}^{t},R_{i}^{t})_{t=1}^{T}\) be a sequence of at most \(T\) interactions in \(M_{i}\), where \(O_{i}^{t},A_{i}^{t},R_{i}^{t}\) are the random variables corresponding to the observation, action, and reward at the step \(t\). Let \(G_{i}\coloneqq\sum_{t=1}^{T}R_{i}^{t}\) be an observed return and \(J_{i}(\pi)\coloneqq\mathbb{E}_{\pi}[G_{i}|M_{i}]\) be the performance of \(\pi\) on \(M_{i}\). Let \(\mathcal{H}\) be the set of possible interaction sequences, and finally let \(\mathcal{T}:\mathcal{M}\times\mathcal{H}\times\mathcal{M}\rightarrow[0,1]\) be the transition function that governs the non-stationarity in the POMDPs. That is, \(\mathcal{T}(m,h,m^{\prime})=\Pr(M_{i+1}{=}m^{\prime}|M_{i}{=}m,H_{i}{=}h)\). Figure 1: RoboToy domain. Figure 2**(Left)** depicts the control graph for a stationary POMDP, where each column corresponds to one time step. Here, _multiple, independent_ episodes from the _same_ POMDP can be resampled. **(Right)** Control graph that we consider for a non-stationary decision process, where each column corresponds to one episode. Here, the agent interacts with a _single_ sequence of related POMDPs \((M_{i})_{i=1}^{n}\). Absence or presence of the red arrows indicates whether the change from \(M_{i}\) to \(M_{i+1}\) is independent of the decisions in \(M_{i}\) (passive non-stationarity) or not (active non-stationarity). **Problem Statement:** We look at the fundamental problem of evaluating the performance of a policy \(\pi\) in the presence of non-stationarity. Let \((H_{i})_{i=1}^{n}\) be the data collected in the past by interacting using policies \((\beta_{i})_{i=1}^{n}\). Let \(D_{n}\) be the dataset consisting of \((H_{i})_{i=1}^{n}\) and the probabilities of the actions taken by \((\beta_{i})_{i=1}^{n}\). 
Given \(D_{n}\), we aim to evaluate the expected future performance of \(\pi\) if it is deployed for the _next_\(L\) episodes (each a different POMDP), that is \(\mathscr{J}(\pi)\coloneqq\mathbb{E}_{\pi}\left[\sum_{k=n+1}^{n+L}J_{k}(\pi) \middle|(H_{i})_{i=1}^{n}\right].\) We call it the _on-policy_ setting if \(\forall i,\beta_{i}=\pi\), and the _off-policy_ setting otherwise. Notice that even in the on-policy setting, naively aggregating observed performances from \((H_{i})_{i=1}^{n}\) may not be indicative of \(\mathscr{J}(\pi)\) as \(M_{k}\) for \(k>n\) may be different than \(M\in(M_{i})_{i=1}^{n}\) due to non-stationarity. ## 4 Understanding Structural Assumptions A careful reader would have observed that instead of considering interactions with a sequence of POMDPs \((M_{i})_{i=1}^{n}\) that are each dependent on the past POMDPs and decisions, an equivalent setup might have been to consider a 'chained' sequence of interactions \((H_{1},H_{2},...,H_{n})\) as a _single_ episode in a'mega' POMDP comprised of all \(M\in\mathcal{M}\). Consequently, \(\mathscr{J}(\pi)\) would correspond to the expected future return given \((H_{i})_{i=1}^{n}\). Tackling this single long sequence of interactions using the continuing/average-reward setting is not generally viable because methods for these settings rely on an ergodicity assumption (which implies that all states can always be revisited) that may not hold in the presence of non-stationarity. For instance, in the earlier example of automated healthcare, it is not possible to revisit past years. To address the above challenge, we propose introducing a different structural assumption. Particularly, framing the problem as a sequence of POMDPs allows us to split the single sequence of interactions into multiple (dependent) fragments, with additional structure linking together the fragments. Specifically, we make the following intuitive assumption. **Assumption 1**.: \(\forall m\in\mathcal{M}\) _such that the performance \(J(\pi)\) associated with \(m\) is \(j\),_ \[\forall\pi,\pi^{\prime}\in\Pi^{2},\forall i,\:\Pr(J_{i+1}(\pi)=j_{i+1}|M_{i}=m ;\pi^{\prime})=\Pr(J_{i+1}(\pi)=j_{i+1}|J_{i}(\pi)=j;\pi^{\prime}). \tag{1}\] Assumption 1 characterizes the probability that \(\pi\)'s performance will be \(j_{i+1}\) in the \(i+1\)th episode when the policy \(\pi^{\prime}\) is executed in the \(i\)th episode. To understand Assumption 1 intuitively, consider a'meta-transition' function that characterizes \(\Pr(J_{i+1}(\pi)|J_{i}(\pi),\pi^{\prime})\) similar to how the standard transition function in an MDP characterizes \(\Pr(S_{t+1}|S_{t},A_{t})\). While the underlying changes actually happen via \(\mathcal{T}\), Assumption 1 imposes the following two conditions: **(a)** A _higher-order stationarity_ condition on the meta-transitions under which non-stationarity can result in changes over time, but _the way the changes happen is fixed_, and **(b)** Knowing the past performance(s) of a policy \(\pi\) provides _sufficient_ information for the meta-transition function to model how the performance will change upon executing any (possibly different) policy \(\pi^{\prime}\). For example, in the earlier toy robot domain, given the current performance there exists an (unknown) oracle that can predict the performance for the next day if the robot decides to 'run/'walk'. 
Assumption 1 is beneficial as it implicitly captures the effect of both the underlying passive and active non-stationarity by modeling the conditional distribution of the performance \(J_{i+1}(\pi)\) given \(J_{i}(\pi)\), when executing any (different) policy \(\pi^{\prime}\). At the same time, notice that it generalizes **(a)** the stationary setting, where \(\forall\pi\in\Pi,\:\forall i>0,J_{i+1}(\pi)=J_{i}(\pi)\), and **(b)** only passive non-stationarity, which is a special case of (1) wherein \(\pi^{\prime}\) does not influence the outcome, i.e.,

\[\forall\pi_{a},\pi_{b}\in\Pi^{2},\;\forall i>0,\;\Pr(J_{i+1}(\pi)=j_{i+1}|J_{i}(\pi)=j;\pi_{a})=\Pr(J_{i+1}(\pi)=j_{i+1}|J_{i}(\pi)=j;\pi_{b}).\]

Figure 2: Control Graph for the non-stationary process. See text for symbol definitions.

**Remark 1**.: _In some cases, it may be beneficial to relax Assumption \(1\) such that instead of using \(\Pr(J_{i+1}(\pi)|J_{i}(\pi);\pi^{\prime})\) in (1), one considers \(\Pr(J_{i+1}(\pi)|(J_{i-k}(\pi))_{k=0}^{p};\pi^{\prime})\). This can be considered similar to the \(p\)-Markov MDP where the transitions are characterized using \(\Pr(S_{t+1}|(S_{t-i})_{i=0}^{p},A_{t})\). While we consider this general setting for our empirical results, for simplicity, to present the key ideas we will consider (1). We provide a detailed discussion on cases where we expect such an assumption to be (in)valid, and also other potential assumptions in Appendix C._

## 5 Model-Free Policy Evaluation

In this section we discuss how, under Assumption \(1\), we can perform model-free off-policy evaluation amidst passive, active, or hybrid non-stationarity. The high-level idea can be decomposed into the following: **(a)** Obtain estimates of \((J_{i}(\pi))_{i=1}^{n}\) using \((H_{i})_{i=1}^{n}\) (red arrows in Figure 3), and **(b)** Use the estimates of \((J_{i}(\pi))_{i=1}^{n}\) to infer the _effect_ of the underlying non-stationarity on the performance, and use that to predict \((J_{i}(\pi))_{i=n+1}^{n+L}\) (blue arrows in Figure 3).

### Counterfactual Reasoning

\((J_{i}(\pi))_{i=1}^{n}\) could have been directly estimated if we had access to \((M_{i})_{i=1}^{n}\). However, how do we estimate \((J_{i}(\pi))_{i=1}^{n}\) when we only have \((H_{i})_{i=1}^{n}\), collected using interactions via possibly different data-collecting policies \((\beta_{i})_{i=1}^{n}\)? To estimate \((J_{i}(\pi))_{i=1}^{n}\), we use the collected data \(D_{n}\) and aim to answer the following counterfactual question: _what would the performance of \(\pi\) have been, if \(\pi\) had been used to interact with \(M_{i}\) instead of \(\beta_{i}\)_? To answer this, we make the following standard support assumption (Thomas et al., 2015; Thomas and Brunskill, 2016; Xie et al., 2019), which says that any action that is likely under \(\pi\) is also sufficiently likely under the policy \(\beta_{i}\) for all \(i\).

**Assumption 2**.: \(\forall o\in\mathcal{O},\forall a\in\mathcal{A}\)_, and \(\forall i\leq n\), \(\frac{\pi(o,a)}{\beta_{i}(o,a)}\) is bounded above by an (unknown) constant \(c\)._

Under Assumption \(2\), an unbiased estimate of \(J_{i}(\pi)\) can be obtained using common off-policy evaluation methods like importance sampling (IS) or per-decision importance sampling (PDIS) (Precup, 2000): \(\forall i,\widehat{J}_{i}(\pi)\coloneqq\sum_{t=1}^{T}\rho_{i}^{t}R_{i}^{t},\) where \(\rho_{i}^{t}\coloneqq\prod_{j=1}^{t}\frac{\pi(O_{i}^{j},A_{i}^{j})}{\beta_{i}(O_{i}^{j},A_{i}^{j})}\).
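To make this estimator concrete, the following is a minimal sketch of computing IS and PDIS estimates from logged episodes. The episode container and field names are illustrative assumptions, not an interface from the paper.

```python
import numpy as np

def pdis_estimate(rewards, pi_probs, beta_probs):
    """Per-decision importance sampling: J_hat = sum_t rho_t * R_t,
    where rho_t is the product of pi/beta action-probability ratios up to step t."""
    ratios = np.asarray(pi_probs) / np.asarray(beta_probs)   # pi(o_t,a_t) / beta(o_t,a_t)
    rho = np.cumprod(ratios)                                  # rho_i^t for t = 1..T
    return float(np.sum(rho * np.asarray(rewards)))

def full_ratio(pi_probs, beta_probs):
    """Full-episode ratio rho_i := rho_i^T, used again in Section 5.2."""
    return float(np.prod(np.asarray(pi_probs) / np.asarray(beta_probs)))

# Example: counterfactual estimates J_hat_i for each logged episode H_i.
# Each episode stores per-step rewards and the probabilities of the taken actions
# under the evaluation policy pi and the behavior policy beta_i (hypothetical format).
episodes = [
    {"rewards": [1.0, 0.0, 1.0], "pi": [0.9, 0.5, 0.8], "beta": [0.6, 0.5, 0.7]},
    {"rewards": [0.0, 1.0],      "pi": [0.4, 0.9],      "beta": [0.5, 0.6]},
]
J_hat = [pdis_estimate(e["rewards"], e["pi"], e["beta"]) for e in episodes]
rho = [full_ratio(e["pi"], e["beta"]) for e in episodes]
```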
This \(\widehat{J}_{i}(\pi)\) provides an estimate of \(J_{i}(\pi)\) associated with \(\mathrm{each}\,M_{i}\) and policy \(\pi\), as needed for the red arrows in Figure 3. ### Double Counterfactual Reasoning Having obtained the estimates for \((J_{i}(\pi))_{i=1}^{n}\), we now aim to estimate how the performance of \(\pi\) changes due to the underlying non-stationarity. Recall that under active or hybrid non-stationarity, changes in a policy's performance due to the underlying non-stationarity is dependent on the past actions. From Assumption \(1\), let \[\forall i>0,\quad F_{\pi}(x,\pi^{\prime},y)\coloneqq\Pr(J_{i+1}(\pi)=y|J_{i} (\pi)=x;\pi^{\prime})\] denote how the performance of \(\pi\) changes between episodes, if \(\pi^{\prime}\) was executed. Here \(J_{i+1}(\pi)\) is a random variable because of stochasticity in \(H_{i}\) (i.e., how \(\pi^{\prime}\) interacts in \(M_{i}\)), as well as in the meta-transition from POMDP \(M_{i}\) to \(M_{i+1}\). Similarly, let \[\forall i>0,\quad f(J_{i}(\pi),\pi^{\prime};\theta_{\pi})\coloneqq\mathbb{E}_ {\pi^{\prime}}\left[J_{i+1}(\pi)|J_{i}(\pi)\right]=\sum\nolimits_{y\in\mathbb{R }}F_{\pi}(J_{i}(\pi),\pi^{\prime},y)y\] be some (unknown) function parameterized by \(\theta_{\pi}\in\Theta\), which denotes the _expected_ performance of \(\pi\) in episode \(i+1\), if in episode \(i\), \(\pi\)'s performance was \(J_{i}(\pi)\) and \(\pi^{\prime}\) was executed. Parameters \(\theta_{\pi}\) depend on \(\pi\) and thus \(f\) can model different types of changes to the performance of different policies. Recall from Figure 3 (blue arrows), if we can estimate \(f(\cdot,\pi;\theta_{\pi})\) to infer how \(J_{i}(\pi)\) changes due to the underlying non-stationarity when interacting with \(\pi\), then we can use it to predict \((J_{i}(\pi))_{i=n+1}^{n+L}\) when \(\pi\) is deployed in the future. In the following, we will predominantly focus on estimating \(f(\cdot,\pi;\theta_{\pi})\) using past data \(D_{n}\). Therefore, for brevity we let \(f(\cdot;\theta_{\pi})\coloneqq f(\cdot,\pi;\theta_{\pi})\). Figure 3: High-level idea. If pairs of \((J_{i}(\pi),J_{i+1}(\pi))\) are available when the transition between \(M_{i}\) and \(M_{i+1}\) occurs due to execution of \(\pi\), then one could auto-regress \(J_{i+1}(\pi)\) on \(J_{i}(\pi)\) to estimate \(f(\cdot;\theta_{\pi})\) and model the changes in the performance of \(\pi\). However, the sequence \((\widehat{J}_{i}(\pi))_{i=1}^{n}\) obtained from counterfactual reasoning cannot be used as-is for auto-regression. This is because the changes that occurred between \(M_{i}\) and \(M_{i+1}\) are associated with the execution of \(\beta_{i}\), not \(\pi\). For example, recall the toy robot example in Figure 1. If data was collected by mostly 'running', then the performance of 'walking' would decay as well. Directly auto-regressing on the past performances of 'walking' would result in how the performance of 'walking' would change _when actually executing 'running'_. However, if we want to predict performances of 'walking' in the future, what we actually want to estimate is how the performance of 'walking' changes _if 'walking' is actually performed_. To resolve the above issue, we ask another counter-factual question: _What would the performance of \(\pi\) in \(M_{i+1}\) have been had we executed \(\pi\), instead of \(\beta_{i}\), in \(M_{i}\)?_ In the following theorem we show how this question can be answered with a second application of the importance ratio \(\rho_{i}\coloneqq\rho_{i}^{T}\). 
**Theorem 1**.: _Under Assumptions \(1\) and \(2\), \(\forall m\in\mathcal{M}\) such that the performance \(J(\pi)\) associated with \(m\) is \(j\), \(\mathbb{E}_{\pi}\left[J_{i+1}(\pi)|J_{i}(\pi)=j\right]=\mathbb{E}_{\beta_{i},\beta_{i+1}}\big{[}\rho_{i}\widehat{J}_{i+1}(\pi)\big{|}M_{i}=m\big{]}\)._

See Appendix D.1 for the proof. Intuitively, as \(\beta_{i}\) and \(\beta_{i+1}\) were used to collect the data in the \(i^{\text{th}}\) and \((i+1)^{\text{th}}\) episodes, respectively, Theorem 1 uses \(\rho_{i}\) to first correct for the mismatch between \(\pi\) and \(\beta_{i}\) that influences how \(M_{i}\) changes to \(M_{i+1}\) due to the interactions \(H_{i}\). Secondly, \(\widehat{J}_{i+1}\) corrects for the mismatch between \(\pi\) and \(\beta_{i+1}\) for the sequence of interactions \(H_{i+1}\) in \(M_{i+1}\).

### 5.3 Importance-Weighted IV-Regression

An important advantage of Theorem 1 is that given \(J_{i}(\pi)\), \(\rho_{i}\widehat{J}_{i+1}(\pi)\) provides an unbiased estimate of \(\mathbb{E}_{\pi}\left[J_{i+1}(\pi)|J_{i}(\pi)\right]\), even though \(\pi\) may not have been used for data collection. This permits using \(Y_{i}\coloneqq\rho_{i}\widehat{J}_{i+1}(\pi)\) as a target for predicting the next performance given \(X_{i}\coloneqq J_{i}(\pi)\), i.e., to estimate \(f(J_{i}(\pi);\theta_{\pi})\) through regression on \((X_{i},Y_{i})\) pairs. However, notice that performing regression on the pairs \((X_{i}=J_{i}(\pi),Y_{i}=\rho_{i}\widehat{J}_{i+1}(\pi))_{i=1}^{n-1}\) may not be directly possible as we do not have \(J_{i}(\pi)\); we only have unbiased _estimates_ \(\widehat{J}_{i}(\pi)\) of \(J_{i}(\pi)\). This is problematic because in least-squares regression, while noisy estimates of the _target_ variable \(Y_{i}\) are fine, noisy estimates of the _input_ variable \(X_{i}\) may result in estimates of \(\theta_{\pi}\) that are _not_ asymptotically consistent, _even_ when the underlying \(f\) is a linear function of its inputs. To see this clearly, consider the following naive estimator,

\[\hat{\theta}_{\texttt{naive}}\in\operatorname*{argmin}_{\theta\in\Theta}\ \sum\nolimits_{i=1}^{n-1}\left(f\left(\widehat{J}_{i}(\pi);\theta\right)-\rho_{i}\widehat{J}_{i+1}(\pi)\right)^{2}.\]

Because \(\widehat{J}_{i}(\pi)\) is an unbiased estimate of \(J_{i}(\pi)\), without loss of generality, let \(\widehat{J}_{i}(\pi)=J_{i}(\pi)+\eta_{i}\), where \(\eta_{i}\) is mean-zero noise. Let \(\mathbb{N}\coloneqq[\eta_{1},\eta_{2},...,\eta_{n-1}]^{\top}\) and \(\mathbb{J}\coloneqq[J_{1}(\pi),J_{2}(\pi),...,J_{n-1}(\pi)]^{\top}\). Then \(\hat{\theta}_{\texttt{naive}}\) can be expressed as (see Appendix D.2),

\[\hat{\theta}_{\texttt{naive}}\xrightarrow{a.s.}\left(\mathbb{J}^{\top}\mathbb{J}+\mathbb{N}^{\top}\mathbb{N}\right)^{-1}\mathbb{J}^{\top}\mathbb{J}\theta_{\pi}\neq\theta_{\pi}. \tag{2}\]

Observe that \(\mathbb{N}^{\top}\mathbb{N}\) in (2) relates to the variances of the mean-zero noise variables \(\eta_{i}\). The greater the variances, the more \(\hat{\theta}_{\texttt{naive}}\) would be biased towards zero (if \(\forall i,\,\eta_{i}=0\), then the true \(\theta_{\pi}\) is trivially recovered). Intuitively, when the variance of \(\eta_{i}\) is high, noise dominates and the structure in the data gets suppressed even in the large-sample regime. Unfortunately, the importance sampling based estimator \(\widehat{J}_{i}(\pi)\) in the sequential decision making setting is infamous for extremely high variance (Thomas et al., 2015).
Therefore, \(\hat{\theta}_{\texttt{naive}}\) can be extremely biased and will not be able to capture the trend in how performances are changing, _even in the limit of infinite data and linear_ \(f\). The problem may be exacerbated when \(f\) is non-linear.

#### 5.3.1 Bias Reduction

To mitigate the bias stemming from noise in the input variables, we introduce a novel instrument variable (IV) (Pearl et al., 2000) regression method for tackling non-stationarity. Instrument variables \(Z\) represent some side-information and were originally used in the causal literature to mitigate bias resulting from spurious correlation, caused by unobserved confounders, between the input and the target variables. For mitigating bias in our setting, IVs can intuitively be considered as side-information used to 'denoise' the input variable before performing regression. For this IV-regression, an ideal IV is _correlated_ with the input variables (e.g., \(\widehat{J}_{i}(\pi)\)) but _uncorrelated_ with the noise in the input variable (e.g., \(\eta_{i}\)). We propose leveraging statistics based on past performances as an IV for \(\widehat{J}_{i}(\pi)\); for instance, using \(Z_{i}\coloneqq\widehat{J}_{i-1}(\pi)\) as an IV for \(\widehat{J}_{i}(\pi)\). Notice that while correlation between \(J_{i-1}(\pi)\) and \(J_{i}(\pi)\) directly implies correlation between \(\widehat{J}_{i-1}(\pi)\) and \(\widehat{J}_{i}(\pi)\), the values of \(J_{i-1}(\pi)\) and \(J_{i}(\pi)\) are dependent on non-stationarity in the past. Therefore, we make the following assumption, which may easily be satisfied when consecutive performances do not change arbitrarily.

**Assumption 3**.: \(\forall i,\quad\operatorname{Cov}\big{(}\widehat{J}_{i-1}(\pi),\widehat{J}_{i}(\pi)\big{)}\neq 0\)_._

However, notice that the noise in \(\widehat{J}_{i}(\pi)\) can be _dependent_ on \(\widehat{J}_{i-1}(\pi)\). This is because non-stationarity can make \(H_{i-1}\) and \(H_{i}\) dependent, and these are in turn used to estimate \(\widehat{J}_{i-1}(\pi)\) and \(\widehat{J}_{i}(\pi)\), respectively. Nevertheless, perhaps interestingly, we show that despite not being independent, \(\widehat{J}_{i-1}(\pi)\) is _uncorrelated_ with the noise in \(\widehat{J}_{i}(\pi)\).

**Theorem 2**.: _Under Assumptions \(1\) and \(2\), \(\forall i,\quad\operatorname{Cov}\big{(}\widehat{J}_{i-1}(\pi),\widehat{J}_{i}(\pi)-J_{i}(\pi)\big{)}=0\)._

See Appendix D.3 for the proof. Finally, as IV regression requires learning an additional function \(g:\mathbb{R}\to\mathbb{R}\) parameterized by \(\varphi\in\Omega\) (intuitively, think of this as a denoising function), we let \(\widehat{J}_{i-1}(\pi)\) be an IV for \(\widehat{J}_{i}(\pi)\) and propose the following IV-regression based estimator,

\[\hat{\varphi}_{n}\in\operatorname*{argmin}_{\varphi\in\Omega}\ \sum\nolimits_{i=2}^{n}\Big{(}g\left(\widehat{J}_{i-1}(\pi);\varphi\right)-\widehat{J}_{i}(\pi)\Big{)}^{2} \tag{3}\]

\[\hat{\theta}_{n}\in\operatorname*{argmin}_{\theta\in\Theta}\sum\nolimits_{i=2}^{n-1}\Big{(}f\left(g\left(\widehat{J}_{i-1}(\pi);\hat{\varphi}_{n}\right);\theta\right)-\rho_{i}\widehat{J}_{i+1}(\pi)\Big{)}^{2}. \tag{4}\]

**Theorem 3**.: _Under Assumptions \(1\), \(2\), and \(3\), if \(f\) and \(g\) are linear functions of their inputs, then \(\hat{\theta}_{n}\) is a strongly consistent estimator of \(\theta_{\pi}\), i.e., \(\hat{\theta}_{n}\xrightarrow{a.s.}\theta_{\pi}\). (See Appendix D.3 for the proof.)_
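To make the contrast between the naive estimator in (2) and the two-stage estimator in (3)-(4) concrete, the following is a minimal synthetic sketch for the linear \(f,g\) case. The data-generating process, noise scales, and variable names are illustrative assumptions, not taken from the paper; the point is only that regressing on noisy inputs attenuates the slope toward zero, while using the lagged estimate as an instrument approximately recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta_true = 5000, 0.9                      # linear f: E[J_{i+1}] = theta * J_i + const
J = np.zeros(n)
for i in range(1, n):                          # latent performance sequence (never observed)
    J[i] = theta_true * J[i - 1] + 0.5 + rng.normal(scale=1.0)
J_hat = J + rng.normal(scale=2.0, size=n)      # noisy IS-style estimates of J_i(pi)
Y = J[1:] + rng.normal(scale=2.0, size=n - 1)  # stand-in for the targets rho_i * J_hat_{i+1}

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

# Naive regression of targets on the noisy inputs (Eq. 2): biased toward zero.
theta_naive = slope(J_hat[:-1], Y)

# Two-stage IV regression with the lag-1 estimate as instrument (Eqs. 3-4):
# stage 1 "denoises" J_hat_i using J_hat_{i-1}; stage 2 regresses the targets on that fit.
z, x, y = J_hat[:-2], J_hat[1:-1], Y[1:]
g_hat = np.polyfit(z, x, 1)                    # linear g(.; phi_hat)
theta_iv = slope(np.polyval(g_hat, z), y)      # approximately recovers theta_true

print(f"true={theta_true:.2f}  naive={theta_naive:.2f}  iv={theta_iv:.2f}")
```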
**Remark 2**.: _Other choices of instrument variables \(Z_{i}\) (apart from \(Z_{i}=\widehat{J}_{i-1}(\pi)\)) are also viable. We discuss some alternate choices in Appendix E. These other IVs can be used in (3) and (4) by replacing \(\widehat{J}_{i-1}(\pi)\) with the alternative \(Z_{i}\)._

**Remark 3**.: _As discussed earlier, it may be beneficial to model \(J_{i+1}(\pi)\) using \((J_{k}(\pi))_{k=i-p+1}^{i}\) with \(p>1\). The proposed estimator can be easily extended by making \(f\) dependent on multiple past terms \((X_{k})_{k=i-p+1}^{i}\), where \(\forall k,\,X_{k}\coloneqq g((\widehat{J}_{j}(\pi))_{j=k-p}^{k-1};\hat{\varphi}_{n})\). We discuss this in more detail in Appendix E. The proposed procedure is also related to methods that use lags of the time series as instrument variables (Bellemare et al., 2017; Wilkins, 2018; Wang and Bellemare, 2019)._

**Remark 4**.: _An advantage of the model-free setting is that we only need to consider changes in \(J(\pi)\), which is a **scalar** statistic. For scalar quantities, linear auto-regressive models have been known to be useful in modeling a wide variety of time-series trends. Nonetheless, non-linear functions like RNNs and LSTMs (Hochreiter and Schmidhuber, 1997) may also be leveraged using deep instrument variable methods (Hartford et al., 2017; Bennett et al., 2019; Liu et al., 2020; Xu et al., 2020)._

As required for the blue arrows in Figure 3, \(f(\cdot;\hat{\theta}_{n})\) can now be used to estimate the expected value \(\mathbb{E}_{\pi}\left[J_{i+1}(\pi)|J_{i}(\pi)\right]\) under hybrid non-stationarity. Therefore, using \(f(\cdot;\hat{\theta}_{n})\) we can auto-regressively forecast the future values of \((J_{i}(\pi))_{i=n+1}^{n+L}\) and obtain an estimate for \(\mathscr{J}(\pi)\). A complete algorithm for the proposed procedure is provided in Appendix E.1.

#### 5.3.2 Variance Reduction

As discussed earlier, importance sampling results in noisy estimates of \(J_{i}(\pi)\). During regression, while high noise in the input variable leads to high bias, high noise in the target variables leads to high-variance parameter estimates. Unfortunately, (3) and (4) have target variables containing \(\rho_{i}\) (and \(\rho_{i+1}\)), which depend on the product of importance ratios and can thus take extremely large values, leading to higher-variance parameter estimates. The instrument variable technique helped in mitigating bias. To mitigate variance, we draw inspiration from the reformulation of weighted importance sampling presented for the _stationary_ setting by Mahmood et al. (2014), and propose the following estimator,

\[\tilde{\varphi}_{n}\in\operatorname*{argmin}_{\varphi\in\Omega}\ \sum_{i=2}^{n}\bar{\rho}_{i}\left(g\left(\widehat{J}_{i-1}(\pi);\varphi\right)-G_{i}\right)^{2},\qquad\text{where}\ \ \bar{\rho}_{i}\coloneqq\frac{\rho_{i}}{\sum_{j=2}^{n}\rho_{j}} \tag{5}\]

\[\tilde{\theta}_{n}\in\operatorname*{argmin}_{\theta\in\Theta}\sum_{i=2}^{n-1}\rho_{i}^{\dagger}\left(f\left(g\left(\widehat{J}_{i-1}(\pi);\tilde{\varphi}_{n}\right);\theta\right)-G_{i+1}\right)^{2},\qquad\text{where}\ \ \rho_{i}^{\dagger}\coloneqq\frac{\rho_{i}\rho_{i+1}}{\sum_{j=2}^{n-1}\rho_{j}\rho_{j+1}} \tag{6}\]

where \(G_{i}\) is the return observed for \(M_{i}\). Intuitively, instead of importance weighting the _target_, we importance weight the squared error, in proportion to how likely that _error_ would be if \(\pi\) had been used to collect the data. Since dividing by any constant does not affect \(\tilde{\varphi}_{n}\) and \(\tilde{\theta}_{n}\), the choice of \(\bar{\rho}_{i}\) and \(\rho_{i}^{\dagger}\) ensures that both \(\bar{\rho}_{i}\) and \(\rho_{i}^{\dagger}\) lie in \([0,1]\), thereby mitigating variance while still providing consistency.
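The following is a minimal sketch of how (5)-(6) can be realized with weighted least squares for linear \(f\) and \(g\), followed by the auto-regressive rollout that forecasts the next \(L\) performances. The array layout, the seeding of the forecast, and the helper names are assumptions made for illustration rather than the paper's exact algorithm (see Appendix E.1 of the paper for that).

```python
import numpy as np

def wls_fit(x, y, w):
    """Weighted least squares for y ~ a*x + b, minimizing sum_i w_i (y_i - a*x_i - b)^2."""
    X = np.column_stack([x, np.ones_like(x)])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef  # [a, b], usable with np.polyval

def weighted_iv_forecast(J_hat, G, rho, L):
    """Sketch of Eqs. (5)-(6) with the lag-1 instrument, then an L-step rollout.
    J_hat[i]: IS estimate of J_i(pi);  G[i]: observed return in M_i;
    rho[i]: full-episode importance ratio for H_i (numpy arrays, 0-indexed)."""
    # Stage 1 (Eq. 5): pairs (J_hat_{i-1} -> G_i), weights proportional to rho_i.
    w1 = rho[1:] / rho[1:].sum()
    g_tilde = wls_fit(J_hat[:-1], G[1:], w1)
    # Stage 2 (Eq. 6): pairs (g(J_hat_{i-1}) -> G_{i+1}), weights prop. to rho_i * rho_{i+1}.
    denoised = np.polyval(g_tilde, J_hat[:-2])
    w2 = rho[1:-1] * rho[2:]
    f_tilde = wls_fit(denoised, G[2:], w2 / w2.sum())
    # Forecast: denoise the most recent estimate, then apply f repeatedly for L episodes.
    j = float(np.polyval(g_tilde, J_hat[-1]))
    future = []
    for _ in range(L):
        j = float(np.polyval(f_tilde, j))
        future.append(j)
    return np.array(future)   # predicted J(pi) for episodes n+1, ..., n+L
```

Summing the \(L\) forecasted values then gives an estimate of \(\mathscr{J}(\pi)\) as defined in Section 3.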
**Theorem 4**.: _Under Assumptions \(1\), \(2\), and \(3\), if \(f\) and \(g\) are linear functions of their inputs, then \(\tilde{\theta}_{n}\) is a strongly consistent estimator of \(\theta_{\pi}\), i.e., \(\tilde{\theta}_{n}\xrightarrow{a.s.}\theta_{\pi}\). (See Appendix D.3 for the proof.)_

## 6 Empirical Analysis

This section presents both qualitative and quantitative empirical evaluations using several environments inspired by real-world applications that exhibit non-stationarity. In the following paragraphs, we first briefly discuss the different algorithms being compared and then answer three primary questions.1

Footnote 1: Code is available at [https://github.com/yashchandak/activeNS](https://github.com/yashchandak/activeNS)

**1. OPEN:** We call our proposed method OPEN: off-policy evaluation for non-stationary domains with structured passive, active, or hybrid changes. It is based on our bias- and variance-reduced estimator developed in (5) and (6). Appendix E.1 contains the complete algorithm.

**2. Pro-WLS:** For the baseline, we use Prognosticator with weighted least-squares (Pro-WLS) (Chandak et al., 2020). This method is designed to tackle only passive non-stationarity.

**3. WIS:** A weighted importance sampling based estimator that ignores the presence of non-stationarity completely (Precup, 2000).

**4. SWIS:** A sliding-window extension of WIS which, instead of considering all the data, only considers data from the recent past.

Figure 4: An illustration of the stages in the proposed method for the RoboToy domain of Figure 1. Here, the evaluation policy \(\pi\) chooses to 'run' more often, whereas the data-collecting policy \(\beta\) chooses to 'walk' more often. **(Left)** This results in a slow decline of performance for \(\pi\) initially, followed by a faster decline once \(\pi\) is deployed after episode \(2000\). The blue and gray curves are unknown to the algorithm. **(Middle)** OPEN first uses historical data to obtain counterfactual estimates of \(J_{i}(\pi)\) for the past episodes. One can see the high variance in these estimates **(notice the change in the y-scale)** due to the use of importance sampling. **(Right)** Intuitively, before naively auto-regressing, OPEN first denoises past performance estimates using the first stage of IV regression (i.e., converts black dots to green dots). It can be observed that OPEN successfully denoises the importance sampling estimates. Using these denoised estimates and a second use of counterfactual reasoning, OPEN performs the second stage of IV regression. It is able to estimate that once \(\pi\) is deployed, performances in the future will decrease more rapidly compared to what was observed in the past.

### Q1. (Qualitative Results) What is the impact of the two stages of the OPEN algorithm?

In Figure 4 we present a step-by-step breakdown of the intermediate stages of a single run of OPEN on the RoboToy domain from Figure 1. It can be observed that OPEN is able to extract the effect of the underlying active non-stationarity on the performances and also detect that the evaluation policy \(\pi\), which 'runs' more often, will cause active harm if deployed in the future.

### Q2. (Quantitative Results) What is the effect of different types and rates of non-stationarity?
Besides the toy robot from Figure 1, we provide empirical results on three other domains inspired by real-world applications that exhibit non-stationarity. Appendix E.3 contains details for each, including how the evaluation policy and the data-collecting policy were designed for them.

**Non-stationary Mountain Car:** In real-world mechanical systems, motors undergo wear and tear over time based on how vigorously they have been used in the past. To simulate similar performance degradation, we adapt the classic (stationary) mountain car domain (Sutton and Barto, 2018). We modify the domain such that after every episode the effective acceleration force is decayed in proportion to the average velocity of the car in the current episode. This results in active non-stationarity, where the change in the system is based on the actions taken by the agent in the past.

**Type-1 Diabetes Management:** Personalised automated healthcare systems for individual patients should account for the physiological and lifestyle changes of the patient over time. To simulate such a scenario we use an open-source implementation (Xie, 2019) of the U.S. Food and Drug Administration (FDA) approved Type-1 Diabetes Mellitus simulator (T1DMS) (Man et al., 2014) for the treatment of Type-1 diabetes, where we induced non-stationarity by oscillating the body parameters (e.g., rate of glucose absorption, insulin sensitivity, etc.) between two known configurations available in the simulator. This induces passive non-stationarity; that is, the changes are not dependent on past actions.

**MEDEVAC:** This domain stands for medical evacuation using air ambulances. It was developed by Robbins et al. (2020) for optimally routing air ambulances to provide medical assistance in regions of conflict. Based on real data, this domain simulates the arrival of different events, from different zones, where each event can have a different priority level. Serving higher-priority events yields higher rewards. A good controller decides whether to deploy, and which MEDEVAC to deploy, to serve any event (at the risk of not being able to serve a new high-priority event if all ambulances become occupied). Here, the arrival rates of different events can change based on external incidents during conflict. Similarly, the service completion rate can also change based on how frequently an ambulance was deployed in the past. To simulate such non-stationarity, we oscillate the arrival rate of the incoming high-priority events, which induces passive non-stationarity. Further, to induce wear and tear, we decay the service rate of an ambulance in proportion to how frequently the ambulance was used in the past. This induces active non-stationarity. The presence of both active and passive changes makes this domain subject to hybrid non-stationarity.

Figure 5: Comparison of different algorithms for predicting the future performance of evaluation policy \(\pi\) on domains that exhibit active/hybrid non-stationarity. On the x-axis is the speed, which corresponds to the rate of non-stationarity; higher speed indicates a faster rate of change, and a speed of zero indicates a stationary domain. On the y-axes are the absolute bias **(Top row)** and the mean-squared error **(Bottom row)** of the predicted performance estimate _(lower is better everywhere)_. For each domain, for each speed, for each algorithm, 30 trials were executed.
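To make the wear-and-tear style of active non-stationarity used in these domains concrete, below is a toy sketch of an episodic environment whose dynamics degrade in proportion to how aggressively it was used in the past, alongside an independently oscillating factor for the passive component. The decay rule, constants, and class name are illustrative and are not the settings used in the experiments.

```python
import numpy as np

class WearAndTearEnv:
    """Toy episodic environment with hybrid non-stationarity: an externally
    oscillating factor (passive) and a 'motor strength' that decays in
    proportion to how hard it was used in the previous episode (active)."""

    def __init__(self, decay=0.05, horizon=20, seed=0):
        self.strength = 1.0
        self.decay = decay
        self.horizon = horizon
        self.episode = 0
        self.rng = np.random.default_rng(seed)

    def run_episode(self, policy):
        """policy(t) -> effort in [0, 1]; returns the episode return G_i."""
        external = 0.75 + 0.25 * np.sin(0.01 * self.episode)   # passive drift
        total, usage = 0.0, 0.0
        for t in range(self.horizon):
            effort = policy(t)
            total += self.strength * external * effort + self.rng.normal(scale=0.01)
            usage += effort
        # Active non-stationarity: heavy use this episode weakens the system next episode.
        self.strength *= 1.0 - self.decay * usage / self.horizon
        self.episode += 1
        return total

env = WearAndTearEnv()
run = lambda t: 1.0   # 'running': high immediate reward, accelerates the damage
returns = [env.run_episode(run) for _ in range(200)]   # performance decays across episodes
```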
Figure 5 presents the (absolute) bias and MSE incurred by different algorithms for predicting the future performance of the evaluation policy \(\pi\). As expected, the baseline method WIS that ignores the non-stationarity completely fails to capture the change in performances over time. Therefore, while WIS works well for the stationary setting, as the rate of non-stationarity increases, the bias incurred by WIS grows. In comparison, the baseline method Pro-WLS that can only account for passive non-stationarity captures the trend better than WIS, but still performs poorly in comparison to the proposed method OPEN that is explicitly designed to handle active/hybrid non-stationarity. Perhaps interestingly, for the Diabetes domain which only has passive non-stationarity, we observe that OPEN performs better than Pro-WLS. As we discuss later, this can be attributed to the sensitivity of Pro-WLS to its hyper-parameters. While OPEN incorporated one variance reduction technique, it can be noticed when the rate of non-stationarity is high, variance can sometimes still be high thereby leading to higher MSE. We discuss potential negative impacts of this in Appendix A. Incorporating (partial) knowledge of the underlying model and developing doubly-robust version of OPEN could potentially mitigate variance further. We leave this extension for future work. _Q3. (Ablations Results) How robust are the methods to hyper-parameters?_ Due to space constraints, we defer the empirical results and discussion for this to Appendix E.5. Overall, we observe that the proposed method OPEN being an auto-regressive method can extrapolate/forecast better and is thus more robust to hyper-parameters (number of past terms to condition, as discussed in Remark 3) than Pro-WLS that uses Fourier bases for regression (where the hyper-parameter is the order of Fourier basis) and is not as good for extrapolation. ## 7 Conclusion We took the first steps for addressing the fundamental question of off-policy evaluation under the presence of non-stationarity. Towards this goal we discussed the need for structural assumptions and developed a model-free procedure OPEN and presented ways to mitigate its bias and variance. Empirical results suggests that OPEN can now not only enable practitioners to predict future performances amidst non-stationarity but also identify policies that may be actively causing harm or damage. In the future, OPEN can also be extended to enable _control_ of non-stationary processes. ## 8 Acknowledgements Research reported in this paper was sponsored in part by a gift from Adobe, NSF award #2018372. This work was also funded in part by the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory under Cooperative Agreement W911NF-17-2-0196 and Support Agreement No. USMA21050. The views expressed in this paper are those of the authors and do not reflect the official policy or position of the United States Military Academy, the United States Army, the Department of Defense, or the United States Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
2310.06213
GeoLLM: Extracting Geospatial Knowledge from Large Language Models
The application of machine learning (ML) in a range of geospatial tasks is increasingly common but often relies on globally available covariates such as satellite imagery that can either be expensive or lack predictive power. Here we explore the question of whether the vast amounts of knowledge found in Internet language corpora, now compressed within large language models (LLMs), can be leveraged for geospatial prediction tasks. We first demonstrate that LLMs embed remarkable spatial information about locations, but naively querying LLMs using geographic coordinates alone is ineffective in predicting key indicators like population density. We then present GeoLLM, a novel method that can effectively extract geospatial knowledge from LLMs with auxiliary map data from OpenStreetMap. We demonstrate the utility of our approach across multiple tasks of central interest to the international community, including the measurement of population density and economic livelihoods. Across these tasks, our method demonstrates a 70% improvement in performance (measured using Pearson's $r^2$) relative to baselines that use nearest neighbors or use information directly from the prompt, and performance equal to or exceeding satellite-based benchmarks in the literature. With GeoLLM, we observe that GPT-3.5 outperforms Llama 2 and RoBERTa by 19% and 51% respectively, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset. Our experiments reveal that LLMs are remarkably sample-efficient, rich in geospatial information, and robust across the globe. Crucially, GeoLLM shows promise in mitigating the limitations of existing geospatial covariates and complementing them well. Code is available on the project website: https://rohinmanvi.github.io/GeoLLM
Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David Lobell, Stefano Ermon
2023-10-10T00:03:23Z
http://arxiv.org/abs/2310.06213v2
# GeoLLM: Extracting Geospatial Knowledge from Large Language Models

###### Abstract

The application of machine learning (ML) in a range of geospatial tasks is increasingly common but often relies on globally available covariates such as satellite imagery that can either be expensive or lack predictive power. Here we explore the question of whether the vast amounts of knowledge found in Internet language corpora, now compressed within large language models (LLMs), can be leveraged for geospatial prediction tasks. We first demonstrate that LLMs embed remarkable spatial information about locations, but naively querying LLMs using geographic coordinates alone is ineffective in predicting key indicators like population density. We then present GeoLLM, a novel method that can effectively extract geospatial knowledge from LLMs with auxiliary map data from OpenStreetMap. We demonstrate the utility of our approach across multiple tasks of central interest to the international community, including the measurement of population density and economic livelihoods. Across these tasks, our method demonstrates a 70% improvement in performance (measured using Pearson's \(r^{2}\)) relative to baselines that use nearest neighbors or use information directly from the prompt, and performance equal to or exceeding satellite-based benchmarks in the literature. With GeoLLM, we observe that GPT-3.5 outperforms Llama 2 and RoBERTa by 19% and 51% respectively, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset. Our experiments reveal that LLMs are remarkably sample-efficient, rich in geospatial information, and robust across the globe. Crucially, GeoLLM shows promise in mitigating the limitations of existing geospatial covariates and complementing them well.

## 1 Introduction

Geospatial predictions with ML are widely used across various domains, including poverty estimation (Jean et al., 2016; Chi et al., 2021), public health (Nilsen et al., 2021; Areed et al., 2022; Chang et al., 2022), food security (Nakalembe, 2018), biodiversity preservation (Mai et al., 2023c, 2023), and environmental conservation (Sofaer et al., 2019). The covariates used in these predictions include geographical coordinates, remote sensing data, satellite imagery, human mobility data (Chang et al., 2022), and phone metadata (Blumenstock et al., 2015; Burke et al., 2019). While having access to quality covariates is essential, it can be challenging due to limited spatiotemporal coverage, high costs, and accessibility barriers (Ball et al., 2017). For example, researchers often use publicly available satellite images to estimate socio-economic indicators like asset wealth, population density, and infrastructure access, because they are free and globally available (Yeh et al., 2020; Robinson et al., 2017; Head et al., 2017). However, model predictive power can be limited due to the fact that important features may not be visible from space. Large language models (LLMs) have proven to be highly effective foundation models that can be fine-tuned or prompted to perform tasks in various domains including healthcare, education, law, finance, and scientific research (Bommasani et al., 2021; Zhao et al., 2023). This is because LLMs have compressed the knowledge contained in their training corpus, which includes billions or trillions of tokens of data from the internet (Delétang et al., 2023).
Here, we seek to understand whether LLMs possess geospatial knowledge and explore methods to extract this knowledge to offer a novel set of geospatial covariates that can enhance various geospatial ML tasks. As shown in fig. 1(a), a substantial amount of geospatial knowledge in LLMs can be revealed simply by querying them to describe an address. However, extracting this knowledge from LLMs is not trivial. While the most natural interface involves using geographic coordinates like latitude and longitude to retrieve specific location information, this often yields poor results as shown in fig. 1(b). The inherent challenge lies in the LLMs' capability to understand and interpret these numeric coordinates in relation to real-world locations, especially when the location is remote or lesser-known. In this paper, we introduce GeoLLM, a novel method that can efficiently extract the vast geospatial knowledge contained in LLMs by fine-tuning them on prompts constructed with auxiliary map data from OpenStreetMap (OpenStreetMap contributors, 2017). With our prompting strategy shown in fig. 1(b), we can pinpoint a location and provide the LLM with enough spatial context information, thereby enabling it to access and leverage its vast geospatial knowledge. We find that including information from nearby locations in the prompt can improve GPT-3.5's (OpenAI, 2023b) performance by 3.3x, measured in Pearson's \(r^{2}\), over simply providing the target location's coordinates. We find that popular LLMs such as GPT-3.5 and Llama 2 (Touvron et al., 2023) can be fine-tuned to achieve state-of-the-art performance on a variety of large-scale geospatial datasets for tasks including assessing population density, asset wealth, mean income, women's education and more. With GeoLLM, GPT-3.5, Llama 2, and RoBERTa (Liu et al., 2019) show a 70%, 43%, and 13% improvement in Pearson's \(r^{2}\) respectively over baselines that use nearest neighbors and use information directly from the prompt, suggesting that the models' geospatial knowledge scales with their size and the size of their pretraining dataset. Our experiments demonstrate LLMs' remarkable sample efficiency and consistency worldwide. Importantly, GeoLLM could mitigate the limitations of existing geospatial covariates and complement them well.

Figure 1: Example prompts and corresponding GPT responses. In fig. 1(a) we show GPT-3.5 demonstrating its geospatial knowledge when asked to describe an address. However, in fig. 1(b) (top), prompting GPT-3.5 with just coordinates and finetuning it on population density is insufficient. We demonstrate our prompting strategy in fig. 1(b) (bottom), with which a finetuned GPT-3.5 is able to solve the task correctly (the expected value is 9.0).

We present the following main research contributions:

1. Language models possess substantial geospatial knowledge and our method unlocks this knowledge across a range of models and tasks.
2. Constructing the right prompt is key to extracting geospatial knowledge. Our ablations find that prompts constructed from map data allow the models to more efficiently access their knowledge.

## 2 Related Work

**Identifying Knowledge.** Pre-training instills large amounts of knowledge into language models. Since the introduction of pre-trained language models such as BERT (Devlin et al., 2019), many works try to quantify the amount of knowledge they can encode (Roberts et al., 2020; Jiang et al., 2019) and see if they can be used as knowledge bases (Petroni et al., 2019).
Other works have looked for more specific types of knowledge such as commonsense (Feldman et al., 2019), temporal (Dhingra et al., 2021; Liu et al., 2021), biomedical (Sung et al., 2021), numerical (Lin et al., 2020), scale (Zhang et al., 2020), and many others. With the recent introduction of instruction-finetuned LLMs, directly querying knowledge with prompts is a potentially simpler method. In this paper, we focus on showing the quantity and quality of geospatial knowledge in pre-trained language models.

**Knowledge Extraction.** Many works aim to improve knowledge extraction from language models by fine-tuning or prompt tuning. Works on prompt tuning explore mining prompts from the web (Jiang et al., 2019), optimizing prompts in the discrete space of words and tokens (Shin et al., 2020), optimizing prompts in the continuous embedding space (Zhong et al., 2021), and using adapters (Newman et al., 2021). However, prompts are often difficult to craft (Qin and Eisner, 2021; Jiang et al., 2019) and are sensitive to small changes. Fine-tuning language models for downstream tasks has been shown to be an effective and simple method to tune and elicit specific knowledge for evaluation (Radford and Narasimhan, 2018; Dong et al., 2019). It has also been shown that fine-tuning may result in higher performance gains compared to prompt tuning (Fichtel et al., 2021). Additionally, LoRA and quantization are shown to greatly reduce the computational cost of fine-tuning (Hu et al., 2021; Dettmers et al., 2023). For these reasons, we choose to extract geospatial knowledge with fine-tuning.

**NLP for Geospatial Tasks.** Natural language processing (NLP) has been used for many geospatial tasks with various data sources. Hu et al. (2017) extracted semantic relations among cities by leveraging news articles. Sheehan et al. (2019) utilized Wikipedia to predict poverty. Xu et al. (2016) assessed traffic by using OpenStreetMap data. Signorini et al. (2011); Paul and Dredze (2011) utilized tweets to track levels of disease activity. Recently, researchers have started to explore LLMs' capabilities on various geospatial tasks. Mai et al. (2023) demonstrated the usability of large language models on various geospatial applications including fine-grained address recognition, time-series dementia record forecasting, urban function prediction, and so on. GeoGPT (Zhang et al., 2023) has been proposed as a GPT-3.5-based autonomous AI tool that can conduct geospatial data collection, processing, and analysis in an autonomous manner with the instruction of only natural language. However, both works mainly relied on pre-trained LLMs and did not explore the possibility of fine-tuning LLMs for a geo-specific foundation model. Deng et al. (2023) developed an LLM in geoscience, called K2, by fine-tuning on a geoscience text corpus, which has shown remarkable performance on various NLP tasks in the geoscience domain. However, the K2 geoscience language foundation model is still limited to common NLP tasks such as question answering, summarization, text classification, etc. In contrast, in this work, we conduct a deep investigation into the possibility of extracting geospatial knowledge from LLMs and whether we can use it for various real-world geospatial tasks like predicting population density. By fine-tuning various LLMs, we demonstrate the quantity, quality, and scalable nature of geospatial knowledge contained in LLMs.
## 3 Method

Abstractly, we want to map geographic coordinates (latitude, longitude) and additional features to a response variable, such as population density or asset wealth. A naive approach would be to spatially interpolate using the coordinates. However, this is far from ideal, especially for small sample sizes. Features or covariates can help, but such data can be difficult to obtain. Given this challenge, we aim to determine how much LLMs know about these coordinates and how we can use that knowledge to better predict response variables of interest. In this section, we describe our approach of extracting this knowledge by fine-tuning language models on prompts generated using map data.

### Minimum Viable Geospatial Prompt

A minimum viable prompt needs to specify the location and the prediction task. Our basic prompt consists of geographic coordinates and a description of the task. The use of geographic coordinates is essential to extracting high-resolution data as they are a universal and precise interface for knowledge extraction. The description comprises the task's name and the scale indicating the range of possible completions. The completion, which is the ground truth label, is also added when training an LLM. We present an example of this simple structure in the top part of fig. 1(b). While we find that adding the name of the task helps, more specific information, such as the name of the dataset, generally does not help. This suggests that basic context sufficiently assists LLMs in efficiently accessing the knowledge in their weights. Since the completion has to be specified through text for LLMs, we use classification instead of regression. However, we find that a classification task closer to regression (e.g., \(0.0\) to \(9.9\) as opposed to \(1\) to \(5\)) is beneficial, despite the fact that LLMs use softmax. If the original dataset already has an approximately uniform distribution, we simply scale the values to the range from \(0\) to \(9.9\) and round to one decimal place. If the dataset is not inherently uniform, we distribute the values evenly among 100 bins to maintain a uniform distribution. Each of the 100 bins is then associated with one of the values from \(0.0\) to \(9.9\).

### Prompt with Map Data

The prompt described above, however, has a major limitation. Through zero-shot prompting, one can easily find that LLMs struggle to identify the locations of coordinates. Not knowing where the coordinates are located would be detrimental for geospatial tasks. This is why we focus on making sure the prompt contains additional context that helps the model understand where the coordinates are located. We also design our prompt to be descriptive and useful for a range of geographical scopes such as ZIP code areas or a single square kilometer. Our prompt, shown in the lower part of fig. 1(b), consists of two additional components constructed using map data:

1. Address: A detailed reverse-geocoded description, encompassing place names from the neighborhood level up to the country itself.
2. Nearby Places: A list of the closest 10 locations in a 100 kilometer radius with their respective names, distances, and directions.

We use OpenStreetMap (OpenStreetMap contributors, 2017) for our map data as it can be queried for free using various APIs, although any map data can be used. In our testing, map data from Google is of higher quality, but was prohibitively expensive. We generate addresses using reverse geocoding from Nominatim (Hoffmann, 2023) and the names and locations of nearby places from Overpass API (Olbricht, 2023).
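For concreteness, the sketch below assembles a prompt of this general shape from a coordinate, a reverse-geocoded address string, and a list of candidate nearby places, and maps raw labels onto the \(0.0\) to \(9.9\) scale. The haversine and bearing formulas are standard; the exact wording of the prompt, the rounding choices, and the helper names are illustrative assumptions rather than the paper's exact template.

```python
import math
import numpy as np

def to_scale(values):
    """Rank-based mapping of raw labels onto an (approximately) uniform 0.0-9.9 scale."""
    ranks = np.argsort(np.argsort(values))
    return np.round(ranks / max(len(values) - 1, 1) * 9.9, 1)

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and compass direction from point 1 to point 2."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * r * math.asin(math.sqrt(a))
    theta = math.degrees(math.atan2(math.sin(dlmb) * math.cos(p2),
                                    math.cos(p1) * math.sin(p2)
                                    - math.sin(p1) * math.cos(p2) * math.cos(dlmb))) % 360
    compass = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"][int((theta + 22.5) // 45) % 8]
    return dist, compass

def build_prompt(task, lat, lon, address, places, label=None):
    """places: iterable of (name, lat, lon); label: value already on the 0.0-9.9 scale."""
    scored = [(name, *distance_and_bearing(lat, lon, plat, plon)) for name, plat, plon in places]
    nearby = sorted((p for p in scored if p[1] <= 100.0), key=lambda p: p[1])[:10]
    lines = [f"Coordinates: ({lat:.4f}, {lon:.4f})", f"Address: {address}", "Nearby Places:"]
    lines += [f"{d:.1f} km {c}: {name}" for name, d, c in nearby]
    lines.append(f"{task} (scale 0.0 to 9.9):" + (f" {label:.1f}" if label is not None else ""))
    return "\n".join(lines)
```

At training time the completion (the scaled label) is appended as shown; at inference time the prompt ends at the task description and the model generates the value.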
### Fine-tuning and Inference with Language Models

**RoBERTa.** RoBERTa\({}_{\text{BASE}}\) is a 125 million parameter encoder-only transformer model trained on 160 GB worth of text (Liu et al., 2019). We choose this as it and other encoder-only models like BERT (Devlin et al., 2019) are often used for classification and regression tasks (Bommasani et al., 2021). We fine-tune the model with all of its parameters being trainable. It takes in the entire prompt and outputs a continuous value, which we round to the nearest bin.

**Llama 2.** Llama 2 7B is a 7 billion parameter decoder-only transformer model trained on 2 trillion tokens (Touvron et al., 2023). Despite being unsuited for regression tasks, it has the potential to perform well as it has been trained on significantly more data. Using QLoRA (Dettmers et al., 2023), we fine-tune Llama 2 on the prompt and completion concatenated. During inference, we provide it with the prompt and let it generate the three tokens required for the completion (e.g., "9", ".", and "9"). QLoRA allows us to greatly reduce the computational cost and effectively fine-tune with just 33 million trainable parameters. However, since it also quantizes the frozen model to 4-bit, this comes with the potential cost of diluting the knowledge contained in the weights. We find that fine-tuning for 4 epochs with QLoRA and various regularization techniques works well. Interestingly, the model's performance still improves even when the loss on the validation set increases by the fourth epoch. This suggests that the model is less prone to overfitting on the prediction tasks compared to the prompts.

**GPT-3.5.** GPT-3.5 (gpt-3.5-turbo-0613) (OpenAI, 2023b) is a chat-based LLM. It is the successor of GPT-3, which is a decoder-only transformer model with 175 billion parameters (Brown et al., 2020). GPT-3.5 shows promise in containing a substantial amount of knowledge (Bian et al., 2023). We fine-tune GPT-3.5 through OpenAI's fine-tuning API (OpenAI, 2023a). Similar to Llama 2, we provide it with the prompt and let it generate the three tokens required for the completion. We find that 4, 3, and 2 epochs work well for 100, 1,000, and 10,000 training samples respectively. It is worth noting that the specific fine-tuning procedures and internal architecture of GPT-3.5 are not publicly disclosed by OpenAI.

## 4 Experiments

We construct a comprehensive benchmark that encompasses a wide range of geospatial prediction tasks. This benchmark includes various domains and geographies, ranging from women's education levels in developing countries to housing prices in the US. We develop a robust set of baselines to demonstrate the performance attainable using the information directly in the prompt. Finally, we present our results and an ablation study.

### Tasks and Sources

**WorldPop.** WorldPop (Tatem, 2017) is a research group focused on providing spatial demographic data for use in health and development applications, including addressing the Sustainable Development Goals. We use their population counts dataset that covers the whole world for the population density task. More specifically, we use their unconstrained global mosaic for 2020, which has a resolution of 30 arc-seconds (approximately 1km at the equator) (WorldPop & CIESIN, 2018).
To ensure a \begin{table} \begin{tabular}{l l|c|c c c|c c c} \hline \hline Task & Source & Samples & GPT-3.5 & Llama 2 & RoBERTa & XGBoost-FT & XGBoost & k-NN \\ \hline \multirow{3}{*}{Population} & \multirow{3}{*}{WorldPop} & 10,000 & \(\mathbf{0.78}\) & \(0.73\) & \(0.60\) & \(0.53\) & \(0.48\) & \(0.36\) \\ & & 1,000 & \(\mathbf{0.73}\) & \(0.63\) & \(0.41\) & \(0.34\) & \(0.32\) & \(0.11\) \\ & & 100 & \(\mathbf{0.61}\) & \(0.43\) & \(0.00\) & \(0.14\) & \(0.19\) & \(0.02\) \\ \hline \multirow{2}{*}{Asset Wealth} & \multirow{2}{*}{DHS} & 10,000 & \(\mathbf{0.75}\) & \(0.74\) & \(0.69\) & \(0.64\) & \(0.63\) & \(0.59\) \\ & & 1,000 & \(\mathbf{0.71}\) & \(0.62\) & \(0.55\) & \(0.46\) & \(0.48\) & \(0.44\) \\ \hline \multirow{2}{*}{Women Edu} & \multirow{2}{*}{DHS} & 10,000 & \(\mathbf{0.64}\) & \(0.61\) & \(0.58\) & \(0.55\) & \(0.55\) & \(0.54\) \\ & & 1,000 & \(\mathbf{0.60}\) & \(0.52\) & \(0.47\) & \(0.36\) & \(0.40\) & \(0.40\) \\ \hline \multirow{2}{*}{Sanitation} & \multirow{2}{*}{DHS} & 10,000 & \(\mathbf{0.65}\) & \(0.63\) & \(0.62\) & \(0.61\) & \(0.57\) & \(0.57\) \\ & & 1,000 & \(\mathbf{0.64}\) & \(0.51\) & \(0.49\) & \(0.43\) & \(0.42\) & \(0.43\) \\ \hline \multirow{2}{*}{Women BMI} & \multirow{2}{*}{DHS} & 10,000 & \(\mathbf{0.63}\) & \(0.63\) & \(0.60\) & \(0.59\) & \(0.59\) & \(0.58\) \\ & & 1,000 & \(\mathbf{0.58}\) & \(0.56\) & \(0.50\) & \(0.43\) & \(0.46\) & \(0.50\) \\ \hline \multirow{2}{*}{Population} & \multirow{2}{*}{USCB} & 1,000 & \(\mathbf{0.70}\) & \(0.50\) & \(0.34\) & \(0.24\) & \(0.27\) & \(0.21\) \\ & & 100 & \(\mathbf{0.62}\) & \(0.38\) & \(0.00\) & \(0.13\) & \(0.15\) & \(0.08\) \\ \hline \multirow{2}{*}{Mean Income} & \multirow{2}{*}{USCB} & 1,000 & \(\mathbf{0.55}\) & \(0.43\) & \(0.21\) & \(0.15\) & \(0.18\) & \(0.22\) \\ & & 100 & \(\mathbf{0.46}\) & \(0.29\) & \(0.01\) & \(0.03\) & \(0.04\) & \(0.10\) \\ \hline \multirow{2}{*}{Hispanic Ratio} & \multirow{2}{*}{USCB} & 1,000 & \(\mathbf{0.74}\) & \(0.65\) & \(0.55\) & \(0.50\) & \(0.56\) & \(0.57\) \\ & & 100 & \(\mathbf{0.64}\) & \(0.32\) & \(0.11\) & \(0.19\) & \(0.23\) & \(0.40\) \\ \hline \multirow{2}{*}{Home Value} & \multirow{2}{*}{Zillow} & 1,000 & \(\mathbf{0.87}\) & \(0.74\) & \(0.50\) & \(0.46\) & \(0.50\) & \(0.59\) \\ & & 100 & \(\mathbf{0.74}\) & \(0.43\) & \(0.00\) & \(0.13\) & \(0.27\) & \(0.40\) \\ \hline \hline \end{tabular} \end{table} Table 1: Pearson’s \(r^{2}\) for all models across all tasks and training sample sizes. Figure 2: Plots of absolute error comparing the best baselines and GPT-3.5 on tasks from each source with 1,000 samples. We also provide high-resolution plots from various locations around the world for the population density task from WorldPop. We show that GeoLLM not only outperforms baselines on a variety of tasks (table 1) but also demonstrates remarkable geographic consistency across the world. comprehensive representation of populations, we employed importance sampling, weighted by population. This allows us to capture a wide range of populations; without it, our samples would largely consist of sparsely populated areas. To construct a learning curve for this task, we use training sample sizes of 100, 1,000, and 10,000. DhsSustainBench (Yeh et al., 2021) includes tasks derived from survey data from the Demographic and Health Surveys (DHS) program (DHS, 2023). These surveys constitute nationally representative household-level data on assets, housing conditions, and education levels, among other attributes across 48 countries. 
While DHS surveys provide household-level data, SustainBench summarizes the survey data into "cluster-level" labels, where a "cluster" roughly corresponds to a village or local community. The geocoordinates are "jittered" randomly up to 2km for urban clusters and 10km for rural clusters to protect survey participant privacy. From their datasets, we use asset wealth index, women educational attainment, sanitation index, and women BMI. Since their datasets cover a wide scope, we use training sample sizes of 1,000 and 10,000.

**USCB.** The United States Census Bureau (USCB) (US, 2023) is responsible for producing data about the American people and economy. In addition to the decennial census, the Census Bureau continually conducts annual demographic surveys through the American Community Survey (ACS) program. From these two sources, we get tasks for population, mean income, and Hispanic to Non-Hispanic ratio by what they call ZIP Code Tabulation Areas. Since their data only covers the US, we use training sample sizes of 100 and 1,000.

**Zillow.** Zillow provides the Zillow Home Value Index (ZHVI) (ZR, 2023): a measure of the typical home value and market changes across a given region and housing type. It reflects the typical value for homes in the 35th to 65th percentile range. We use the smoothed, seasonally adjusted measure that they provide. Like with the USCB data, we also use this data on a ZIP code level. This serves as our home value task. Since this dataset also only covers the US, we use training sample sizes of 100 and 1,000.

When explicit coordinates are not given in the datasets, we approximate them by determining the center of relevant areas. For instance, we approximate the center of each ZIP code area so that we can generate prompts for the tasks from USCB or Zillow. We evaluate performance using the squared Pearson correlation coefficient \(r^{2}\) between the predictions and labels so that comparisons can be made with previous literature (Perez et al., 2017; Jean et al., 2016; Yeh et al., 2020; Yeh et al., 2021). We use 2,000 samples for our test and validation sets across all tasks. We split the data into training, test, and validation partitions early in the process, before sampling different sizes. We also explicitly check again for any overlap between the train and test samples before fine-tuning or training.

### Baselines

The purpose of our three baselines is to demonstrate the performance attainable using only the information in the prompt. Pre-trained language models can surpass the baselines only if they effectively use the knowledge embedded in their weights. This helps us understand the extent of knowledge encapsulated within pre-trained language models.

**k-NN.** Our simplest baseline is k-NN. It takes the mean of the 5 closest neighbors. This is a reasonable baseline as physical distance is very relevant for geospatial tasks.

**XGBoost.** Our second baseline uses XGBoost (Chen & Guestrin, 2016). This baseline incorporates all numerical and categorical data from the prompt, including coordinates, distances, and directions. It could outperform k-NN since it leverages not just the coordinates, but also the distance and direction data. We established this baseline believing that the direction and distance information could be covariates for some of the tasks we use.

**XGBoost-FT.** Our third baseline, XGBoost-FT, leverages both XGBoost and fastText (Joulin et al., 2016). This baseline uses all information contained in the prompt. In addition to using the coordinates, distances, and directions, it also uses embeddings generated from the address and the names of the nearby places. We use fastText to learn sentence embeddings on any given dataset using strings created by concatenating the address and the names of the nearby places. All baselines treat the tasks as a regression problem. Their output is rounded to the nearest bin (\(0.0\) to \(9.9\)) for classification. Any improvements over these baselines would likely come from knowledge about the address or list of places.
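As a concrete reference point for the baselines and the evaluation metric above, the following is a minimal sketch of the k-NN and XGBoost baselines with \(r^{2}\) scoring. The synthetic arrays stand in for features parsed from the prompts, Euclidean distance over raw coordinates is a simplification, and the XGBoost hyperparameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neighbors import KNeighborsRegressor
from xgboost import XGBRegressor

def r2(y_true, y_pred):
    # Squared Pearson correlation, the metric used across all tasks.
    return pearsonr(y_true, y_pred)[0] ** 2

def to_bin(pred):
    # Round regression outputs to the nearest 0.1 bin in [0.0, 9.9].
    return np.clip(np.round(pred, 1), 0.0, 9.9)

# Synthetic stand-ins for prompt-derived features:
# (lat, lon) for k-NN; coordinates plus distances/directions for XGBoost.
rng = np.random.default_rng(0)
X_tr = rng.uniform([-90, -180], [90, 180], size=(1000, 2))
X_te = rng.uniform([-90, -180], [90, 180], size=(200, 2))
y_tr, y_te = rng.uniform(0, 9.9, 1000), rng.uniform(0, 9.9, 200)

knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)   # mean of 5 nearest points
print("k-NN r^2:", r2(y_te, to_bin(knn.predict(X_te))))

xgb = XGBRegressor(n_estimators=300, max_depth=6).fit(X_tr, y_tr)  # illustrative settings
print("XGBoost r^2:", r2(y_te, to_bin(xgb.predict(X_te))))
```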
### Performance on Tasks

From the results presented in Table 1, it is evident that LLMs are the best-performing models. Among them, GPT-3.5 stands out as the best-performing LLM. Its Pearson's \(r^{2}\) ranges from 0.46 to 0.74, 0.55 to 0.87, and 0.63 to 0.78 for sample sizes of 100, 1,000, and 10,000, respectively. Improvements over the best baselines range from a Pearson's \(r^{2}\) of 0.24 to 0.47, 0.08 to 0.44, and 0.04 to 0.25 for those same sample sizes. While these variations indicate that the tasks differ in difficulty, LLMs generally outperform all the baselines. This suggests that LLMs possess a substantial amount of geospatial knowledge. Not only does GPT-3.5 outperform all other models on every test, but its performance is also relatively consistent across tasks and sample sizes. This shows that GPT-3.5 is resilient to the size of the areas of prediction (e.g., square kilometer vs. ZIP code area) and to any added noise (e.g., "jittered" coordinates). Llama 2 also does better than all baselines for 18 out of the 19 total tests and consistently does better than RoBERTa. RoBERTa consistently does better than all baselines with 10,000 training samples, but struggles at lower sample sizes. All models experience a significant drop in performance when the sample size is reduced to 100. However, GPT-3.5 and Llama 2 retain a much more acceptable level of performance compared to the others, emphasizing their sample efficiency. Specifically, with 100 samples, GPT-3.5 does 3.1 times better than the best baseline and Llama 2 does 1.8 times better. GPT-3.5 and Llama 2 do especially well with the population tasks from WorldPop and the USCB compared to the baselines. GPT-3.5 is also especially impressive with the home value task from Zillow, with a Pearson's \(r^{2}\) of up to 0.87. However, the difference in performance between the models is less pronounced for the tasks from the DHS. This might be due to the noise that is added when the coordinates for these tasks are "jittered" up to 5 kilometers. With the added noise, it is potentially more difficult to achieve a high Pearson's \(r^{2}\). As shown in Fig. 3, GPT-3.5, Llama 2, and RoBERTa perform 70%, 43%, and 13% better on average than the best baseline (XGBoost), respectively, with 1,000 samples, indicating that the method scales well. Fig. 4 again shows that the sample efficiencies of Llama 2 and GPT-3.5 are exceptional, especially when making predictions on a global scale. Note that with larger sample sizes the gaps in performance will decrease as the physical distances between the training coordinates and test coordinates become smaller. It is visually clear in Fig. 2 that GPT-3.5 does substantially better than the best baselines for each task. It also appears that GPT-3.5 generally only struggles where the baselines struggle, suggesting that those areas might just be difficult to estimate rather than it being a specific failing of the model.
Notably, GPT-3.5 shows impressive geographic consistency.

### Ablations on the Prompt

The ablation study shown in Table 2 demonstrates the importance of using a highly descriptive prompt that helps the model understand where the coordinates are located and efficiently access the knowledge contained in its weights. It also suggests that all components of the prompt contribute to the overall performance of the models. In addition to performing better than the other language models, GPT-3.5 is more resilient to the removal of the map data. This is evident from the fact that the prompt without map data is completely ineffective for Llama 2 and RoBERTa. This suggests that GPT-3.5 can more effectively access its knowledge.

## 5 Discussion

LLMs not only match but also exceed the performance of methods that use satellite imagery. Satellite-based methods achieve Pearson's \(r^{2}\) of 0.56 (Jean et al., 2016), 0.71 (Perez et al., 2017), and 0.67 (Yeh et al., 2020) in predicting asset wealth in Africa. With 10,000 training samples (approximately 3,700 of which are from Africa), a fraction of the over 87,000 data points available from the DHS, GPT-3.5 achieves an \(r^{2}\) of 0.72 when evaluated in Africa. The performance of the LLMs is particularly notable considering an inherent disadvantage they have relative to the baselines and RoBERTa: generating multiple tokens via softmax is far less direct than the regression objective the other models use. Despite this, there are substantial improvements in performance and sample efficiency with the LLMs. It also appears that the significant increase in performance is not simply due to the number of parameters being trained during fine-tuning. With QLoRA, Llama 2 has 33 million trainable parameters as opposed to RoBERTa's 125 million trainable parameters. This suggests that the performance increase is due to the knowledge contained in Llama 2's frozen weights rather than the parameters that are being trained. This method's performance appears to scale with both the size of the model's pretraining dataset and its capacity for data compression, a capability influenced by the model's size. As both of these factors generally enhance language models (Kaplan et al., 2020), our method likely scales with the general capability of the model as well. For example, GPT-4 (OpenAI, 2023c) would likely outperform GPT-3.5, which is especially apparent when prompting the models to describe an address as shown in fig. 0(a). Since LLMs are trained on internet data, they likely have biases similar to the ones present in this data. One could even potentially use our method to better understand the biases of LLMs and their training corpora. Surprisingly, the performance across countries and continents appears to be consistent with the LLMs, as evident from the maps of absolute errors shown in Fig. 2. This work lays the foundation for the future use of LLMs for geospatial tasks. Additional dimensions could extend the applicability of our method. For instance, one could augment prompts with names of amenities or roads to enrich the geospatial context and allow for even higher-resolution knowledge extraction. Temporal data such as dates could be integrated into the prompts for making forecasts. Multimodal LLMs could potentially also leverage satellite imagery or street view images. Overall, there are likely many ways to extend GeoLLM.
## 6 Conclusion In this work, we introduced GeoLLM, a novel method that can efficiently extract the vast geospatial knowledge contained in LLMs by fine-tuning them on prompts constructed with auxiliary map data from OpenStreetMap. We found that including information from nearby locations in the prompt greatly improves LLMs' performance. We demonstrated the utility of our approach across various tasks from multiple large-scale real-world datasets. Our method demonstrated a 70% improvement in Pearson's \(r^{2}\) over traditional baselines including k-NN and XGBoost, exceeding the performance of satellite-based methods. With GeoLLM, we observed that GPT-3.5 outperforms Llama 2 by 19% and RoBERTa by 51%, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset. Our simple method revealed that LLMs are sample-efficient, rich in geospatial information, and robust across the globe. Crucially, GeoLLM shows promise in substantially mitigating the limitations of traditional geospatial covariates.
2303.11131
Cocktail HuBERT: Generalized Self-Supervised Pre-training for Mixture and Single-Source Speech
Self-supervised learning leverages unlabeled data effectively, improving label efficiency and generalization to domains without labeled data. While recent work has studied generalization to more acoustic/linguistic domains, languages, and modalities, these investigations are limited to single-source speech with one primary speaker in the recording. This paper presents Cocktail HuBERT, a self-supervised learning framework that generalizes to mixture speech using a masked pseudo source separation objective. This objective encourages the model to identify the number of sources, separate and understand the context, and infer the content of masked regions represented as discovered units. Cocktail HuBERT outperforms state-of-the-art results with 69% lower WER on multi-speaker ASR, 31% lower DER on diarization, and is competitive on single- and multi-speaker tasks from SUPERB.
Maryam Fazel-Zarandi, Wei-Ning Hsu
2023-03-20T14:07:13Z
http://arxiv.org/abs/2303.11131v1
# Cocktail HuBERT: Generalized Self-Supervised Pre-training for Mixture and Single-Source Speech

###### Abstract

Self-supervised learning leverages unlabeled data effectively, improving label efficiency and generalization to domains without labeled data. While recent work has studied generalization to more acoustic/linguistic domains, languages, and modalities, these investigations are limited to single-source speech with one primary speaker in the recording. This paper presents Cocktail HuBERT, a self-supervised learning framework that generalizes to mixture speech using a masked pseudo source separation objective. This objective encourages the model to identify the number of sources, separate and understand the context, and infer the content of masked regions represented as discovered units. Cocktail HuBERT outperforms state-of-the-art results with \(69\%\) lower WER on multi-speaker ASR, \(31\%\) lower DER on diarization, and is competitive on single- and multi-speaker tasks from SUPERB.

Maryam Fazel-Zarandi and Wei-Ning Hsu, Meta AI - FAIR, {maryamfazel,wnhsu}@meta.com

Index Terms: Self-supervised pre-training, diarization, multi-speaker ASR, source separation, cocktail party, mixture speech

## 1 Introduction

Self-supervised learning (SSL) has greatly advanced speech processing over the past few years [1, 2, 3, 4, 5]. Supervised fine-tuning from a pre-trained model enjoys better label efficiency, achieving performance on par with supervised models using orders of magnitude less labeled data [6]. The representations learned with pre-trained models are more universal: in contrast to those from supervised learning [7, 8, 9], self-supervised representations benefit a wider range of tasks [10, 11]. Self-supervised representations also enable many novel applications, such as unsupervised speech recognition and synthesis [12, 13, 14], disentangled speech codecs [15], and text-free spoken language models and prompting [16, 17]. A key advantage of self-supervised pre-training is that it uses unlabeled data instead of labeled data, such that a model can be pre-trained on data covering more domains [18, 19, 20, 21]. Consequently, the fine-tuned model is more robust to domain shift, suffering milder degradation when evaluated on domains unseen during fine-tuning [18]. Generalizing this idea, recent work also extends self-supervised pre-training to multi-modal speech [22, 23] and demonstrates a multi-modal speech recognition system can be built with only labeled unimodal data. However, up until now, speech pre-training has been designed for single-source speech, which contains one primary speaker in each sample whereas other sources are assumed to be noise [5], leaving mixture speech unattended. Mixture speech, where multiple speakers may speak at the same time, occurs frequently in conversational scenarios. These scenarios pose greater challenges for applications common to single-source speech (e.g., recognizing the "target" speech from a mixture), and also generate applications specific to mixture speech, including speech diarization, source separation, multi-speaker speech recognition (transcribe everything) and more. Pre-trained models designed for single-source speech are likely to be sub-optimal for these tasks. In an effort to broaden the applicability of pre-trained models to wider speech varieties, this paper presents Cocktail HuBERT (C-HuBERT), a self-supervised framework that pre-trains on both single-source and mixture speech with a unified objective.
The pre-training objective can be summarized as _masked pseudo source separation_, which predicts automatically discovered units of randomly masked spans for each source in the mixture given unmasked speech context. The speech mixture contains one or more sources, created by artificially mixing single-source samples. To excel at this task, the model is required to perform three tasks jointly: source separation, acoustic modeling, and language modeling. Evaluation on multi-speaker automatic speech recognition (MS-ASR) and speech diarization (SD) verifies that the proposed objective is particularly effective for downstream tasks concerning mixture speech, outperforming state-of-the-art results by large margins (7.8% vs 24.9% WER on Libri2Mix MS-ASR, and 3.86% vs 5.62% DER on Libri(2+3)Mix SD). Evaluation on SUPERB [10] also shows strong performance on additional mixture tasks and slight-to-none degradation on single-source tasks.

## 2 Background

This paper is built upon Hidden Unit BERT (HuBERT) [3], one of the state-of-the-art speech pre-training frameworks. The pre-training objective of HuBERT is masked cluster prediction. Similar to BERT, it masks part of the speech input and predicts, given the context (the unmasked part), some label derived from the masked input. While the label used in BERT is the input token itself, HuBERT proposes to obtain discrete labels via clustering audio features and refines the labels iteratively. Concretely, let \(y\) be the waveform, \(x_{t}=f_{t}(y)\) be the HuBERT local feature extractor \(f\) (CNN) output at time \(t\), \(c_{t}^{l}=g_{t}^{l}(x)\) be the contextualized feature extractor \(g\) (\(L\)-layer Transformer) output at time \(t\) and layer \(l\), and \(z_{t}\) be the target unit at time \(t\). HuBERT (\(f\) and \(g\)) is pre-trained by predicting \(z_{t}\) from \(g_{t}^{l}(\text{MASK}(x))\) for time steps \(t\) that are masked, where \(\text{MASK}(\cdot)\) is an operation that randomly samples spans of 10 frames and replaces the features in those spans with a learned masked embedding [MSK] following wav2vec2.0 [6]. In the first iteration, \(z_{t}\) are obtained by clustering MFCC features. In the subsequent iterations, the latest iteration HuBERT representations \(c_{t}^{l}\) are used for clustering, which produces higher quality cluster assignments than those from raw or earlier iteration HuBERT features. Intuitively, HuBERT pre-training solves acoustic and language modeling tasks jointly, where the model needs to understand the content from observed regions (acoustic model) and then infer the label for the masked frames (language model).

## 3 Cocktail HuBERT

The "cocktail party problem" describes the setup where sounds from different sources are mixed prior to being perceived, such that estimation of individual sources given the mixture is required to perform downstream tasks. Human brains have an impressive ability to focus on a particular stimulus by leveraging structural properties of single sources. Researchers have also attempted to reproduce this capability and develop applications like source separation [24] and multi-speaker speech recognition [25]. In this paper, we present Cocktail HuBERT, a self-supervised learning objective that trains the model to tackle the cocktail party problem, as depicted in Figure 1(a). Conceptually, the model takes as input a masked mixture which contains one or more speech sources with spans of frames randomly masked, similar to HuBERT.
The model processes the masked mixture and predicts the discrete labels of each source for only the masked frames. The discrete labels can be viewed as automatically discovered frame-level phonetic transcripts. Thus, the pre-training task is analogous to pseudo source separation that predicts a proxy frame representation for each source.

### Model and Training Objective

Cocktail HuBERT adopts the same local and contextualized feature encoders (\(f\) and \(g\)) as HuBERT, but instead of having only one projection head to predict the posterior over units for a single source, Cocktail HuBERT has \(K\) projection heads to predict units for at most \(K\) sources. Let \(y_{mix}\) be a mixture containing up to \(K\) sources and \(z^{i}_{mix}\) for \(i\in[K]\) be the target unit sequence for source \(i\) (some correspond to silent sources). The model outputs \(K\) streams of predictions, where \(p^{j}_{t}(\cdot\mid g(\text{MASK}(f(y_{mix}))))\) denotes the \(j\)-th stream prediction at time step \(t\). The loss of the \(j\)-th stream prediction with respect to the \(i\)-th source unit sequence \(z^{i}_{mix}\) is: \[L^{j,i}_{m}=\sum_{t\in M}\log p^{j}_{t}(z^{i}_{mix,t}\mid g(\text{MASK}(f(y_{mix})))), \tag{1}\] where \(M\) denotes the masked time steps. Since the order of the model predictions \(j\) does not necessarily align with the order of the sources \(i\), a permutation invariant training (PIT) loss [26] is deployed, which finds the alignment with the minimum loss. Let \(\mathcal{P}\) be all permutations of \([K]\); the masked pseudo source separation objective is \[\frac{1}{K}\arg\min_{\pi\in\mathcal{P}}\sum_{j=1}^{K}L^{j,\pi(j)}_{m}. \tag{2}\]

### Mixture Simulation

For training our models, overlapped speech for a maximum of \(K\) speakers is simulated as follows (see Fig. 1(b)). A batch of \(B\) utterances where \(B\geq K\) is sampled from the dataset. For each utterance \(y\) and its units \(z\), the number of additional sources \(n\in\{0,\cdots,K-1\}\) is sampled with \(P(n=0)=1-p_{mix}\) and \(P(n=k)=p_{mix}/(K-1)\) for all \(k\neq 0\). An additional source is either an utterance from the batch, or a non-speech noise sampled from a noise dataset. The probability of selecting noise is \(p_{noise}\). For each additional source \(y^{k}_{extra}\), a tuple of length ratio, energy ratio, and offset \((r_{l},r_{e},o)\) is sampled from some uniform distributions, which are used to chunk, scale, and shift \(y^{k}_{extra}\) with respect to \(y\). Let \(\hat{y}^{k}_{extra}\) be the resulting source and \(\hat{z}^{k}_{extra}\) be the units chunked correspondingly if \(y^{k}_{extra}\) is not noise. The resulting mixture is \(y_{mix}=y+\sum_{k=1}^{n}\hat{y}^{k}_{extra}\). Note that each source is right-padded to the maximum length among \(y\) and \(y^{k}_{extra}\)\(\forall k\) with silence. A special token [SIL] is used for frames corresponding to padded silence (including the offset at the beginning). The first \(n+1\) unit sequences correspond to the [SIL]-padded \(z\) and \(\hat{z}^{k}_{extra}\) for non-noise \(k\). The remaining \(K-(n+1)\) sequences and those corresponding to noise samples are set to [SIL] sequences.
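As a concrete illustration of the masked pseudo source separation objective in Eqs. (1)-(2), the following is a minimal PyTorch sketch. The tensor shapes, the cross-entropy form of the per-stream loss, and the toy sizes are assumptions for illustration rather than the actual training code.

```python
import torch
import torch.nn.functional as F
from itertools import permutations

def pit_masked_loss(logits, targets, mask):
    """logits: (K, T, V) unit logits for the K prediction streams.
    targets: (K, T) target unit ids per source ([SIL]-padded).
    mask: (T,) boolean, True on masked frames.
    Returns the minimum average cross-entropy over all stream-to-source
    permutations, computed on the masked frames only (cf. Eqs. (1)-(2))."""
    K = logits.shape[0]
    ml = logits[:, mask]                      # (K, M, V)
    mt = targets[:, mask]                     # (K, M)
    # pair_loss[j, i]: loss of prediction stream j against source i.
    pair_loss = torch.stack([
        torch.stack([F.cross_entropy(ml[j], mt[i]) for i in range(K)])
        for j in range(K)])
    # Enumerating permutations is cheap for small K (e.g. K <= 5).
    return min(sum(pair_loss[j, pi[j]] for j in range(K)) / K
               for pi in permutations(range(K)))

# Toy example with random tensors (K streams, T frames, V cluster units).
K, T, V = 3, 40, 500
logits = torch.randn(K, T, V, requires_grad=True)
targets = torch.randint(0, V, (K, T))
mask = torch.rand(T) > 0.5
pit_masked_loss(logits, targets, mask).backward()
```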
## 4 Related Work

Cocktail HuBERT is most related to HuBERT [3] and WavLM [5], which are self-supervised speech pre-training frameworks based on masked cluster prediction. Similar to Cocktail HuBERT, WavLM also stochastically mixes single-source speech with noise and/or other single-source speech. However, WavLM is effectively HuBERT with data augmentation, which pre-trains the model using the same objective as HuBERT where only the units of the "primary" speech are predicted: the added noise and speech are both treated as noise and should be ignored by the model. Consequently, for the model to differentiate primary speech from interfering speech, WavLM deploys a more restrictive mixing strategy, where the interfering audio is at most half as long as the primary speech. On the other hand, Cocktail HuBERT and [24] also share similar training objectives. Inspired by the capability of converting units back to speech [15] and the connection between single/multi-speaker speech recognition and speech enhancement/separation, [24] proposes an alternative approach to speech enhancement and source separation by predicting units instead of spectral masks or waveforms. Comparing Cocktail HuBERT with [24], the former is designed for pre-training, which masks partial input and predicts units only for masked frames, requiring the model to perform language modeling. It is also evaluated on a wide range of downstream tasks including both mixture and single-source speech processing. In contrast, the latter is designed for particular downstream tasks (enhancement and separation) that predict units for all frames without masking the input. It is unclear how the resulting model performs on other tasks.

Figure 1: (a) C-HuBERT (\(K=2\)) predicts hidden units of the masked frames for each source in the input audio mix generated by k-means clustering. (b) Mixture simulation with \(K=3\), \(n=1\), \((r_{l},r_{e},o)=(0.75,2.0,640)\). Step i: sample the number of extra sources \(n\) and then sample additional sources \(z^{1,n}_{extra}\). Step ii: chunk, scale, and shift according to sampled \((r_{l},r_{e},o)\). \(e(y)\) denotes the energy of \(y\). Step iii: mix audio and pad target units with [SIL] for silent frames (last frame of \(z^{1}_{mix}\) and first two frames of \(z^{2}_{mix}\)) and silent streams (\(z^{3}_{mix}\)).

## 5 Experimental Setup

For unsupervised pre-training, we use 960 hours of LibriSpeech audio [27] for the Base model and 60k hours of Libri-light audio [28] for the Large model. We extract features from the 9th transformer layer of the HuBERT Base model for K-means with \(500\) clusters and use those labels to train the Cocktail HuBERT models. This ensures that we have high-quality labels. We apply data mixing to \(20\%\), \(60\%\), and \(100\%\) of the data, where the mixing noise probability is set to \(10\%\). The Cocktail Base and Large models are trained on \(32\) and \(128\) GPUs, respectively, for \(400\)k and \(800\)k steps. The batch sizes are at most \(87.5\) and \(56.25\) seconds of audio per GPU. The Adam [29] optimizer is used with \(\beta=(0.9,0.98)\), and the learning rate ramps up linearly from \(0\) to the peak learning rate for the first \(32\)k training steps, and then decays back to zero. The peak learning rates are 5e-4/1e-4 for Cocktail Base/Large models. \(K/p_{mix}\) are set to 5/1.0 for Base and 3/1.0 for Large unless otherwise specified. We evaluate our pre-trained models specifically on multi-speaker speech recognition and diarization tasks, as they involve multiple speakers. We use the LibriMix data [30], which contains multi-speaker overlapped speech simulated by mixing utterances from the LibriSpeech corpus. We focus on the two- and three-speaker scenarios, and on mixes generated with the "max mode" where the shortest utterance is padded to the longest one.
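To illustrate how mixtures of the kind described in Sec. 3.2 can be generated with mixing probabilities like the ones above, here is a simplified sketch. The samples-per-frame constant, the ranges for the length/energy ratios, the [SIL] id, and the assumption that waveforms align exactly with unit sequences are all illustrative choices, not the exact recipe used here.

```python
import numpy as np

FRAME = 320  # samples per discovered unit (20 ms at 16 kHz) -- an assumption

def simulate_mixture(wavs, units, noise_wavs, K=3, p_mix=1.0, p_noise=0.1,
                     sil_id=500, rng=None):
    """One training example following Sec. 3.2 (illustrative sketch).
    wavs/units: parallel lists of waveforms and frame-level unit ids for a
    sampled batch; element 0 is the primary utterance."""
    rng = rng or np.random.default_rng()
    y, z = wavs[0].astype(float), units[0].copy()
    # Number of extra sources: 0 w.p. 1 - p_mix, else uniform on {1,...,K-1}.
    n = int(rng.integers(1, K)) if (K > 1 and rng.random() < p_mix) else 0
    streams = [z]
    for _ in range(n):
        if rng.random() < p_noise:                       # non-speech noise
            src, src_units = noise_wavs[rng.integers(len(noise_wavs))], None
        else:                                            # another batch utterance
            j = int(rng.integers(1, len(wavs)))
            src, src_units = wavs[j], units[j]
        r_l, r_e = rng.uniform(0.3, 1.0), rng.uniform(0.5, 2.0)  # assumed ranges
        n_frames = max(1, min(int(r_l * len(z)), len(src) // FRAME))
        off = int(rng.integers(0, len(z) - n_frames + 1))        # offset in frames
        chunk = src[:n_frames * FRAME].astype(float)
        chunk = chunk * np.sqrt(r_e * np.sum(y**2) / (np.sum(chunk**2) + 1e-8))
        start = off * FRAME
        y[start:start + len(chunk)] += chunk             # mix the scaled, shifted source
        stream = np.full_like(z, sil_id)                 # [SIL] everywhere...
        if src_units is not None:                        # ...except under the chunk
            stream[off:off + n_frames] = src_units[:n_frames]
        streams.append(stream)
    while len(streams) < K:                              # remaining silent streams
        streams.append(np.full_like(z, sil_id))
    return y, np.stack(streams)

# Toy usage: 4 utterances of 50 frames each and one noise clip.
rng = np.random.default_rng(0)
wavs = [rng.standard_normal(FRAME * 50) for _ in range(4)]
units = [rng.integers(0, 500, 50) for _ in range(4)]
y_mix, z_mix = simulate_mixture(wavs, units, [rng.standard_normal(FRAME * 80)], rng=rng)
```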
For MS-ASR, we fine-tune our models on multi-speaker labeled data using the connectionist temporal classification (CTC) [31] loss for each (output stream, label stream) pair and compute the PIT-CTC loss, to fine-tune the whole model except for the local feature extractor. The projection layers are removed and replaced with a randomly initialized softmax layer for each stream. The CTC target vocabulary includes 26 English characters, a space token, an apostrophe, and a special CTC blank symbol. We fine-tune each model on 8 GPUs on the train-100-mix-clean subset of Libri2Mix for the 2-speaker scenario. The batch sizes per GPU are at most 200/80 seconds of audio for Base/Large models. We sweep over peak learning rate ([1e-5, 1e-4]) for each model size using the PIT word error rate (WER) on the dev_mix_clean subset as criterion. All other hyperparameters are based on [3], except that we set _freeze-step_ to zero. We use beam search decoding with a 4-gram language model and a beam size of \(500\). For SD, we use a similar setup to SUPERB [10], where we freeze the pre-trained model and weight-sum the representations from different \(g\) layers with learnable weights as the input to the diarization model. The diarization model uses a single layer 512-unit LSTM and is trained with the PIT loss. We train each model on 1 GPU on the train-100-mix-both subset of Libri2Mix and Libri3Mix, and a mixture of both datasets. We use a batch size of 8, train for 30k steps, and sweep over learning rate ([1e-2, 1e-4]) for each model using accuracy on the dev_mix_both subset of each dataset as criterion for model selection. For the evaluation metric, we use the diarization error rate (DER) [32]. We also compare our models to other strong pre-trained models on a subset of SUPERB tasks [10, 11], including Phoneme Recognition (PR), Automatic Speech Recognition (ASR), Keyword Spotting (KS), Query by Example Spoken Term Detection (QbE), Intent Classification (IC), Slot Filling (SF), Emotion Recognition (ER), Speech Enhancement (SE), and Speech Separation (SS) following the protocol. Overall score is computed following [5]. ## 6 Results ### Multi-speaker and Single-speaker ASR We first evaluate Cocktail HuBERT models on multi-speaker ASR and compare them to three state-of-the-art supervised baselines: (1) the end-to-end ASR model trained with PIT-CTC [34], (2) the end-to-end ASR model trained with the extended Graph-based temporal classification [25] loss (GTC-e), and (3) the Conditional-Conformer-CTC model [33] that generates CTC predictions conditioning on past CTC predictions for other streams. To understand how pre-training objectives affect the multi-speaker ASR performance, we also fine-tune HuBERT Base and Large, which have the identical model architecture as C-HuBERT models, as our self-supervised baselines. Results are reported in Table 1. First, we observe that both Cocktail HuBERT Base and Large significantly outperform the baselines, with Large reducing the WER by 69% relative (24.9% \(\rightarrow\) 7.8%). More importantly, there is a considerable gap between the HuBERT and Cocktail HuBERT performance (37.2% vs 13.7% for Base, and 35.2% vs 7.8% for Large), validating that the proposed masked pseudo source separation objective brings significant gain to the multi-speaker downstream task. We further investigate the performance of our MS-ASR models on single speaker input by comparing with models fine-tuned for single-speaker ASR (Table 2). 
Overall, we see degradation in performance across all settings when using MS-ASR models to transcribe single-speaker utterances. However, compared to HuBERT, C-HuBERT models are more robust to variation in the number of speakers: the WER of Large increases by 1.2% and 4.1% on test-clean and test-other when fine-tuned on Libri2Mix instead of LS-10h, lower than the 5.1%/9.1% WER increases for HuBERT. \begin{table} \begin{tabular}{l|c|c|c} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**LM**} & \multicolumn{2}{c}{**Libri2Mix**} \\ & & **dev-mix** & **test-mix** \\ \hline PIT-CTC [25] & Transformer & \(24.0\) & \(26.3\) \\ GTC-e [25] & Transformer Transformer & \(32.7\) & \(33.7\) \\ Cond-Conformer-CTC [33] & greedy & \(24.5\) & \(24.9\) \\ \hline HuBERT Base & 4-gram & \(35.8\) & \(37.2\) \\ HuBERT Large & 4-gram & \(34.4\) & \(35.2\) \\ \hline C-HuBERT Base & 4-gram & \(12.7\) & \(13.7\) \\ C-HuBERT Large & 4-gram & **6.6** & **7.8** \\ \hline \end{tabular} \end{table} Table 1: WERs of multi-speaker ASR models trained on Libri2Mix and tested on dev-mix-clean (dev-mix) and test-mix-clean (test-mix). C-HuBERT Base \(p_{mix}=0.6\). ### Speech Diarization Table 4 shows results on speech diarization for two-, three-, and a mix of two- and three-speaker datasets. When compared against HuBERT and WavLM Base and Large models, the best DERs across the three settings are attained using C-HuBERT models. In fact, the C-HuBERT Base model outperforms the WavLM Large model on Libri2Mix, Libri3Mix, and Libri(2+3)Mix by \(14\%\), \(23\%\), and \(30\%\) relative, respectively. These are impressive gains since the Base model is considerably smaller than the WavLM Large model and was pre-trained on fewer hours of data. Performance on all test sets are further improved when scaling to the Large model. ### Superb We compare C-HuBERT with several state-of-the-art SSL models on the SUPERB tasks (Table 3). C-HuBERT shows strong performance on speech enhancement and source separation, which are closer to the pre-training task of C-HuBERT. It lags behind other models on single-speaker tasks such as PR and ASR. However, the performance can be improved and the gap can be reduced by simply scaling C-HuBERT (on PR, 12% PER reduction (1 - 5.41 / 6.14) between HuBERT and C-HuBERT for BASE and 7% gap for LARGE). ### Ablation Studies We study the effect of mixing probability \(p_{mix}\) and max number of speakers \(K\) on speech diarization (SD), multi-speaker and single-speaker ASR (MS-ASR and ASR) with the C-HuBERT Base model (Table 5). On speech diarization, more aggressive mixing (larger \(K\) and higher \(p_{mix}\)) leads to better results. The trend reverses on single-speaker speech recognition in general. Nevertheless, we observe that C-HuBERT outperforms HuBERT on single-speaker ASR for some configurations (e.g., \(K=5\) and \(p=0.2\) yields 3.9%/9.3% compared to 4.1%/9.4% from HuBERT in Table 1). We report results on both single- and multi-speaker test sets when fine-tuning C-HuBERT on multi-speaker data (columns below MS-ASR). Overall, \(p_{mix}=0.2\) leads to the worst multi-speaker test results (23.5% to 26.3%), which are still better than those from HuBERT (35.8%). The best multi-speaker test result is obtained with \(K=5\) and \(p_{mix}=0.6\). The results on single-speaker test sets (d-c, d-o) are interesting -- with \(K=2\), single-speaker and multi-speaker WERs are negatively correlated, while with \(K=5\) they are positively correlated. 
We believe the observation arises from the interaction of two factors: how mismatched pre-training and fine-tuning are, and how mismatched pre-training and testing are. ## 7 Conclusion This paper presents Cocktail HuBERT, a large-scale pre-trained model with the objective of masked pseudo source separation. C-HuBERT extends the HuBERT framework to multiple speakers, enabling the models to outperform the state-of-the-art models on MS-ASR and SD tasks, while achieving competitive performance on other SUPERB single-speaker and multi-speaker tasks.
2310.11009
LPFormer: An Adaptive Graph Transformer for Link Prediction
Link prediction is a common task on graph-structured data that has seen applications in a variety of domains. Classically, hand-crafted heuristics were used for this task. Heuristic measures are chosen such that they correlate well with the underlying factors related to link formation. In recent years, a new class of methods has emerged that combines the advantages of message-passing neural networks (MPNN) and heuristics methods. These methods perform predictions by using the output of an MPNN in conjunction with a "pairwise encoding" that captures the relationship between nodes in the candidate link. They have been shown to achieve strong performance on numerous datasets. However, current pairwise encodings often contain a strong inductive bias, using the same underlying factors to classify all links. This limits the ability of existing methods to learn how to properly classify a variety of different links that may form from different factors. To address this limitation, we propose a new method, LPFormer, which attempts to adaptively learn the pairwise encodings for each link. LPFormer models the link factors via an attention module that learns the pairwise encoding that exists between nodes by modeling multiple factors integral to link prediction. Extensive experiments demonstrate that LPFormer can achieve SOTA performance on numerous datasets while maintaining efficiency. The code is available at https://github.com/HarryShomer/LPFormer.
Harry Shomer, Yao Ma, Haitao Mao, Juanhui Li, Bo Wu, Jiliang Tang
2023-10-17T05:36:46Z
http://arxiv.org/abs/2310.11009v4
# Adaptive Pairwise Encodings for Link Prediction

###### Abstract.

Link prediction is a common task on graph-structured data that has seen applications in a variety of domains. Classically, hand-crafted heuristics were used for this task. Heuristic measures are chosen such that they correlate well with the underlying factors related to link formation. In recent years, a new class of methods has emerged that combines the advantages of message-passing neural networks (MPNN) and heuristics methods. These methods perform predictions by using the output of an MPNN in conjunction with a "pairwise encoding" that captures the relationship between nodes in the candidate link. They have been shown to achieve strong performance on numerous datasets. However, current pairwise encodings often contain a strong inductive bias, using the same underlying factors to classify all links. This limits the ability of existing methods to learn how to properly classify a variety of different links that may form from different factors. To address this limitation, we propose a new method, LPFormer, which attempts to adaptively learn the pairwise encodings for each link. LPFormer models the link factors via an attention module that learns the pairwise encoding that exists between nodes by modeling multiple factors integral to link prediction. Extensive experiments demonstrate that LPFormer can achieve SOTA performance on numerous datasets while maintaining efficiency.

Keywords: link prediction, graph transformer
## 1. Introduction

Traditional MPNNs have been demonstrated to be poor link predictors due to their limited capability to learn effective and expressive link representations (Kang et al., 2018; Wang et al., 2019). To address this issue, recent efforts (Wang et al., 2019; Wang et al., 2019) have attempted to move beyond the node-centric view of traditional MPNNs by equipping them with pairwise information specific to the link being predicted (i.e., the "target link"). This is done by customizing the message passing process to each target link. For example, SEAL (Wang et al., 2019) first extracts the k-hop subgraph around each target link and then initializes the nodes with features that describe their relationship with the target link. Message passing is correspondingly run on this subgraph, with the pooled node representations being used for prediction. It has been shown that SEAL, along with a related method NBFNet (Wang et al., 2019), can learn many LP heuristics such as common neighbors (Wang et al., 2019) and Katz (Katz, 2019). However, a concern with this approach is that it can be prohibitively expensive (Bahdan et al., 2019), as message passing needs to be run for each individual target link. This is as opposed to traditional MPNNs, which only run message passing once for all target links. To overcome these inefficiencies, recent methods (Bahdan et al., 2019; Wang et al., 2019; Wang et al., 2019) have instead explored ways to inject pairwise information into the model, without individualizing the message passing to each target link. This is done by decoupling the message passing and link-specific pairwise information. By doing so, the message passing only needs to be done once for all target links.
To include pairwise information, these methods, which we refer to as "Decoupled Pairwise MPNNs" (DP-MPNNs), instead learn a "pairwise encoding" to encode the pairwise relationship of the target link. The choice of pairwise encoding is often based on heuristics that correspond to common LP factors (e.g., common neighbors). DP-MPNNs have gained attention as they can achieve promising performance while being much more efficient than methods that customize the message passing mechanism. However, DP-MPNNs are often limited in the choice of pairwise encoding, using a one-size-fits-all solution for all target links. This has two important limitations. **(1)** The pairwise encoding may fail to consider some integral LP factors. For example, NCNC (Wang et al., 2019) only considers the 1-hop neighborhood when computing the pairwise encoding, thereby ignoring the global structural information. _This suggests the need for a pairwise encoding that considers multiple types of LP factors. **(2)** The pairwise encoding uses the same LP factors for all target links. This assumes that all target links need the same factors. However, this may not necessarily be true. Recently, Mao et al. (2019) have shown that different LP factors are necessary to classify different target links. This tells us that even for the same dataset, multiple LP factors are needed to properly predict all target links. This further applies to different datasets, where certain factors are more prominent than others. As such, it is vital to be careful when considering multiple types of LP factors. While one factor may effectively model some target links, it will fail for other target links where those patterns aren't present. _It is therefore desired to consider different LP factor(s) for different target links._ These observations motivate us to ask - _can we design an efficient method that can adaptively determine which LP factors to incorporate for each individual target link?_ This requires a pairwise encoding that (a) models multiple LP factors, (b) can be tailored to fit each individual target link, and (c) is efficient to calculate. By doing so, we can flexibly adapt the pairwise information based on the existing needs of each target link. To achieve this, we propose **LPFormer** - **Link Prediction TransFormer**. LPFormer is a type of graph Transformer (Kang et al., 2018) designed specifically for link prediction. Given a target link \((a,b)\), LPFormer models the pairwise encoding via an attention module that learns how \(a\) and \(b\) relate in the context of various LP factors. This allows for a more customizable set of pairwise encodings that are specific to each target link. Extensive experiments validate that LPFormer can achieve SOTA on a variety of benchmark datasets. We further demonstrate that LPFormer is better at modeling several types of LP factors, highlighting its adaptability, while also maintaining efficiency on denser graphs. ## 2. Background ### Related Work Link prediction (LP) aims to model how links are formed in a graph. The process by which links are formed, i.e., link formation, is often governed by a set of underlying factors (Kang et al., 2018; Wang et al., 2019). We refer to these as "LP factors". Two categories of methods are used for modeling these factors - heuristics and MPNNs. We describe each class of methods. We further include a discussion on existing graph transformers. **Heuristics for Link Prediction**. 
Heuristics methods (Wang et al., 2019; Wang et al., 2019) attempt to explicitly model the LP factors via hand-crafted measures. Recently, Mao et al. (2019) have shown that there are three main factors that correlate with the existence of a link: (1) local structural information, (2) global structural information, and (3) feature proximity. **Local structural information** only considers the immediate neighborhood of the target link. Representative methods include Common Neighbors (CN) (Wang et al., 2019), Adamic Adar (AA) (Ashman et al., 2016), and Resource Allocation (RA) (Wang et al., 2019). They are predicated on the assumption that nodes that share a greater number of neighbors exhibit a higher probability of forming connections. **Global structural information** further considers the global structure of the graph. Such methods include Katz (Katz, 2019) and Personalized PageRank (PPR) (Bahdan et al., 2019). These methods posit that nodes interconnected by a higher number of paths are deemed to have larger similarity and, therefore, are more likely to form connections. Lastly, **feature proximity** assumes nodes with more similar features connect (Wang et al., 2019). Previous work (Wang et al., 2019; Wang et al., 2019) has shown that leveraging the node features is helpful in predicting links. We further note that Mao et al. (2019) have recently shown that _to properly predict a wide variety of links, it's integral to incorporate all three of these factors_.

**MPNNs for Link Prediction**. Message Passing Neural Networks (MPNNs) (Wang et al., 2019) aim to learn node representations via the message passing mechanism. Traditional MPNNs have been used for LP, including GCN (Wang et al., 2019), SAGE (Kang et al., 2018), and GAE (Ezawa et al., 2019). However, they have been shown to be suboptimal for LP as they aren't expressive enough to capture important pairwise patterns (Kang et al., 2018; Wang et al., 2019). SEAL (Wang et al., 2019) and NBFNet (Wang et al., 2019) try to address this by customizing the message passing process to each target link. This allows the message passing to learn pairwise information specific to the target link. However, these methods have been shown to be unduly expensive as they require a separate round of message passing for each target link. As such, recent methods have been proposed to instead decouple the message passing and pairwise information (Bahdan et al., 2019; Wang et al., 2019; Wang et al., 2019), reducing the time needed to do message passing. Such methods include NCN/NCNC (Nochel et al., 2017), which exploit the common neighbor information, and BUDDY (Bumumford et al., 2017) and Neo-GNN (Nochel et al., 2018), which consider the global structural information.

**Graph Transformers.** Recent work has attempted to extend the original Transformer (Srivastava et al., 2017) architecture to graph-structured data. Graphormer (Srivastava et al., 2017) learns node representations by attending all nodes to each other. To properly model the structural information, they propose to use multiple types of structural encodings (i.e., structural, centrality, and edge). SAN (Srivastava et al., 2017) further considers the use of the Laplacian positional encodings (LPEs) to enhance the learnt structural information. Alternatively, TokenGT (Krishnan et al., 2017) considers all nodes and edges as tokens in the sequence when performing attention. Due to the large complexity of these models, they are unable to scale to larger graphs. To address this, several graph transformers have been proposed that improve efficiency on large graphs.
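To make the classical local heuristics discussed above concrete, the following is a minimal sketch of Common Neighbors, Adamic-Adar, and Resource Allocation for a candidate link over a sparse adjacency matrix; the toy graph is purely illustrative.

```python
import numpy as np
import scipy.sparse as sp

def local_heuristics(A, u, v):
    """CN, AA, and RA scores for a candidate link (u, v),
    given a symmetric binary adjacency matrix A in CSR format."""
    nu = set(A.indices[A.indptr[u]:A.indptr[u + 1]])   # neighbors of u
    nv = set(A.indices[A.indptr[v]:A.indptr[v + 1]])   # neighbors of v
    common = nu & nv
    deg = np.asarray(A.sum(axis=1)).ravel()
    cn = len(common)
    aa = sum(1.0 / np.log(deg[w]) for w in common if deg[w] > 1)
    ra = sum(1.0 / deg[w] for w in common)
    return cn, aa, ra

# Toy graph: a 4-cycle (0-1-2-3-0) with a chord between 0 and 2.
rows = [0, 1, 1, 2, 2, 3, 3, 0, 0, 2]
cols = [1, 0, 2, 1, 3, 2, 0, 3, 2, 0]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))
print(local_heuristics(A, 1, 3))  # nodes 1 and 3 share neighbors 0 and 2
```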
### Modeling Pairwise Encodings via Attention

In Section 3.1, we introduced a general formulation for pairwise encodings in Eq. (2), which is able to capture a variety of different LP factors. In this subsection, we further aim to move beyond a one-size-fits-all pairwise encoding, and enable the model to produce customized pairwise encodings for each target link. This allows the model to handle more realistic graphs that often contain multiple prominent LP factors, as shown in (Kang et al., 2018). In particular, we consider the following question: _How can we model Eq. (2) such that it can customize the used LP factors to each target link?_ We consider parameterizing both \(w(a,b,u)\) and \(h(a,b,u)\). This allows us to learn how to personalize them to each target link. To achieve this, we leverage softmax attention (Beng et al., 2017). This is due to its ability to dynamically learn the relevance of different nodes to the target link. As such, for multiple target links, it can emphasize the contributions of different nodes, thereby flexibly modeling different LP factors. We note that since the attention is between different sequences (i.e., a target link and nodes), it can be considered a form of cross attention (Zhu et al., 2017). To enhance the adaptability of the pairwise encoding for various links, it is essential to incorporate various types of information. This allows the attention mechanism to discern and prioritize relevant information for each target link, facilitating the effective modeling of diverse LP factors. In particular, we consider two types of information. The first is the **feature information**. This includes the feature representation of both nodes in the target link and the node being attended to. The node features are included due to their role in link formation and relationship to structural information (Zhu et al., 2017). Second, we consider the **relative positional information**. The relative positional information reflects the relative position in the graph of a node \(u\) to the target link \((a,b)\) in the local and global structural context. Due to the importance of local and global structural information (Zhu et al., 2017; Wang et al., 2018), it is vital to properly encode both. By including both the structural and feature information, we are able to cover the space of potential LP factors (see Section 2.1). We denote the feature representation of a node \(u\) as \(\mathbf{h}_{u}\) and the relative positional encoding (RPE) as \(\mathbf{rpe}_{(a,b,u)}\). The node importance \(w(a,b,u)\) is modeled via attention as follows: \[\tilde{w}(a,b,u)=\phi\left(\mathbf{h}_{a},\mathbf{h}_{b},\mathbf{h}_{u},\ \mathbf{rpe}_{(a,b,u)}\right),\] \[w(a,b,u)=\frac{\exp(\tilde{w}(a,b,u))}{\sum_{v\in\bar{\mathcal{V}}(a,b)}\exp(\tilde{w}(a,b,v))}, \tag{4}\] where \(\bar{\mathcal{V}}(a,b)=\mathcal{V}\setminus\{a,b\}\). The attention weight \(w(a,b,u)\) can be considered as the impact of a node \(u\) on \((a,b)\) relative to all nodes in \(\mathcal{G}\). This allows the model to emphasize different LP factors for each target link. The node encoding \(h(a,b,u)\) includes the features of node \(u\) in conjunction with the RPE and is defined as: \[h(a,b,u)=\mathbf{W}\left[\mathbf{h}_{u}\ \middle|\ \mathbf{rpe}_{(a,b,u)}\right]. \tag{5}\] By substituting Eq. (4) and Eq. (5) into Eq. (2) we can compute the pairwise information \(s(a,b)\). We further define \(\phi(\cdot)\) in Eq.
(4) as the GATv2 (Beng et al., 2017) attention mechanism. The feature representations \(\mathbf{h}_{i}\) are computed via a MPNN. We use GCN (Golovolov et al., 2012) in this work. However, it is unclear how to properly encode the RPE of a node \(u\) relative to \((a,b)\), \(\mathbf{rpe}_{(a,b,u)}\). We aim to design the RPE to capture both the local and global structural relationship between the node and target link while also being efficient to calculate. In the next section, we discuss our solution for modeling \(\mathbf{rpe}_{(a,b,u)}\). Figure 2. An overview of LPFormer. 1) Encode the nodes via a MPNN. 2) For a given target link, we determine which nodes to attend to (\(V_{(a,b)}\)) via the PPR-based thresholding technique in Eq. (9). 3) The pairwise encoding is computed by attending to each node, \(u\in V_{(a,b)}\) using the feature and relative positional encoding \(\mathbf{rpe}_{(a,b,u)}\). 4) The pairwise encoding, node representations, and counts of different node types are concatenated and used to compute the final probability of the target link existing. ### PPR-Based Relative Positional Encodings In this section, we introduce our strategy for computing the RPE of a node \(u\) relative to a target link \((a,b)\). Intuitively, we want the RPE to reflect the positional relationship between \(u\) and \((a,b)\) such that different types of information (i.e., local vs. global) are encoded differently. Using Figure 1 as an example, since node 3 is a CN of (source, 5) we expect it to have a much different relationship to the target link than node 6, which is a 2-hop neighbor of both nodes. An enticing option is to use the double radius node labeling (DRNL) trick introduced by Zhang and Chen (Zhang and Chen, 2019). However, Chamberlain et al. (Chamberlain et al., 2019) have shown it to be prohibitively expensive to calculate for larger graphs. Furthermore, existing RPEs are typically infeasible to calculate on larger graphs as they often rely on pairwise distances or the eigenvectors of the Laplacian (Zhou et al., 2019). As such, we seek an RPE that can both distinguish the relationship of different nodes to the target link while also being efficient to calculate. To motivate our RPE design, we draw inspiration from the following Proposition. **Proposition 1**.: _Consider a target link \((a,b)\) and a node \(u\in\mathcal{V}\setminus\{a,b\}\). The PPR (Chamberlain et al., 2019) score of a root node \(i\) and target node \(j\) with teleportation probability \(\alpha\) is denoted by \(\text{ppr}(i,j)\). Let \(r_{a}^{k}(u)\) be the number of walks of length \(k\) that begin at node \(a\) and terminate at \(u\). We define \(r_{a,b}^{k}(u)\coloneqq r_{a}^{k}(u)+r_{b}^{k}(u)\). We also define a weight \(\gamma^{k}\coloneqq\alpha(1-\alpha)^{k}\) for all walks of length \(k\). The PPR scores, \(\text{ppr}(a,u)\) and \(\text{ppr}(b,u)\), along with the aggregate number of walks of disparate lengths, are interconnected through the following relationship._ \[\Gamma(a,b,u)=\text{ppr}(a,u)+\text{ppr}(b,u)=\sum_{k=0}^{\infty}\alpha(1- \alpha)^{k}r_{a,b}^{k}(u). \tag{6}\] The detailed proof is given in Appendix C. From Proposition 1, we can make the following observations: (1) The PPR scores encode the weighted sum of the number of walks between two nodes. (2) Walks of shorter length are given higher importance, as evidenced by the dampening factor \(\gamma^{k}=\alpha(1-\alpha)^{k}\) which decays with the increase in \(k\). 
These observations imply that a **larger value of \(\Gamma(a,b,u)\) correlates with the existence of many shorter walks connecting node \(u\) to both nodes in the target link \((a,b)\)**. These observations suggest that the PPR score can be used as an intuitive and useful method to understand the structural relationship between node \(u\) and both nodes in the target link \((a,b)\). If both scores, \(\text{ppr}(a,u)\) and \(\text{ppr}(b,u)\), are high, there exist many shorter walks connecting \(u\) to both nodes in the target link. This implies that node \(u\) has a stronger impact on the nodes in the target link. On the other hand, if both PPR scores are low, there is likely very little relationship between \(u\) and the target link. This allows for a convenient way of differentiating how a node structurally relates to the target link. Furthermore, we note that the PPR matrix can be efficiently pre-computed using the algorithm introduced by Andersen et al. (Andersen et al., 2019), allowing for easy computation and use. Following this idea, to calculate the RPE of a node \(u\), we use the PPR scores of the node \(u\) relative to both nodes in the target link \((a,b)\). Instead of considering the sum of PPR scores as in Proposition 1, we further parameterize \(\Gamma(\cdot)\) via an MLP, \[\text{rpe}_{(a,b,u)}=\text{MLP}\left(\text{ppr}(a,u),\text{ppr}(b,u)\right). \tag{7}\] By introducing learnable parameters to \(\Gamma(\cdot)\), we allow the model to learn the importance of individual PPR scores and how they interact with each other. To ensure that Eq. (7) is invariant to the order of the nodes in the target link, i.e., \((a,b)\) and \((b,a)\), we further set the RPE to be equal to the summation of the representations given by both \((a,b)\) and \((b,a)\): \[\text{rpe}_{(a,b,u)}=\text{rpe}_{(a,b,u)}+\text{rpe}_{(b,a,u)}. \tag{8}\] However, a concern with Eq. (8) is that it is not guaranteed to be able to distinguish certain types of nodes from each other. For example, it is necessary to clearly distinguish CNs from other nodes due to their important role in link formation (Zhou et al., 2019). To overcome this issue, we fit three separate MLPs for when \(u\) is: a CN of \((a,b)\), a 1-hop neighbor of either \(a\) or \(b\), and a \(>\)1-hop neighbor of both \(a\) and \(b\). This ensures that we can properly distinguish between these three types of nodes. We verify the effectiveness of this design in Section 4.4.

### Efficiently Attending to the Graph Context

The proposed attention mechanism in Section 3.2 attends to all nodes in the graph, sans those in the link itself. This makes it difficult to scale to large graphs. Motivated by selective (Zhou et al., 2019) and sparse (Zhou et al., 2019) attention, we opt to attend to only a small portion of the nodes. _How do we determine which nodes to attend to?_ Existing methods like NCNC only attend to the enclosed 1-hop neighborhood of the target link. However, this has the potential to ignore information vital to the link. Depending on the type of graph and link, nodes outside the 1-hop radius of the target link may be crucial. On the other hand, SEAL uses the enclosed k-hop neighborhood. Due to the exponential increase in nodes as \(k\) increases, using \(k>1\) on larger graphs is infeasible. We therefore desire the flexibility of being able to attend to important non-local nodes while avoiding the inclusion of too many nodes.
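Before turning to how the attended nodes are selected, the PPR-based RPE of Eqs. (7)-(8) above can be sketched as follows. The hidden width, the two-layer MLPs, and the encoding of node types as integer ids are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class PPRRelPosEncoding(nn.Module):
    """Sketch of the PPR-based RPE: a small MLP over (ppr(a,u), ppr(b,u)),
    with a separate MLP per node type (CN / 1-hop / >1-hop) and
    symmetrization over (a,b) and (b,a) as in Eq. (8)."""
    def __init__(self, dim=64):
        super().__init__()
        self.dim = dim
        self.mlps = nn.ModuleList([
            nn.Sequential(nn.Linear(2, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(3)  # 0: CN, 1: 1-hop, 2: >1-hop
        ])

    def forward(self, ppr_au, ppr_bu, node_type):
        # ppr_au, ppr_bu: (N,) PPR scores of each attended node w.r.t. a and b.
        # node_type: (N,) long tensor with values in {0, 1, 2}.
        rpe = torch.zeros(ppr_au.size(0), self.dim)
        for t, mlp in enumerate(self.mlps):
            idx = node_type == t
            if idx.any():
                ab = torch.stack([ppr_au[idx], ppr_bu[idx]], dim=-1)
                ba = torch.stack([ppr_bu[idx], ppr_au[idx]], dim=-1)
                rpe[idx] = mlp(ab) + mlp(ba)   # order-invariant in (a, b)
        return rpe

enc = PPRRelPosEncoding()
rpe = enc(torch.rand(5), torch.rand(5), torch.tensor([0, 1, 1, 2, 2]))
```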
### Efficiently Attending to the Graph Context The proposed attention mechanism in Section 3.2 attends to all nodes in the graph, sans those in the link itself. This makes it difficult to scale to large graphs. Motivated by selective (Zhou et al., 2019) and sparse (Zhou et al., 2019) attention, we opt to attend to only a small portion of the nodes. _How do we determine which nodes to attend to?_ Existing methods like NCNC only attend to the enclosed 1-hop neighborhood of the target link. However, this has the potential to ignore information vital to the link. Depending on the type of graph and link, nodes outside the 1-hop radius of the target link may be crucial. On the other hand, SEAL uses the enclosed k-hop neighborhood. Due to the exponential increase in nodes as \(k\) increases, using \(k>1\) on larger graphs is infeasible. We therefore desire the flexibility of being able to attend to important non-local nodes while avoiding the inclusion of too many nodes. In Proposition 1, we demonstrate that the PPR score between the node pair \((a,u)\) can be interpreted as the weighted number of walks from \(a\) to \(u\). As such, a higher PPR score indicates that more walks of shorter length tend to connect the pair of nodes, indicating a stronger structural relationship. This concept is used to formulate the RPE in Section 3.3. However, _what if the PPR score of node \(u\) is low relative to both nodes in the target link?_ Since the PPR score can be considered as the influence of node \(u\) on some root node (Chamberlain et al., 2019), a low score suggests that \(u\) has little structural relationship to both nodes in the target link. This implies that node \(u\) likely has little impact on the pairwise information between both nodes in the target link. Following this philosophy, for a target link \((a,b)\), we filter all nodes \(u\in\mathcal{V}\setminus\{a,b\}\) using the PPR scores of \(u\) relative to both nodes in the target link. Specifically, we remove all nodes whose PPR scores are below a threshold \(\eta\) relative to both nodes in the target link. As such, we only keep a node \(u\) if it is related in some capacity to at least one of the nodes in the target link. Similarly to Section 3.3, we treat CN, 1-Hop, and \(>\)1-Hop nodes differently by applying a different threshold to each. The filtered node set for each category of nodes is given by: \[\hat{\mathcal{N}}_{(a,b)}^{\pi}=\{u\in\mathcal{N}_{(a,b)}^{\pi}\mid\text{ppr}(a,u)\geq\eta^{\pi}\ \text{or}\ \text{ppr}(b,u)\geq\eta^{\pi}\}, \tag{9}\] where \(\hat{\mathcal{N}}^{\pi}_{(a,b)}\) is the filtered node set for all nodes of the type \(\pi\in\{\text{CN},1\text{--Hop},>1\text{--Hop}\}\) and \(\eta^{\pi}\) is the corresponding PPR threshold. We corroborate this design in Section 4 by demonstrating that LPFormer can achieve SOTA performance in LP while achieving a faster runtime than the second-best method, NCNC (Narayanan et al., 2017), on denser graphs. This is despite the fact that LPFormer can attend to a wider variety of nodes. We further show in Section 4.5 that filtering \(>\)1-Hop nodes can actually help improve performance. We conjecture that this is because excessive global information can inject too much noise into the pairwise encoding. ### LPFormer We now define the overall framework, LPFormer. The overall procedure is given in Figure 2: **(1)** We first learn node representations from the input adjacency and node features via an MPNN. We note that this step is agnostic to the target link. **(2)** For a target link \((a,b)\) we extract the nodes to attend to, i.e. \(V_{(a,b)}\). This is done via the PPR thresholding technique defined in Section 3.4. **(3)** We apply \(L\) layers of attention, using the mechanism defined in Section 3.2. The output is the pairwise encoding \(s(a,b)\). **(4)** We generate the prediction of the target link using three types of information: the element-wise product of the node representations, the pairwise encoding, and the number of CN, 1-Hop, and \(>\)1-Hop nodes identified by Eq. (9). The score function is given by: \[p(a,b)=\sigma\left(\text{MLP}\left(\mathbf{h}_{a}\odot\mathbf{h}_{b}\ \middle\|\ s(a,b)\ \middle\|\ \left|\hat{\mathcal{N}}^{\text{CN}}_{(a,b)}\right|\ \middle\|\ \left|\hat{\mathcal{N}}^{1\text{-Hop}}_{(a,b)}\right|\ \middle\|\ \left|\hat{\mathcal{N}}^{>1\text{-Hop}}_{(a,b)}\right|\right)\right), \tag{10}\] where \(\|\) denotes concatenation.
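Putting the pieces together, a minimal PyTorch-style sketch of the scoring head in Eq. (10) is given below; the dimensions and module names are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class LinkScorer(nn.Module):
    """Sketch of Eq. (10): score a target link from the element-wise product of
    its node representations, the pairwise encoding s(a,b), and the node counts."""
    def __init__(self, node_dim: int, pair_dim: int, hidden: int = 128):
        super().__init__()
        # +3 input features for the counts |CN|, |1-Hop|, |>1-Hop|.
        self.mlp = nn.Sequential(
            nn.Linear(node_dim + pair_dim + 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, h_a, h_b, s_ab, counts):
        x = torch.cat([h_a * h_b, s_ab, counts], dim=-1)   # concatenation in Eq. (10)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)

scorer = LinkScorer(node_dim=64, pair_dim=32)
h_a, h_b = torch.randn(8, 64), torch.randn(8, 64)   # MPNN node representations
s_ab = torch.randn(8, 32)                            # pairwise encodings from the attention
counts = torch.randint(0, 10, (8, 3)).float()        # counts of CN / 1-Hop / >1-Hop nodes
p = scorer(h_a, h_b, s_ab, counts)                   # link probabilities in (0, 1)
```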
We demonstrate in Table 3 that the inclusion of the node counts is helpful, as it provides complementary information to the pairwise encoding. ## 4. Experiments In this section, we conduct extensive experiments to validate the effectiveness of LPFormer. Specifically, we attempt to answer the following questions: **(RQ1)** Can LPFormer consistently outperform baseline methods on a variety of different benchmark datasets? **(RQ2)** Is LPFormer able to model a variety of different LP factors? **(RQ3)** Can LPFormer be run efficiently on large dense graphs? We further conduct studies ablating each component of our model and analyzing the effect of the PPR-based threshold on performance. ### Experimental Settings **Datasets.** We include Cora, Citeseer, and Pubmed (Citeseer et al., 2017), as well as ogbl-collab, ogbl-ppa, ogbl-ddi, and ogbl-citation2 (Kang et al., 2018). Furthermore, for Cora, Citeseer, and Pubmed we experiment under a single fixed split (see Appendix C.1 for further discussion). The detailed statistics for each dataset are shown in Table 1. **Baseline Models.** We compare LPFormer against a wide variety of baselines including: CN (Zhu et al., 2017), AA (Citeseer et al., 2017), RA (Zhu et al., 2017), GCN (Zhu et al., 2017), SAGE (Kang et al., 2018), GAE (Zhu et al., 2017), SEAL (Zhu et al., 2017), NBFNet (Zhu et al., 2017), Neo-GNN (Zhu et al., 2017), BUDDY (Beng et al., 2018), and NCNC (Narayanan et al., 2017). Results on Cora, Citeseer, and Pubmed are taken from Li et al. (Li et al., 2017). Results for the heuristic methods are from Hu et al. (Hu et al., 2017). All other results are either from their respective study or Chamberlain et al. (Chamberlain et al., 2018). **Evaluation Metrics.** Each positive target link is evaluated against a set of given negative links. The rank of the positive link among the negatives is used to evaluate performance. The two metrics used to evaluate this ranking are Hits@K and MRR. For the OGB datasets we use the metric used in the original study. This includes Hits@50 for ogbl-collab, Hits@100 for ogbl-ppa, and MRR for ogbl-citation2. For Cora, Citeseer, and Pubmed we follow Li et al. (Li et al., 2017) and use MRR. Lastly, we note that the same set of negative links is used for all positive links except on ogbl-citation2, where (Kang et al., 2018) provides a customized set of 1000 negatives for each individual positive link. Additional experimental settings can be found in Appendix D.1. ### Main Results We present the results of LPFormer compared with baselines on multiple benchmark datasets. Note that we omit ogbl-ddi from the main results due to recent issues discovered by Li et al. (Li et al., 2017) (see Appendix C.2 for more details). The results are shown in Table 2. We observe that LPFormer can achieve SOTA performance on 5/6 datasets, significantly outperforming other baselines. Moreover, LPFormer is also the most consistent of all the methods, achieving strong performance on all datasets. This is in contrast to previous SOTA methods, NCNC and BUDDY, which tend to struggle on Cora and Pubmed.
We attribute the consistency of LPFormer to the flexibility of our model, allowing it to customize the LP factors needed to each link and dataset. ### Performance by LP Factor In this section, we measure the ability of different methods to capture a variety of different LP factors. To measure this, we identify all positive target links **when there is only one dominant LP factor**. For example, one group would contain all target links where the only dominant factor is the local structural information. We focus on links that correspond to one of the three groups identified in (Zhu et al., 2017): local structural information, global structural information, and feature proximity. We identify these groups by using popular heuristics as proxies for each factor. For local structural information we use CNs (Zhu et al., 2017), for global structural information we use PPR (Beng et al., 2018) as it's the most computationally efficient of all global methods, and for feature proximity we use the cosine similarity of the features. Using these heuristics, we determine if only one factor is dominant by comparing the relative score of each heuristic. This is done by first computing the \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & Cora & Citeseer & Pubmed & ogbl-collab & ogbl-ddi & ogbl-ppa & ogbl-citation2 \\ \hline \#Nodes & 2,708 & 3,327 & 18,717 & 235,868 & 4,267 & 576,289 & 2,927,963 \\ \#Edges & 5,278 & 4,676 & 44,327 & 1,285,465 & 1,334,889 & 30,326,273 & 30,561,187 \\ Split Ratio & 85/5/10 & 85/5/10 & 85/5/10 & 92/4/4 & 80/10/10 & 70/20/10 & 98/1/1 \\ \hline \hline \end{tabular} \end{table} Table 1. Dataset statistics. The split ratio is the % of samples for train/validation/test. score for each factor \(i\) and target link \((a,b)\), \(s^{i}(a,b)\). For each factor, we then compute the score corresponding to the \(p\)-th percentile among all links, \(\hat{s}^{i}\). We choose a larger value of \(p\) (i.e. 90%) such that a score \(\geq\hat{s}^{i}\) indicates that a significant amount of pairwise information exists for that factor. For a single target link, we then compare the score of each factor \(s^{i}(a,b)\) to \(\hat{s}^{i}\). If \(s^{i}(a,b)\geq\hat{s}^{i}\) is true **for only one factor**, this implies that the score for only one factor is "high". Therefore there is a notable amount of pairwise information existing for only one factor for the link \((a,b)\). This ensures that only one factor is strongly expressed. If this is true, we then assign the target link \((a,b)\) to factor \(i\). Please see Appendix D.3 for a more detailed explanation. We demonstrate the results on Cora, Citeseer, and ogbl-collab in Figure 3. We observe that LPFormer typically performs best for each individual LP factor on all datasets. Furthermore, it is also the most consistently well-performing on each factor as compared to other methods. For example, on Cora the other methods struggle for links that correspond to the feature proximity factor. LPFormer, on the other hand, is able to significantly outperform them on those target links, performing around 33% better than the second best method. Lastly, we note that most methods tend to perform well on the links corresponding to the global factor, even if they don't explicitly model such information. This is caused by a strong correlation that tends to exist between local and global structural information, often resulting in considerable overlap between both factors (Zhu et al., 2017). 
These results show that LPFormer can indeed adapt to multiple types of LP factors, as it can consistently perform well on samples belonging to a variety of different LP factors. Additional results are given in Appendix D.4. ### Ablation Study We further include an ablation study to verify the effectiveness of the proposed components in LPFormer. In particular, we introduce 6 variants of LPFormer. (a) **w/o Learnable Att**: No attention is learned. As such, we set all attention weights to 1 and remove the RPE. (b) **w/o Features in Att**: We remove the node feature information from the attention mechanism. (c) **w/o RPE in Att**: We remove the RPE from the attention mechanism. (d) **w/o PPR RPE**: We replace the PPR-based RPE with a learnable embedding for each of CN, 1-Hop, and \(>\)1-Hop nodes. (e) **w/o PPR RPE by Node Type**: We don't fit a separate function for each node type when determining the PPR RPE (see Section 3.3). Instead we use one for all nodes. (f) **w/o Counts**: We remove the counts of different nodes from the scoring function. The results are shown in Table 3. We include ogbl-collab, Pubmed, and Citeseer. We observe that ablating each component tends to decrease the performance. However, the magnitude of the decrease is dataset-dependent. For example, on ogbl-collab, ablating the feature information in the attention does not affect the performance. However, on Pubmed and Citeseer, removing the feature information results in a large decrease in performance. We further verify the importance of splitting the nodes into different categories (i.e., CN, 1-Hop, \(>\)1-Hop) when computing the RPE. When they aren't split and only one function is used to encode the PPR scores, the performance drops on each dataset. ### Effect of PPR Threshold We examine the effect of the PPR threshold for including nodes \(>\)1 hops away from either node in the target link (\(\eta^{>1}\) in Eq. (9)). Results for Cora, Citeseer, and Pubmed are in Figure 4. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & **Cora** & **Citeseer** & **Pubmed** & **ogbl-collab** & **ogbl-ppa** & **ogbl-citation2** \\ \hline Metric & MRR & MRR & MRR & H@50 & H@100 & MRR \\ \hline **CN** & 20.99 \(\pm\)0.00 & 28.34 \(\pm\)0.00 & 14.02 \(\pm\)0.00 & 56.44 \(\pm\)0.00 & 27.65 \(\pm\)0.00 & 51.47 \(\pm\)0.00 \\ **AA** & 31.87 \(\pm\)0.00 & 29.37 \(\pm\)0.00 & 16.66 \(\pm\)0.00 & 64.35 \(\pm\)0.00 & 32.45 \(\pm\)0.00 & 51.89 \(\pm\)0.00 \\ **RA** & 30.79 \(\pm\)0.00 & 27.61 \(\pm\)0.00 & 15.63 \(\pm\)0.00 & 64.00 \(\pm\)0.00 & 49.33 \(\pm\)0.00 & 51.98 \(\pm\)0.00 \\ \hline **GCN** & 32.50 \(\pm\)6.87 & 50.01 \(\pm\)6.04 & 19.94 \(\pm\)4.24 & 44.75 \(\pm\)1.07 & 18.67 \(\pm\)1.32 & 84.74 \(\pm\)0.21 \\ **SAGE** & **37.83** \(\pm\)7.75 & 47.84 \(\pm\)6.39 & 22.74 \(\pm\)5.47 & 48.10 \(\pm\)0.81 & 16.55 \(\pm\)2.40 & 82.60 \(\pm\)0.36 \\ **GAE** & 29.98 \(\pm\)3.21 & 63.33 \(\pm\)3.14 & 16.67 \(\pm\)0.19 & OOM & OOM & OOM \\ \hline **SEAL** & 26.69 \(\pm\)5.89 & 39.36 \(\pm\)4.99 & 38.06 \(\pm\)5.18 & 64.74 \(\pm\)0.43 & 48.80 \(\pm\)3.16 & 87.67 \(\pm\)0.32 \\ **NBFNet** & 37.69 \(\pm\)3.97 & 38.17 \(\pm\)3.06 & 44.73 \(\pm\)2.12 & OOM & OOM & OOM \\ \hline **Neo-GNN** & 22.65 \(\pm\)2.60 & 53.97 \(\pm\)5.88 & 31.45 \(\pm\)3.17 & 57.52 \(\pm\)0.37 & 49.13 \(\pm\)0.60 & 87.26 \(\pm\)0.84 \\ **BUDDY** & 26.40 \(\pm\)4.40 & 59.48 \(\pm\)8.96 & 23.98 \(\pm\)5.11 & 65.94 \(\pm\)0.58 & 49.85 \(\pm\)0.20 & 87.56 \(\pm\)0.11 \\ **NCNC** & 29.01 \(\pm\)3.83 & **64.03** \(\pm\)3.67 & 25.70 \(\pm\)4.48 & **66.61** \(\pm\)0.71 & **61.42** \(\pm\)0.73 & **89.12** \(\pm\)0.40 \\ \hline **LPFormer** & **39.42** \(\pm\)5.78 & **65.42** \(\pm\)4.65 & **40.17** \(\pm\)1.92 & **68.03** \(\pm\)0.46 & **63.32** \(\pm\)0.63 & **89.81** \(\pm\)0.13 \\ \hline \hline \end{tabular} \end{table} Table 2. Results on benchmark datasets. OOM is an out of memory error. Colored are the results ranked first, second, and third.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **ogbl-collab** & **Pubmed** & **Citeseer** \\ \hline w/o Learnable Att & 65.05\(\pm\)0.50 & 27.72\(\pm\)7.44 & 56.23\(\pm\)1.75 \\ w/o Features in Att & **68.04\(\pm\)0.79** & 36.12\(\pm\)4.32 & 53.40\(\pm\)9.30 \\ w/o RPE in Att & 65.26\(\pm\)0.56 & 26.51\(\pm\)3.67 & 56.70\(\pm\)3.79 \\ w/o PPR RPE & 67.09\(\pm\)0.51 & 28.52\(\pm\)1.18 & 51.96\(\pm\)15.2 \\ w/o PPR RPE by Node Type & 67.95\(\pm\)0.54 & 33.80\(\pm\)4.24 & 57.40\(\pm\)5.71 \\ w/o Counts & 67.75\(\pm\)0.41 & 33.12\(\pm\)10.7 & 54.39\(\pm\)5.30 \\ \hline LPFormer & **68.03\(\pm\)0.46** & **40.17\(\pm\)1.92** & **65.42** \(\pm\)4.65 \\ \hline \hline \end{tabular} \end{table} Table 3. Ablation Study on LPFormer.

We have multiple observations. The first is that not all datasets benefit from the inclusion of global information. Citeseer achieves the best performance when filtering all nodes \(>\)1 hop away. The second is that even when global information is helpful, it is not necessarily optimal to include all nodes \(>\)1-Hop away. This is because many nodes \(>\)1-Hop away likely have little impact on the target link, thereby introducing noise into the attention. As such, we don't want to consider those nodes when computing the pairwise encoding. These observations underscore the importance of filtering which nodes to attend to irrespective of efficiency, as different sets of information may be needed to model each dataset effectively.
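Relatedly, the per-type filtering rule of Eq. (9) that governs which nodes enter the attention can be sketched in a few lines; the threshold values and names below are illustrative, and the keep-rule follows the "at least one endpoint clears the threshold" reading described in Section 3.4.

```python
import numpy as np

def filter_candidates(ppr_a, ppr_b, node_type, thresholds):
    """Sketch of Eq. (9): keep a candidate node if its PPR score w.r.t. at least
    one endpoint of the target link clears the type-specific threshold."""
    eta = thresholds[node_type]
    keep = (ppr_a >= eta) | (ppr_b >= eta)
    return np.nonzero(keep)[0]

# Illustrative thresholds; in practice they are tuned per dataset and node type.
thresholds = {"cn": 0.0, "one_hop": 1e-4, "multi_hop": 1e-2}
ppr_a = np.array([0.30, 0.002, 1e-5, 0.05])   # PPR of 4 candidates w.r.t. endpoint a
ppr_b = np.array([0.25, 1e-6, 1e-5, 0.04])    # PPR of the same candidates w.r.t. b
print(filter_candidates(ppr_a, ppr_b, "multi_hop", thresholds))   # -> [0 3]
```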
### Runtime Analysis In this section, we compare the runtime of LPFormer against NCNC, which is the strongest-performing baseline. To obtain a fair comparison we use the same batch size, hidden dimension size, and number of message-passing layers on each dataset for both methods. The results are shown in Figure 5 on all four OGB datasets 1. We further include the mean degree of each dataset in parentheses. We observe that LPFormer shines on denser datasets, taking significantly less time to train one epoch. This is despite the fact that LPFormer can attend to nodes beyond the 1-hop radius of the target link. This underscores the importance of the PPR thresholding technique introduced in Section 3.4, as it allows for efficient attention to a wider variety of nodes.

Footnote 1: We note that we include ogbl-ddi simply as a means of comparing the runtime of our methods. We omit it from the main results. See Appendix C.2 for more discussion.

Figure 3. Performance for target links when there is only one LP factor strongly expressed. Results are on (a) Cora, (b) Citeseer, and (c) ogbl-collab.

Figure 4. Performance when varying the PPR threshold for \(>\)1-Hop nodes.

Figure 5. Runtime Comparison of training time of 1 epoch between LPFormer and NCNC. The mean degree is in parentheses.

## 5. Conclusion In this paper, we introduce a new framework, LPFormer, that aims to integrate a wider variety of pairwise information for link prediction. LPFormer does this via a specially designed graph transformer, which considers how the nodes in a pair relate to each other in the context of the graph. It further works in an adaptive manner, allowing the type of pairwise information to be customized to each target link. Extensive experiments demonstrate that LPFormer can achieve SOTA performance on a wide variety of benchmark datasets while retaining efficiency. We further demonstrate LPFormer's strength at modeling multiple types of LP factors. For future work, we plan on exploring other methods of incorporating multiple LP factors with an emphasis on global structural information. We also plan on evaluating our method on the new, harder evaluation setting proposed by [27].
2310.18993
Big pictures of motivic and classical homotopy theories
Motivic homotopy theory is meant to play the role of algebraic topology, in particular homotopy theory, in the context of algebraic geometry. As proved by Oliver Rondigs and Paul Arne Ostvaer, this theory is closely connected to Voevodsky's triangulated category of motives. A connection that is the motivic analogue of the connection between algebraic topology and homological algebra. In this paper, we try to understand the big picture of motivic homotopy theory and its connection to Voevodsky's motives by comparison to the classical counterpart.
Ahmad Rouintan
2023-10-29T12:30:02Z
http://arxiv.org/abs/2310.18993v2
# Big pictures of motivic and classical homotopy theories ###### Abstract. Vladimir Voevodsky and Fabien Morel introduced the motivic homotopy theory in the late 90s, which is a theory meant to play the role of algebraic topology, in particular homotopy theory, in the context of algebraic geometry. This theory, as proved by Rondigs and Ostvcar, is connected with Voevodsky's triangulated category of motives. This connection is the analogue of the connection between algebraic topology and homological algebra. In this paper, we want to understand the big picture of motivic homotopy theory and its connection to motives by comparison to the classical counterpart. ###### Contents * 1 The picture in classical homotopy theory * 1.1 Reduced cohomology theories * 1.2 Homotopy category of spaces * 1.3 Stable homotopy category of spaces * 1.4 Reduced (co)homology theories as spectra * 1.5 Chain complexes of abelian groups * 1.6 The link between spaces and chain complexes of abelian groups * 2 The picture in motivic homotopy theory: Part 1 * 2.1 Homotopy category of sites with an interval * 2.2 Motivic unstable homotopy category * 2.3 Spheres, homotopy sheaves, and suspensions * 2.4 Motivic stable homotopy category * 2.5 Cohomology theories * 3 The picture in motivic homotopy theory: Part 2 * 3.1 Motivic spaces with transfers * 3.2 Unstable homotopy category of motivic spaces with transfers * 3.3 Stable homotopy category of motivic spaces with transfers 3.4 The link to Voevodsky's triangulated category of motives * 4 An introduction to the conjectural motivic \(t\)-structure * 4.1 Definition of \(t\)-structure and a fundamental example * 4.2 The conjectural motivic \(t\)-structure ## Introduction This paper is an expository account of the state of the art around the motivic homotopy theory. The aim is to understand the connections between different parts of this theory and how they connect to Voevodsky's motives. Our approach is to first understand the big picture of classical homotopy theory, connecting algebraic topology to homological algebra. Then, having this picture in mind, study the motivic setting analogously. There are exceptional expository papers available on motivic homotopy theory, such as [15] and [16], both by Marc Levine. This defies the need for writing a new expository paper, but in our case, we will go one step further when discussing the connection with Voevodsky's motives. A similar paper has been written by Charles A. Weibel [27], which is a brilliant and short exposition of the topic. Our aim is to cover almost the same topics, as well as analogues in the classical homotopy, but, go into more detail. Because we have only concerned ourselves with the big picture, there will not be a lot of information regarding finer results in each of its parts. For such information, interested readers should visit [15], [16], and [17]. Here is how this paper is organized: 1. In section SS1, we will discuss the big picture of classical homotopy theory. It will include a picture of what is happening in algebraic topology, homological algebra, and the connection between the two. 2. In section SS2, we will discuss the first part of the big picture of motivic homotopy theory, which is analogous to the picture of homotopy theory in algebraic topology. 3. In section SS3, we will complete our picture of motivic homotopy theory by discussing the parts analogous to homological algebra, and its connection to the first part of the picture. 
This way we connect the motivic stable homotopy category to Voevodsky's triangulated category of motives. 4. In section SS4, we will briefly discuss the conjectural motivic \(t\)-structure, which builds a way from Voevodsky's motives to the conjectural category of mixed motives. ## 1. The picture in classical homotopy theory In this section, we will give an overview of Classical Homotopy Theory. We will start by stating the axiomatic definition of a reduced cohomology theory and use it to find categories and functors shaping our picture. This backward approach is not how Homotopy Theory was built the first time, but will serve our goal of understanding the picture perfectly. After that, we will take similar steps in Homological Algebra, and then, connect the two pictures together. The resulting big picture will play the role of our map in the next sections. For the material in this section to be completely precise, we need to first choose a convenient category of spaces to work with. Two such choices leading to equivalent homotopy theories would be 1. the category of compactly generated Hausdorff topological space, denoted by \(\mathbf{Top}\), and 2. the category of simplicial sets, denoted by \(s\mathbf{Set}\). From now on, we will avoid discussions regarding convenient categories of spaces as it will not serve our goal, and either of these choices will be used when needed. For this section, I got help from the videos of Peter Arndt's mini-course on Abstract and Motivic Homotopy Theory at the University of Verona as well as the papers [15] and [16]. In the end, I combined the information from these three sources to write this section. ### Reduced cohomology theories **Definition 1.1**.: A _reduced cohomology theory_ is a family of functors \[\{E^{n}:\mathbf{Top}_{\bullet}^{\mathrm{op}}\to\mathbf{Ab}\}_{n=0}^{\infty},\] from non-degenerately pointed topological spaces to abelian groups, satisfying the following axioms. 1. For every weak equivalence \(f:X\to Y\) of topological spaces i.e. a map that induces isomorphisms on all homotopy groups, the induced map \[f^{n}:E^{n}(Y)\to E^{n}(X)\] is an isomorphism for each \(n\geq 0\). 2. For every topological space \(X\) and \(n\geq 0\), there is a natural isomorphism \[\Sigma:E^{n}(X)\to E^{n+1}(\Sigma X).\] Recall that the suspension of a space \(X\), denoted by \(\Sigma X\), is its smash product with \(S^{1}\). 3. It turns the wedge of spaces into the product of their cohomology groups i.e. \[E^{*}(\bigvee_{i\in I}X_{i})\cong\prod_{i\in I}E^{*}(X_{i}).\] 4. For every cofibration \(A\to X\), it induces the following exact sequence: \[E^{*}(X/A)\to E^{*}(X)\to E^{*}(A).\] We will start to build our picture focusing on the axioms mentioned above. ### Homotopy category of spaces The first axiom in definition 1.1 states that reduced cohomology theories see weak equivalences as isomorphisms. Therefore, if we first turn weak equivalences into isomorphisms, and do that in a universal way, this axiom would become trivial. This needs localization with respect to the weak equivalences, which can be done by a model structure on our category of spaces. However, this is not a problem as the usual definitions of weak equivalences and Serre fibrations satisfy the axioms of model categories. So, we get _the pointed (unstable) homotopy category of spaces_, denoted by \(\mathcal{H}_{\bullet}\). 
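As a quick illustration of the suspension axiom (2) in Definition 1.1, consider reduced singular cohomology on spheres, where the suspension isomorphism reads \[\tilde{H}^{n}(S^{k};\mathbb{Z})\;\cong\;\tilde{H}^{n+1}(\Sigma S^{k};\mathbb{Z})=\tilde{H}^{n+1}(S^{k+1};\mathbb{Z}),\qquad\tilde{H}^{n}(S^{k};\mathbb{Z})\cong\begin{cases}\mathbb{Z}&n=k,\\ 0&n\neq k.\end{cases}\]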
After this step, our picture of classical homotopy looks like the following: (1) _Remark 1.2_.: As we mentioned earlier, another choice of a convenient category of spaces is the category of simplicial sets. The usual definitions of weak equivalences and Kan fibrations turn it into a model category, and not only is the resulting homotopy category equivalent to the one for topological spaces, but the two model categories are also Quillen equivalent. This equivalence is given by _the geometric realization functor_ and _the singular complex functor_. Therefore, from now on, we will use either of these homotopy categories when needed. ### Stable homotopy category of spaces For cohomology theories to be stable with respect to suspension we need the suspension functor to be an auto-equivalence. One way to do that is to use the Spanier-Whitehead category of spaces. The idea, naively, is to consider the category of pointed spaces and \[\operatorname{Mor}_{\mathrm{SW}}(X,Y):=\operatorname{colim}_{q}[\Sigma^{q}X, \Sigma^{q}Y],\] where \(X\) and \(Y\) are pointed spaces and by \([-,-]\) we mean the homotopy classes of continuous maps. Although this category is triangulated with suspension playing the role of the shift functor, it lacks some of the properties we need. For example, it does not have infinite coproducts, and hence cannot be turned into a model category. Also, the canonical functor from \(\mathcal{H}_{\bullet}\) to this category does not preserve already existing coproducts. The reason why we care so much about infinite coproducts is that not having them means that the Brown representability theorem would not hold, a theorem that plays an important role later on. The alternative approach we use to stabilize with respect to suspension is to use the category of spectra. **Definition 1.3**.: A _spectrum_ is a sequence of pointed spaces \(E=(E_{0},E_{1},...)\) together with the bonding maps \(\epsilon_{n}:\Sigma E_{n}\to E_{n+1}\). When the adjoint maps \(E_{n}\to\Omega E_{n+1}\) of the bonding maps are weak equivalences, we call our spectrum an \(\Omega\)_-spectrum_. A map of spectra \(f:E\to F\) is a family of maps \(\{f_{n}:E_{n}\to F_{n}\}_{n\geq 0}\) that respect the bonding maps, i.e. the diagram (2) is commutative for each integer \(n\geq 0\). We denote the category of spectra by \(\mathbf{Spt}\). From a pointed space \(X\), we construct a spectrum \(\Sigma^{\infty}X=(X,\Sigma X,\Sigma^{2}X,...)\), the suspension spectrum of \(X\), with identity bonding maps. The \(\Sigma^{\infty}\) functor is a covariant functor from the category of spaces to the category of spectra. It has a right adjoint, denoted by \(\Omega^{\infty}\), that sends a spectrum to its zeroth term and maps of spectra to the maps between their zeroth terms. Now that we have introduced the category of spectra, we want to turn it into a model category and construct its homotopy category. The first step is to define weak equivalences of spectra. But in order to do so, let's go back to a famous question in homotopy theory: _the stable homotopy groups of spheres_. For integers \(n>k+1\), the Freudenthal suspension theorem gives an isomorphism \[\pi_{n+k}(S^{n})\cong\pi_{n+k+1}(\Sigma S^{n})=\pi_{n+k+1}(S^{n+1}).\] So, we can see that the homotopy groups of spheres become stable under suspension when the dimension is large enough. Based on this, we can define the \(k\)-th stable homotopy group of spheres as the colimit \[\pi_{k}^{s}:=\operatorname{colim}_{n}\pi_{n+k}(S^{n}).\] This leads us to define stable homotopy groups of spectra.
**Definition 1.4**.: The \(n\)_-th stable homotopy group of a spectrum_ \(E\) is defined to be the colimit \[\pi_{n}(E):=\operatorname{colim}_{k}\pi_{n+k}(E_{k}),\] where the maps are given by \[\pi_{n+k}(E_{k})\xrightarrow{\Sigma}\pi_{n+k+1}(\Sigma E_{k})\xrightarrow{\epsilon_{k*}}\pi_{n+k+1}(E_{k+1}).\] **Example 1.5**.: The \(n\)-th stable homotopy group of spheres is just the \(n\)-th stable homotopy group of the spectrum \(\mathbb{S}=(S^{0},S^{1},S^{2},...)\) with the obvious bonding maps. More generally, for a pointed space \(X\), its \(n\)-th stable homotopy group is \[\pi_{n}^{s}(X)=\pi_{n}(\Sigma^{\infty}X)=\operatorname{colim}_{k}\pi_{n+k}(\Sigma^{k}X)=\operatorname{colim}_{k}\pi_{n}(\Omega^{k}(\Sigma^{k}X)),\] where \(\Omega\) is the loop-space functor. Having defined stable homotopy groups of spectra, we naturally define stable weak equivalences. **Definition 1.6**.: A _stable weak equivalence_ is a map of spectra that induces isomorphisms on all stable homotopy groups. There is a nice model structure for spectra, where weak equivalences are stable weak equivalences and cofibrations are those maps of spectra \(f:E\to F\) where each \(f_{n}\) is a cofibration in the unstable sense. This allows us to localize this category with respect to the stable weak equivalences and get _the stable homotopy category_, denoted by \(\mathcal{SH}\). Recall that the objects of this homotopy category are bifibrant spectra, which are just \(\Omega\)_-spectra_. The pair of adjoint functors \((\Sigma^{\infty}\dashv\Omega^{\infty})\) induces a pair of adjoint functors between \(\mathcal{H}_{\bullet}\) and \(\mathcal{SH}\) by taking derived functors. The functor \(\Sigma^{\infty}\) preserves weak equivalences. Also, using this functor, we embed \(\mathcal{H}_{\bullet}\) in \(\mathbf{Spt}\), where suspension, which has now become a shift functor, is an auto-equivalence with the obvious inverse, namely the shift-back functor, and this makes \(\mathcal{SH}\) into a triangulated category. After passing to the stable homotopy category, the first two axioms become trivial, and our picture of classical homotopy theory looks like the following: (3) ### Reduced (co)homology theories as spectra In this subsection, we discuss the connection between spectra and reduced (co)homology theories. **Definition 1.7**.: Let \(E\) and \(F\) be two spectra. The \(n\)-th \(E\)_-homology_ of \(F\) is defined to be \[E_{n}(F):=\operatorname{Hom}_{\mathcal{SH}}(\Sigma^{n}\mathbb{S},E\wedge F),\] and the \(n\)-th \(E\)_-cohomology_ of \(F\) is defined to be \[E^{n}(F):=\operatorname{Hom}_{\mathcal{SH}}(F,\Sigma^{n}E).\] For a pointed space \(X\), its \(E\)-homology and \(E\)-cohomology are respectively \(E_{*}(\Sigma^{\infty}X)\) and \(E^{*}(\Sigma^{\infty}X)\). These (co)homology theories satisfy the first two axioms in definition 1.1. So, now it suffices to check only the last two axioms. But the classical Brown representability theorem states that every functor satisfying these axioms is representable. So, reduced (co)homology theories are functors representable by \(\Omega\)-spectra. In other words, \(\mathcal{SH}\) is where reduced (co)homology theories live. **Example 1.8**.: Let \(K(\mathbb{Z},n)\) be _the Eilenberg-MacLane space_. It is the space whose only non-zero homotopy group is \(\pi_{n}(K(\mathbb{Z},n))=\mathbb{Z}\).
The _Eilenberg-MacLane spectrum_ is defined to be the spectrum \[H\mathbb{Z}=(K(\mathbb{Z},0),K(\mathbb{Z},1),K(\mathbb{Z},2),...)\] with its bonding maps given by the adjoint of the weak equivalences \(K(\mathbb{Z},n)\to\Omega K(\mathbb{Z},n+1)\). This spectrum represents singular homology and singular cohomology i.e. \[H^{n}(X;\mathbb{Z})\cong\operatorname{Hom}_{\mathcal{SH}}( \Sigma^{\infty}X,\Sigma^{n}H\mathbb{Z}), \tag{5}\] \[H_{n}(X;\mathbb{Z})\cong\operatorname{Hom}_{\mathcal{SH}}( \Sigma^{n}\mathbb{S},H\mathbb{Z}\wedge\Sigma^{\infty}X). \tag{4}\] ### Chain complexes of abelian groups In this subsection, we are looking to take similar steps for chain complexes of abelian groups with non-negative support. The reason for that will be discussed in the next subsection, as linking spaces to the chain complexes of abelian groups plays a big part in our picture of classical homotopy theory. **Definition 1.9**.: A _chain complex of abelian groups_ is a sequence \(A=\{A_{n}\}_{n\in\mathbb{Z}}\) of abelian groups with a family of maps \(\{d_{n}:A_{n+1}\to A_{n}\}_{n\in\mathbb{Z}}\) that \(d^{2}=0\). A chain complex of abelian groups with non-negative support is one that has non-zero terms only when \(n\geq 0\). A map of chain complexes \(f:A\to B\), is a family of maps \(\{f_{n}:A_{n}\to B_{n}\}_{n\in\mathbb{Z}}\) for which the diagrams (6) are commutative for every integer \(n\). We denote this category by \(C_{\geq 0}(\mathbf{Ab})\). When working with spaces, we first localized the category of spaces with respect to weak equivalences. In order to take a similar step here, we need to introduce quasi-isomorphisms, that play the role of weak equivalences for chain complexes. **Definition 1.10**.: Let \(A\) be a chain complex of abelian groups. The \(n\)_-th homology group_ of \(A\) is \[H_{n}(A):=\ker(d_{n})/\mathrm{im}(d_{n+1}).\] **Definition 1.11**.: A _quasi-isomorphism_ is a map of chain complexes that induces isomorphisms on all homology groups. There is a model structure on \(C_{\geq 0}(\mathbf{Ab})\) quasi-isomorphisms playing the role of weak equivalences, so after localizing we get the homotopy category \(\mathcal{H}_{\geq 0}(\mathbf{Ab})\). Now, it is time to find our stable world! As we saw earlier, for topological spaces, the suspension functor eventually became the shift functor. Therefore, it makes sense to let the shift functor, denoted by [1], which shifts complexes one place to the left, play the role of the suspension functor. When working with chain complexes of abelian groups with non-negative support, this functor is not an auto-equivalence. But, this problem can be addressed easily by expanding our category to include all complexes of abelian groups. Because in this category the shift functor has an obvious inverse, the shift-back functor \([-1]\). Let \(A\) be an arbitrary complex of abelian groups. Define \(A_{\geq-n}\) such that it is equal to \(A\) for all \(m\geq-n\) and \(0\) elsewhere. By shifting this complex \(n\) times, we get a complex \(A_{\geq-n}[n]\) with non-negative support. We can recover \(A\) from such complexes as \[A=\mathrm{colim}_{n}A_{\geq-n}[n][-n]\] using the maps \(\epsilon_{n}[-n-1]\) where \(\epsilon_{n}\) is the inclusion \(A_{\geq n}[n][1]\hookrightarrow A_{\geq n}[n+1]\). Therefore, we can construct arbitrary complexes using complexes with non-negative support and the inclusions \(\epsilon_{n}\) for each integer \(n\). 
_Remark 1.12_.: We saw that expanding our category to include all complexes of abelian groups is basically the same as considering sequences of objects in \(C_{\geq 0}(\mathbf{Ab})\) with the bonding maps \(\epsilon_{n}[-n-1]\). Going back to the definition of spectra of spaces, we can see that the steps taken in both cases are exactly the same. This makes the analogy between the spaces and chain complexes of abelian groups with non-negative support more precise. ### The link between spaces and chain complexes of abelian groups In this subsection, we use the category of simplicial sets as our category of spaces. As we mentioned earlier, this choice is equivalent to the other one, so it would not cause any problems. Instead, it would help us build the link between spaces and chain complexes of abelian groups easily. The category of chain complexes of abelian groups with non-negative support is equivalent to the category of simplicial abelian groups. This equivalence is called the Dold-Kan correspondence and is given by a functor that sends a simplicial abelian group to its normalized chain complex. This equivalence is also an equivalence of model categories between \(C_{\geq 0}(\mathbf{Ab})\) and \(s\mathbf{Ab}\) as a subcategory of \(s\mathbf{Set}_{\bullet}\). On the other hand, there is a pair of adjoint functors between the simplicial sets and the simplicial abelian groups. This is given by the forgetful functor and its left adjoint which is the simplicial version of the free abelian group functor. Moreover, this adjunction factors through the category of pointed simplicial sets. Combining all of these, we get a pair of adjoint functors between the category of chain complexes of abelian groups with non-negative support and the category of pointed simplicial sets. As this adjunction is the free-forgetful adjunction in its heart, we will denote it as the same. Therefore, our picture of classical homotopy becomes (7) where \(D(\mathbf{Ab})\) is the (unbounded) derived category of abelian groups. The free-forgetful adjunction raises all the way up to the spectra. And by taking derived functors, our picture of classical homotopy theory looks like the following: (8) Based on this picture, we can ask what the free-forgetful adjunction means between \(\mathcal{SH}\) and \(D(\mathbf{Ab})\). It turns out that this adjunction identifies \(D(\mathbf{Ab})\) with the homotopy category of \(H\mathbb{Z}\)-modules in \(\mathcal{SH}\). In fact, we need to introduce the notion of symmetric spectra for this to be true, and \(H\mathbb{Z}\) lifts to a commutative ring object in that category which allows us to define \(H\mathbb{Z}\)-modules. An \(H\mathbb{Z}\)-module is a symmetric spectrum \(E\) with an action \(H\mathbb{Z}\wedge E\to E\), satisfying the usual module conditions. So, the forgetful functor becomes the functor that forgets the \(H\mathbb{Z}\)-module structure and the free functor becomes the free \(H\mathbb{Z}\)-module functor. Therefore, we can change our picture to (9) Having this picture in mind, we will track the development of the Motivic Homotopy Theory in the next two sections. Comparing the two pictures, we find gaps in the motivic picture that can be filled with the conjectural motivic \(t\)-structure. ## 2. The picture in motivic homotopy theory: Part 1 Motivic Homotopy Theory, in the most general case, wants to build a homotopy theory for the category, \(Sm/S\), of smooth schemes of finite type over a Noetherian scheme \(S\). 
As this category does not have the properties needed to become a model category, we can not do homotopy theory in it. For example, it does not contain all the small colimits. Therefore, we should find a bigger category that has the necessary properties and contains \(Sm/S\). One way to address the problem of not having colimits is to embed \(Sm/S\) into \(\mathrm{PSh}(Sm/S)\) using the Yoneda embedding, but this approach is not good enough either. Consider a covering \(X=U\cup V\) of a scheme \(X\) by two open subsets with respect to the Zariski topology. If we define our category of spaces to be \(\mathrm{PSh}(Sm/S)\), the union \(U\cup_{\mathrm{PSh}}V\) is not the same as \(X\). To solve the problem of finding a nice category of spaces, Voevodsky and Morel used the category of Nisnevich simplicial sheaves on \(Sm/S\). This category is equipped with a model structure by Andre Joyal in [14]. This approach has a big positive point as it allows us to use the structures available for simplicial sets. Also, this choice is closely related to the universal homotopy category of \(Sm/S\), which was introduced by Daniel Dugger in [6]. In his paper, Dugger proves that the homotopy theory built by Voevodsky and Morel is a result of performing two localizations to the universal homotopy theory of \(Sm/S\). One of these localizations is with respect to the affine line, \(\mathbb{A}^{1}\), which plays the role of the unit interval, \([0,1]\), in motivic homotopy. This localization would be addressed more precisely later on. The other localization comes from the Nisnevich topology on \(Sm/S\). The choice of this topology is because while it is strictly stronger than the Zariski topology and strictly weaker than the etale topology, it has nice properties of both. To mention a few 1. Similar to the Zariski topology, the Nisnevich cohomological dimension of a scheme of Krull dimension \(d\) is \(d\); 2. Similar to the etale topology, in the Nisnevich topology smooth pairs \((X,Y)\) are locally equivalent to pairs of the form \((\mathbb{A}^{n},\mathbb{A}^{m})\); 3. Similar to the Zariski topology, algebraic \(K\)-theory has Nisnevich descent; 4. Similar to the etale topology, we can use Cech cochains to compute Nisnevich cohomology. In this section, we first introduce Jardine's model structure on a site and discuss how left Bousfield localizations are performed on this model category in 2.1. Then, we turn our focus to the site \((Sm/k)_{\mathrm{Nis}}\), where \(k\) is a field of characteristic zero, and perform a localization with respect to the affine line in 2.2. This results in the motivic unstable homotopy category. After this, we find our way to the motivic stable homotopy category and finish the first part of our motivic picture in 2.3. ### Homotopy category of sites with an interval Let \(T\) be an essentially small site with enough points. As mentioned earlier, we want to work with the category of simplicial sheaves on \(T\), denoted by \(\Delta^{\mathrm{op}}\mathrm{Shv}(T)\). The most important thing about this category is that it contains a version of \(T\) and a version of simplicial sets in it. So, somehow, we are putting both of these categories together in a nice category and then working with that instead. The Yoneda embedding gives us a functor \(T\to\mathrm{PShv}(T)\). For the categories that we work with, representable presheaves are sheaves, so we get a fully faithful embedding \(T\to\mathrm{Shv}(T)\). 
Also, from a sheaf of sets, we can construct a simplicial object, and get a simplicial sheaf on \(T\). So, every object \(X\in T\) gives us a simplicial sheaf on \(T\), and therefore \(\Delta^{\mathrm{op}}\mathrm{Shv}(T)\) has a version of \(T\) in it. On the other hand, every simplicial set defines a constant simplicial presheaf on \(T\). By considering its associated sheaf, we can see that \(\Delta^{\mathrm{op}}\mathrm{Shv}(T)\) has a version of \(s\mathbf{Sets}\) in it. We call the objects of \(\Delta^{\mathrm{op}}\mathrm{Shv}(T)\) motivic spaces. The same steps above can be done for pointed simplicial sheaves on \(T\). As a result, we get the category of pointed motivic spaces, denoted by \(\Delta^{\mathrm{op}}\mathrm{Shv}_{\bullet}(T)\). There is a pair of adjoint functors between motivic spaces and pointed motivic spaces, given by the free-forgetful adjunction. Working with simplicial sheaves has a big advantage for us. Constructions for simplicial sheaves extend section-wise to our spaces, including limits, colimits, etc. Here, we discuss some of them. Let \((X,x)\) and \((Y,y)\) be two pointed spaces. The wedge sum of them is the sheaf associated with the presheaf \[((X,x)\vee(Y,y))(U):=(X,x_{U})(U)\vee(Y,y_{U})(U).\] The smash product of them is the sheaf associated with the presheaf \[((X,x)\wedge(Y,y))(U):=(X,x_{U})(U)\wedge(Y,y_{U})(U).\] The internal hom objects in the category of pointed spaces are given by the right adjoint of the smash product. The suspension functor is defined as the smash product with the simplicial sheaf \(S^{1}\), where \(S^{1}\) is the simplicial set \(\Delta^{1}/\partial\Delta^{1}\). The right adjoint of the suspension functor gives the loop space functor. In a letter to Alexander Grothendieck, [14], Andre Joyal gave a model structure for this category which uses point-wise weak equivalences. Before we define this model structure, we need to recall the definition of a point of \(T\). **Definition 2.1**.: A _point_ of \(T\) is a functor \(x:\mathrm{Shv}(T)\to\mathbf{Sets}\), that commutes with all colimits and finite limits. **Definition 2.2**.: A morphism of spaces \(f:X\to Y\) is called a _point-wise weak equivalence_ if for any point \(x\) of \(T\) the morphism of simplicial sets \(x^{*}(f):x^{*}(X)\to x^{*}(Y)\) is a weak equivalence. **Theorem 2.3** (Joyal [14], Jardine [11] and [12]).: _The category of spaces with the classes of_ 1. _Weak equivalences: the point-wise weak equivalences from definition_ 2.2__ 2. _Cofibrations: monomorphisms_ _is a proper closed simplicial model category._ Following Morel and Voevodsky, we call this model structure the _simplicial model structure_ and denote its homotopy category with \(\mathcal{H}_{s}(T)\). The canonical functor from simplicial sets to spaces preserves weak equivalences. A similar theorem can be stated for pointed spaces. Hence we get the pointed homotopy category denoted by \(\mathcal{H}_{s,\bullet}(T)\). The free-forgetful adjunction preserves weak equivalences in both directions, therefore we get an adjunction between \(\mathcal{H}_{s}(T)\) and \(\mathcal{H}_{s,\bullet}(T)\). _Remark 2.4_.: In [11], John Frederic Jadrine proves that the weak equivalences and cofibrations of theorem 2.3 make the category of simplicial presheaves on \(T\) a model category. Moreover, he proved that this model category is Quillen equivalent to the simplicial model category. This equivalence is given by the inclusion and the sheafification functors. Now we have a model structure for our category of spaces. 
However, this model structure would not be satisfying in many cases, as there would be morphisms in \(T\) that we want to be weak equivalences. So, we need a procedure that allows us to invert a class of morphisms and still get a model category. This procedure is called (left) Bousfield localization. Let \(A\) be a set of morphisms that we want to invert. First, we need to notice that only inverting the morphisms in \(A\) would not be enough. For example, for a morphism \(f:X\to Y\) in \(A\) we expect the induced map \(X\times Z\to Y\times Z\) to be inverted as well. So, we need to expand \(A\) to a set of morphisms we call \(A\)-local morphisms. **Definition 2.5**.: An object \(X\in\mathcal{H}_{s}(T)\) is called \(A\)-local if for every \(Y\in\mathcal{H}_{s}(T)\) and for every morphism \(f:Z_{1}\to Z_{2}\in A\) the induced map \[\operatorname{Hom}_{\mathcal{H}_{s}(T)}(Y\times Z_{2},X)\to\operatorname{Hom} _{\mathcal{H}_{s}(T)}(Y\times Z_{1},X)\] is a bijection. **Definition 2.6**.: A map of spaces \(f:X\to Y\) is called an \(A\)-weak equivalence if for every \(A\)-local object \(Z\in\mathcal{H}_{s}(T)\) the induced map \[\operatorname{Hom}_{\mathcal{H}_{s}(T)}(Y,Z)\to\operatorname{Hom}_{\mathcal{ H}_{s}(T)}(X,Z)\] is a bijection. Having defined \(A\)-weak equivalence, (left) Bousfield localization gives us the following model structure. **Theorem 2.7** ([19], Theorem 2.5).: _The category of spaces with the classes of_ 1. _Weak equivalences: the_ \(A\)_-weak equivalences of definition_ 2.6__ 2. _Cofibrations: monomorphisms_ _is a model category, denoted by \(\mathcal{H}(T,A)\). The inclusion functor \(\mathcal{H}(T,A)\to\mathcal{H}_{s}(T)\) has a left adjoint which identifies \(\mathcal{H}(T,A)\) with the localization of \(\mathcal{H}_{s}(T)\) with respect to \(A\)-weak equivalences._ We call this model structure _the \(A\)-model structure_. A similar theorem can be stated for pointed spaces. Hence, we get the pointed homotopy category denoted by \(\mathcal{H}_{\bullet}(T,A)\). Again the free-forgetful adjunction gives us an adjunction between these two homotopy categories. In motivic homotopy, we want the affine line \(\mathbb{A}^{1}\) to play the role of the unit interval \([0,1]\). This is because some cohomology theories like algebraic \(K\)-theory (over a regular base) and motivic cohomology (over a field of characteristic zero) are \(\mathbb{A}^{1}\)-invariant, just like topological \(K\)-theory and singular cohomology (and other reduced cohomology theories) are \([0,1]\)-invariant. So, in the general case of the homotopy theory of a site \(T\), we first need to define what an interval is and discuss the localization with respect to an interval. **Definition 2.8**.: Let \(T\) be a site and \(*\) be the final object of \(\operatorname{Shv}(T)\). An _interval_ of \(T\) is a sheaf \(I\) with the morphisms \[\mu:I\times I \to I\] \[i_{0},i_{1}:* \to I\] that satisfy the following conditions: 1. For the canonical morphism \(I\to*\), \[\mu(i_{0}\times I)=\mu(I\times i_{0})=i_{0}p\] \[\mu(i_{1}\times I)=\mu(I\times i_{1})=\mathrm{id}\] 2. The morphism \(i_{0}\coprod i_{1}:*\coprod*\to I\) is a monomorphism. Localization with respect to \(I\) is defined by considering the \(A\)-model structure for \(A:=\{i_{0}:*\to I\}\). We call this model structure _the \(I\)-model structure_. This model structure is proper. **Example 2.9**.: Let \(\Delta\) be the category that has the finite sets \(\{0,1,...,n\}\) as its objects and order-preserving functions as it morphisms. 
Consider the trivial Grothendieck topology for this category. If we consider the interval \(\Delta^{1}\) for this site, the resulting homotopy category would be the homotopy category of simplicial sets. ### Motivic unstable homotopy category In this subsection, we will apply the theorems from section 2.1 to the category of smooth schemes of finite type over \(k\), where \(k\) is a field of characteristic zero. This category is denoted by \(Sm/k\). In order to do so, we need to choose a topology and an interval for this category. The chosen topology is the Nisnevich topology, and the reason behind this choice is explained in 2.1. Now, let's define the Nisnevich topology. **Definition 2.10**.: A finite family of etale coverings \(\{U_{i}\to X\}_{i\in I}\) is called a _Nisnevich covering_, if and only if, for every \(x\in X\), there exists an \(i\) and \(u\in U_{i}\) over \(x\), such that the induced map on residue fields is an isomorphism. These coverings define a pretopology on \(Sm/k\). The corresponding topology is called the _Nisnevich topology_. We will denote this site by \((Sm/k)_{\mathrm{Nis}}\). The chosen interval in \((Sm/k)_{\mathrm{Nis}}\) is the affine line \(\mathbb{A}^{1}\) for the reasons explained in 2.1. Let \((Sm/S,\mathbb{A}^{1})_{\mathrm{Nis}}\) be the above site with the interval \(\mathbb{A}^{1}\). Based on section 2.1, our category of spaces, denoted by \(\mathbf{Spc}(k)\), is the category of Nisnevich simplicial sheaves on \(Sm/k\), and the homotopy category of spaces, denoted by \(\mathcal{H}(k)\) is the homotopy category corresponding to the \(\mathbb{A}^{1}\)-model structure on \(\mathbf{Spc}(k)\). The same can be done for pointed spaces where we get \(\mathbf{Spc}_{\bullet}(k)\) and \(\mathcal{H}_{\bullet}(k)\). Up to this point, our picture of motivic homotopy theory is (10) _Remark 2.11_.: In Picture 10, the functor is given by localizing with respect to the affine line. But based on the paper [6], we can take \(\Delta^{\mathrm{op}}\mathrm{PSh}_{\bullet}(Sm/k)\) as our category of motivic spaces and then perform two localizations instead of one: one with respect to the affine line and one with respect to the homotopy-colimit-type relations coming from the Nisnevich topology. As a result, we might use either of these categories as our category of spaces. _Remark 2.12_.: There are other Quillen equivalent model structures for motivic spaces, such as one from Jardine in [13]. We might switch between such model categories when needed. ### Spheres, homotopy sheaves, and suspensions The next step for us is to build the motivic stable homotopy category. To do that, we need to define spectra, and therefore, we need to define the suspension functor. In the classical setting, suspension was defined using the smash product with the circle \(S^{1}\). But, what is the analogue of \(S^{1}\), and in general, the analogues of spheres in the motivic setting? As we know, \(\mathbf{Spc}_{\bullet}(k)\) has both pointed simplicial sets and \(Sm/k\) in it. This results in having two different circles; one coming from the topological world of simplicial sets and one from the algebraic world of \(Sm/k\). The topological circle is the simplicial sheave associated with the pointed simplicial set \(S^{1}\). The algebraic circle is \(\mathbb{G}_{m}=\mathbb{A}^{1}-\{0\}\) which is a pointed space with respect to the base point \(1\). This is called the Tate circle. So, now the question is which one of these circles should be considered as the right analogue of the sphere \(S^{1}\). 
Before answering this question, let's find out about the possible analogues of other spheres too. In order to do so, we first work over the complex numbers. Let \(X\) be a smooth scheme over complex numbers. Considering its complex points we get a topological space that we denote with \(X(\mathbb{C})\). The functor \(X\mapsto X(\mathbb{C})\) extends to a realization functor \(\mathbf{Spc}_{\bullet}(\mathbb{C})\to\mathbf{Top}_{\bullet}\), which preserves weak equivalences. Now let's see what kinds of objects are sent to spheres by this functor. Here is a list of such spaces: 1. The simplicial sheave \(S^{n}=(S^{1})^{\wedge n}\) as a motivic space is sent to \(S^{n}\), in particular \(S^{1}\) is sent to \(S^{1}\); 2. The space \((\mathbb{A}^{n}-\{0\},(1,1,...,1))\) is sent to \(S^{n}\), in particular \(\mathbb{G}_{m}\) is sent to \(S^{1}\); 3. The projective line \((\mathbb{P}^{1},\infty)\) is sent to a sphere; 4. The Thom space (which would be defined later on when stating the homotopy purity theorem), denoted by \(\mathrm{Th}(\mathbb{A}^{n})\), is sent to a sphere. Looking at these potential spheres would make things confusing at first. But, everything gets solved when we know that all of them are generated (up to \(\mathbb{A}^{1}\)-weak equivalence) by the smash product of \(S^{1}\) and \(\mathbb{G}_{m}\). More precisely, \[\mathbb{P}^{1}\simeq S^{1}\wedge\mathbb{G}_{m} \tag{12}\] \[\mathbb{A}^{n}-\{0\}\simeq S^{n-1}\wedge\mathbb{G}_{m}^{\wedge n}\] (13) \[\mathrm{Th}(\mathbb{A}^{n})\simeq S^{n}\wedge\mathbb{G}_{m}^{ \wedge n}. \tag{11}\] where \(\mathrm{Th}(\mathbb{A}^{n})\) is the Thom space. These weak equivalences are not limited to when we work over complex numbers and we have them for other bases as well. As a result of this observation, we define mixed spheres. **Definition 2.13**.: The mixed sphere \(S^{p,q}\) is defined as \(S^{p-q}\wedge\mathbb{G}_{m}^{q}\). Now, let's go back to our question. Which one of these mixed spheres should be considered the right one? Based on the fact that the Thom space \(S^{2n,n}\) and \(\mathbb{P}^{1}\simeq S^{2,1}\) are mixed spheres, we need to work with all mixed spheres. So, we define the bigraded homotopy sheaves and suspensions. **Definition 2.14**.: Let \((X,x_{0})\) be a pointed motivic space. The _bigraded homotopy sheaf_\(\pi_{a,b}^{\mathbb{A}^{1}}(X,x_{0})\) is defined as the sheaf associated with the presheaf \[U\mapsto\mathrm{Hom}_{\mathcal{H}_{\bullet}(k)}(S^{a,b}\wedge U_{+},(X,x_{0})).\] Although suspension can be defined using any mixed sphere, we only consider suspension with respect to the following spheres: \[\Sigma_{S^{1}}(X,x_{0}):=S^{1}\wedge(X,x_{0}) \tag{15}\] \[\Sigma_{\mathbb{G}_{m}}(X,x_{0}):=\mathbb{G}_{m}\wedge(X,x_{0})\] (16) \[\Sigma_{\mathbb{P}^{1}}(X,x_{0}):=\mathbb{P}^{1}\wedge(X,x_{0})\] (17) \[\Sigma_{T}(X,x_{0}):=\text{Th}(\mathbb{A}^{1})\wedge(X,x_{0}) \tag{14}\] The reason for that is when constructing the category of motivic spectra we need to stabilize with respect to \(\Sigma_{S^{1}}\) and \(\Sigma_{\mathbb{G}_{m}}\) both. But because there is a canonical isomorphism \(\Sigma_{T}\cong\Sigma_{S^{1}}\circ\Sigma_{\mathbb{G}_{m}}\), we only need to consider the category of \(T\)-spectra. But, the same steps can be taken by considering the category of \(\mathbb{P}^{1}\)-spectra. 
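In the bigraded notation of Definition 2.13, the identifications above can be summarized as follows (all up to \(\mathbb{A}^{1}\)-weak equivalence): \[\mathbb{P}^{1}\simeq S^{2,1},\qquad\mathbb{A}^{n}-\{0\}\simeq S^{2n-1,n},\qquad\mathrm{Th}(\mathbb{A}^{n})\simeq S^{2n,n},\] so that, in particular, \(T=\mathrm{Th}(\mathbb{A}^{1})\simeq S^{1}\wedge\mathbb{G}_{m}\simeq\mathbb{P}^{1}\), which is why stabilizing with respect to \(\Sigma_{T}\) and \(\Sigma_{\mathbb{P}^{1}}\) leads to the same theory.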
### Motivic stable homotopy category **Definition 2.15**.: A _\(T\)-spectrum_ is a sequence of pointed motivic spaces \((E_{0},E_{1},E_{2},...)\) with a family of bonding maps \(\{\epsilon_{n}:\Sigma_{T}E_{n}\to E_{n+1}\}_{n=0}^{\infty}\). **Example 2.16**.: To each pointed motivic space \(X\), we can correspond a _\(T\)-spectrum_\(\Sigma_{T}^{\infty}X=(X,\Sigma_{\mathbb{P}^{1}}X,\Sigma_{T}^{2}X,...)\) with identity bonding maps. In particular, the _\(T\)-spectrum_ \[\mathbb{S}_{k}=(k_{+},T,T\wedge T,...)\] with identity bonding maps is the analogue of the sphere spectrum in the classical setting. **Definition 2.17**.: A map of \(T\)-spectra \(f:E\to F\) is a family of maps \(\{f_{n}:E_{n}\to F_{n}\}_{n=0}^{\infty}\) that respects the bonding maps of both spectra i.e. the diagram (18) is commutative for each integer \(n\geq 0\). We denote the category of \(T\)-spectra with maps given by definition 2.17 by \(\mathbf{Spt}_{T}(k)\). In the next step, we need to turn this category into a model category. We do that following Jardine in [13]. First, note that the notions of weak equivalence, fibration, and cofibration can be defined term-wise for \(T\)-spectra. **Definition 2.18**.: Let \(f:E\to F\) be a map of \(T\)-spectra. 1. The map \(f\) is called a _weak equivalence_ if, for every integer \(n\geq 0\), the map \(f_{n}:E_{n}\to F_{n}\) is a weak equivalence in the unstable sense i.e. an \(\mathbb{A}^{1}\)-weak equivalence. 2. The map \(f\) is called a _cofibration_ if, for every integer \(n\geq 0\), the map \(f_{n}:E_{n}\to F_{n}\) is a cofibration in the unstable sense i.e. a monomorphism. 3. The map \(f\) is called a _fibration_ if, for every integer \(n\geq 0\), the map \(f_{n}:E_{n}\to F_{n}\) is a fibration in the unstable sense i.e. a map having right lifting property with respect to monomorphisms that are also an \(\mathbb{A}^{1}\)-weak equivalence. **Theorem 2.19** ([13], Lemma 1.2).: 1. _The category_ \(\mathbf{Spt}_{T}(k)\) _with the classes of_ 1. _Weak equivalences: the term-wise weak equivalences from definition_ 2.18__ 2. _Fibrations: the term-wise fibrations from definition_ 2.18__ _becomes a proper simplicial model category._ 2. _The category_ \(\mathbf{Spt}_{T}(k)\) _with the classes of_ 1. _Weak equivalences: the term-wise weak equivalences from definition_ 2.18__ 2. _Cofibrations: the term-wise cofibrations from definition_ 2.18 _becomes a proper simplicial model category._ Starting with the first model structure in theorem 2.19, we need to extend the class of weak equivalences. To do that, we should first define stable homotopy sheaves of \(T\)-spectra using our definition of bigraded homotopy sheaves of pointed spaces. **Definition 2.20**.: Let \(E\) be a \(T\)-spectrum. The colimit of \[\pi_{a+2n,b+n}^{\mathbb{A}^{1}}(E_{n})\xrightarrow{\Sigma_{T}}\pi_{a+2n+2,b+ n+1}^{\mathbb{A}^{1}}(\Sigma_{T}E_{n})\xrightarrow{\sigma_{n,*}}\pi_{a+2n+2,b+n+1}^{ \mathbb{A}^{1}}(E_{n+1})\xrightarrow{\ \ }...\] is the motivic stable homotopy sheaf \(\pi_{a,b}^{\mathbb{A}^{1}}(E)\). **Definition 2.21**.: A map of \(T\)-spectra \(f:E\to F\) is called a _motivic stable weak equivalence_ if the induced map \[\pi_{a,b}^{\mathbb{A}^{1}}(E)\to\pi_{a,b}^{\mathbb{A}^{1}}(F)\] is an isomorphism for each pair of integers \(a,b\in\mathbb{Z}\). **Theorem 2.22** ([13], Theorem 9.2).: _The category \(\mathbf{Spt}_{T}(k)\) with the classes of_ 1. _Weak equivalences: the motivic stable weak equivalences from definition_ 2.21__ 2. 
_Cofibrations: the term-wise cofibrations from the first model structure in theorem_ 2.19__ _becomes a proper simplicial model category. We call the resulting homotopy category the motivic stable homotopy category and denote it by \(\mathcal{SH}(k)\)._ Up until this point, our picture of motivic homotopy theory looks like (19) where we have added \(Sm/k\) and its Yoneda embedding into \(\mathrm{PSh}_{\bullet}(Sm/k)\) to the picture and \(\Omega_{T}^{\infty}\) is the right adjoint of \(\Sigma_{T}^{\infty}\). ### Cohomology theories Just as for spectra of topological spaces, each \(T\)-spectrum defines one homology and one cohomology theory. **Definition 2.23**.: Let \(E\) and \(F\) be two \(T\)-spectra. The _\(E\)-homology_ of \(F\) is defined to be \[E_{a,b}(F):=\operatorname{Hom}_{\mathcal{SH}(k)}(S^{a,b}\wedge\mathbb{S}_{k},E\wedge F),\] and the _\(E\)-cohomology_ of \(F\) is defined to be \[E^{a,b}(F):=\operatorname{Hom}_{\mathcal{SH}(k)}(F,S^{a,b}\wedge E).\] For a pointed motivic space \(X\), its \(E\)-homology and \(E\)-cohomology are respectively \(E_{a,b}(\Sigma_{T}^{\infty}X)\) and \(E^{a,b}(\Sigma_{T}^{\infty}X)\). Now, an important question is whether \(T\)-spectra represent all the cohomology theories for schemes over \(k\), just like spectra did in the classical setting. We will discuss only the representability of motivic cohomology, the analogue of singular cohomology, because it plays an important role in our picture of motivic homotopy, just as singular cohomology did in the picture of classical homotopy. In the classical setting, singular cohomology was represented by the Eilenberg-MacLane spectrum, which was built out of Eilenberg-MacLane spaces. Here, we will briefly discuss the process of constructing Eilenberg-MacLane spaces and the Eilenberg-MacLane spectrum, and then, we will take similar steps in the motivic setting to build the motivic Eilenberg-MacLane \(T\)-spectrum. Let \(X\) be a pointed topological space, and define the \(n\)-th symmetric product of \(X\), \(\operatorname{Sym}^{n}X\), to be the quotient \(X^{n}/S_{n}\), where the symmetric group \(S_{n}\) acts on \(X^{n}\) by permuting the factors. Now, we can define the infinite symmetric product of \(X\) as \[\operatorname{Sym}^{\infty}X=\operatorname{colim}\,\operatorname{Sym}^{n}X\] where the maps are given by the inclusions \(\operatorname{Sym}^{n}X\hookrightarrow\operatorname{Sym}^{n+1}X\). The Dold-Thom theorem gives a way of computing the homotopy groups of \(\operatorname{Sym}^{\infty}X\). **Theorem 2.24** (Dold-Thom [5]).: _Let \((X,x)\) be a pointed connected CW complex. Then, there is a natural isomorphism_ \[\pi_{n}(\operatorname{Sym}^{\infty}X,x)\cong H_{n}(X,x)\] _where \(H\) is the reduced singular homology with coefficients in \(\mathbb{Z}\)._ Having this theorem, we know that \(\operatorname{Sym}^{\infty}S^{n}\) is the Eilenberg-MacLane space \(K(\mathbb{Z},n)\). Therefore, the Eilenberg-MacLane spectrum \(H\mathbb{Z}\) is given by \((\operatorname{Sym}^{\infty}S^{0},\operatorname{Sym}^{\infty}S^{1},\operatorname{Sym}^{\infty}S^{2},...)\). To construct the motivic Eilenberg-MacLane spectrum we will replace \(S^{1}\) with \(\mathbb{P}^{1}\), and more generally, we will replace \(S^{n}\) with the mixed sphere \(S^{2n,n}=(\mathbb{P}^{1})^{\wedge n}\). 
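Before carrying this out motivically, a classical sanity check may be helpful (standard topology, not a statement taken from [5]): for \(X=S^{2}=\mathbb{CP}^{1}\) the symmetric products are projective spaces, \[\operatorname{Sym}^{n}\mathbb{CP}^{1}\cong\mathbb{CP}^{n},\qquad\operatorname{Sym}^{\infty}S^{2}\simeq\mathbb{CP}^{\infty}=K(\mathbb{Z},2),\] in agreement with the Dold-Thom theorem, since \(\tilde{H}_{k}(S^{2})=\mathbb{Z}\) for \(k=2\) and vanishes otherwise. This is exactly the picture that the motivic construction with \(\mathbb{P}^{1}\) imitates.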
So, the \(n\)-th symmetric product of \(\mathbb{P}^{1}\) is \[\operatorname{Sym}^{n}\mathbb{P}^{1}:=(\mathbb{P}^{1})^{n}/S_{n}\] and the infinite symmetric product of \(\mathbb{P}^{1}\) is \[\operatorname{Sym}^{\infty}\mathbb{P}^{1}=\operatorname{colim}\,\operatorname{Sym}^{n}\mathbb{P}^{1}\] where the maps are given by inclusion. Now, we define the motivic Eilenberg-MacLane \(T\)-spectrum. **Definition 2.25**.: The motivic Eilenberg-MacLane \(T\)-spectrum \(M\mathbb{Z}\) is \[(\operatorname{Sym}^{\infty}S^{0,0},\operatorname{Sym}^{\infty}S^{2,1},\operatorname{Sym}^{\infty}S^{4,2},...).\] One thing to notice here is that the motivic Eilenberg-MacLane \(T\)-spectrum is not a perfect analogue of the Eilenberg-MacLane spectrum, as it does not have the same properties. For example, the Eilenberg-MacLane space \(K(G,n)\) has only one non-trivial homotopy group, namely \(\pi_{n}(K(G,n))=G\), but the analogous result is not true for the motivic Eilenberg-MacLane space \(\operatorname{Sym}^{\infty}S^{2n,n}\). However, it represents motivic cohomology, which is the result we were after. **Theorem 2.26** (Voevodsky [24]).: _Over a field \(k\) of characteristic zero, motivic cohomology is represented by_ \[H^{p,q}(X,\mathbb{Z})=M\mathbb{Z}^{p,q}(\Sigma_{T}^{\infty}X)=\text{Hom}_{\mathcal{SH}(k)}(\Sigma_{T}^{\infty}X,S^{p,q}\wedge M\mathbb{Z}) \tag{20}\] ## 3. The picture in motivic homotopy theory: Part 2 Recall that the Eilenberg-MacLane spectrum provided us with a free-forgetful adjunction in the classical setting that led us to connect the stable homotopy category to the (unbounded) derived category of abelian groups. Using that adjunction, \(D(\mathbf{Ab})\) was identified with the category of \(H\mathbb{Z}\)-modules in the category of symmetric spectra. On the other hand, \(M\mathbb{Z}\) lifts to a commutative ring object in the category of motivic symmetric \(\mathbb{P}^{1}\)-spectra, which allows us to define \(M\mathbb{Z}\)-modules. Now, the question is to find out what category this leads us to. In other words, what is the analogue of \(D(\mathbf{Ab})\) in the motivic setting? It turns out that the analogue is Voevodsky's triangulated category of motives. Having the big picture 9 of classical homotopy in mind, we will construct the second part of our motivic picture. ### Motivic spaces with transfers First of all, in the second part of our motivic picture, we expect to work with abelian or additive categories, and as we know \(Sm/k\) is neither. So, in the first step, we embed \(Sm/k\) into the category of correspondences, denoted by \(Cor_{k}\). This category has the same objects as \(Sm/k\), and for morphisms we have \[Cor_{k}(X,Y):=\text{the free abelian group on elementary correspondences}, \tag{21}\] where an elementary correspondence is an irreducible closed subscheme \(W\) of \(X\times Y\) whose projection to \(X\) is finite and surjective onto a component of \(X\). The category \(Sm/k\) embeds into \(Cor_{k}\), since the graph of each morphism \(X\to Y\) in \(Sm/k\) is an elementary correspondence. Now, the Yoneda embedding enables us to embed \(Cor_{k}\) into the category of presheaves of abelian groups on it, which is the category of all contravariant additive functors from \(Cor_{k}\) to the category of abelian groups. This category is called the category of presheaves with transfers on \(Cor_{k}\) and is denoted by \(\operatorname{PSh}^{\operatorname{tr}}(Cor_{k})\). 
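Two standard examples make \(Cor_{k}\) more concrete (a routine unravelling of the definition, following the usual conventions rather than any particular reference): the graph \(\Gamma_{f}\subset X\times Y\) of a morphism \(f:X\to Y\) projects isomorphically, hence finitely and surjectively, onto \(X\), which is exactly the embedding \(Sm/k\hookrightarrow Cor_{k}\) just described; and correspondences out of the point are zero-cycles, \[Cor_{k}(\operatorname{Spec}k,X)\cong\bigoplus_{x\in X\ \mathrm{closed}}\mathbb{Z}\cdot x,\] the free abelian group on the closed points of \(X\), since an irreducible closed subscheme of \(\operatorname{Spec}k\times X=X\) which is finite and surjective over \(\operatorname{Spec}k\) is precisely a closed point.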
Analogous to the embedding of abelian groups into simplicial abelian groups in the classical picture, we can embed presheaves with transfers into the category \(\Delta^{\operatorname{op}}\operatorname{PSh}^{\operatorname{tr}}(Cor_{k})\), which we call the category of motivic spaces with transfers. Notice that we have an evident forgetful functor \(U:\operatorname{PSh}^{\operatorname{tr}}(Cor_{k})\to\operatorname{PSh}_{\bullet}(Sm/k)\) induced by the graph embedding \(Sm/k\hookrightarrow Cor_{k}\). This functor has a left adjoint called the transfer functor and denoted by \(\mathbb{Z}^{\operatorname{tr}}\). Moreover, we have a similar adjunction between \(\Delta^{\operatorname{op}}\operatorname{PSh}^{\operatorname{tr}}(Cor_{k})\) and \(\Delta^{\operatorname{op}}\operatorname{PSh}_{\bullet}(Sm/k)\). So up to this point, our motivic picture looks like the following: (22) ### Unstable homotopy category of motivic spaces with transfers Having defined motivic spaces with transfers, the following theorem leads us to its homotopy category. Here, we are using the model structure of [7] for motivic spaces, which is equivalent to the \(\mathbb{A}^{1}\)-model. **Theorem 3.1** (Röndigs and Østvær, [21] Lemma 2.7 and Lemma 2.8).: _There exists a monoidal and simplicial model structure for motivic spaces with transfers such that the forgetful functor \(U\) detects and preserves motivic weak equivalences and fibrations._ In the classical picture, we identified the category of simplicial abelian groups with the category of chain complexes of abelian groups with non-negative support. In the motivic picture, too, we have an analogous Dold-Kan equivalence between \(\Delta^{\mathrm{op}}\mathrm{PSh}^{\mathrm{tr}}(Cor_{k})\) and \(C_{\geq 0}(\mathrm{PSh}^{\mathrm{tr}}(Cor_{k}))\). Therefore, we may transport the model structure on \(\Delta^{\mathrm{op}}\mathrm{PSh}^{\mathrm{tr}}(Cor_{k})\) to a model structure on \(C_{\geq 0}(\mathrm{PSh}^{\mathrm{tr}}(Cor_{k}))\) and get a Quillen equivalence. This enables us to extend to the (unbounded) category \(C(\mathrm{PSh}^{\mathrm{tr}}(Cor_{k}))\) in later stages. Based on Theorem 3.1, we can develop our picture even further and get the following: (23) _Remark 3.2_.: In picture 23, we have used the Dold-Kan equivalence to replace the category of motivic spaces with transfers and its homotopy category by \(C_{\geq 0}(\mathrm{PSh}^{\mathrm{tr}}(Cor_{k}))\) and its homotopy category, which we denote by \(\mathcal{H}_{\geq 0}(\mathrm{PSh}^{\mathrm{tr}}(Cor_{k}))\). As this equivalence is a Quillen equivalence, we can use either in our picture, but to make the analogy between our classical and our motivic picture clearer, we will use the latter. ### Stable homotopy category of motivic spaces with transfers As you can already see, the steps taken to construct the second part of our motivic picture are very similar to the ones from the first part. For the stable part, too, we will take similar steps, but this time, something of importance will happen. The category occupying the bottom right place in our motivic picture will be equivalent to Voevodsky's triangulated category of motives, which has the properties expected of the derived category of mixed motives. To build our way to Voevodsky's motives, we need to construct a few model categories and discuss their relations. While doing this, we won't update our picture until the very end. 
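For reference, the Dold-Kan equivalence invoked above is applied objectwise; a minimal reminder of one standard convention (our summary, not a statement from [21]): for a simplicial abelian group \(A_{\bullet}\), the normalized chain complex is \[N(A)_{n}=\bigcap_{i=1}^{n}\ker\big(d_{i}:A_{n}\to A_{n-1}\big),\qquad\partial=d_{0},\] and \(A_{\bullet}\mapsto N(A)\) is an equivalence between simplicial abelian groups and chain complexes concentrated in non-negative degrees. Applying it objectwise to simplicial presheaves with transfers gives the equivalence \(\Delta^{\mathrm{op}}\mathrm{PSh}^{\mathrm{tr}}(Cor_{k})\simeq C_{\geq 0}(\mathrm{PSh}^{\mathrm{tr}}(Cor_{k}))\) used in the constructions below.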
In the previous subsection, we introduced the Quillen equivalent model categories \(\Delta^{\mathrm{op}}\mathrm{PSh}^{\mathrm{tr}}(Cor_{k})\) and \(C_{\geq 0}(\mathrm{PSh}^{\mathrm{tr}}(Cor_{k}))\), and in this subsection, we will work with symmetric spectra in these two categories. As the steps are very similar to those seen before, we will not go into detail. When constructing the motivic stable homotopy category, we used suspension with respect to \(\mathbb{P}^{1}\simeq T=\mathrm{Th}(\mathbb{A}^{1})\). Now, if we apply the transfer functor \(\mathbb{Z}^{\mathrm{tr}}\) to \(T\), we get a motivic space with transfers, and if we define symmetric spectra with respect to the suspension coming from this motivic space with transfers, we get the category of motivic symmetric spectra with transfers. We denote this category by \(\mathbf{SSpt}^{\mathrm{tr}}_{\mathrm{Mot}}\). We can take similar steps using \(\mathbb{P}^{1}\) instead of \(T\), getting the category \(\mathbf{SSpt}^{\mathrm{tr}}_{\mathbb{P}^{1},\mathrm{Mot}}\). Based on [20], the resulting model categories are Quillen equivalent through a zig-zag of strict symmetric monoidal Quillen equivalences. **Proposition 3.3** ([21], Proposition 2.29).: _There is a zig-zag of strict monoidal Quillen equivalences between \(\mathbf{SSpt}^{\mathrm{tr}}_{\mathrm{Mot}}\) and \(\mathbf{SSpt}^{\mathrm{tr}}_{\mathbb{P}^{1},\mathrm{Mot}}\)._ Using the Dold-Kan equivalence, we can take similar steps for \(C_{\geq 0}(\mathrm{PSh}^{\mathrm{tr}}(Cor_{k}))\). We can take symmetric spectra of objects in this category with respect to \(\mathbb{Z}^{\mathrm{tr}}(\mathbb{G}_{m},1)\) shifted one degree. We denote the resulting category by \(C_{\geq 0}\mathbf{SSpt}_{\mathbb{G}_{m}}\). We can also define \(C_{\geq 0}\mathbf{SSpt}_{\mathbb{P}^{1}}\) in a similar fashion by suspending with respect to \(\mathbb{Z}^{\mathrm{tr}}(\mathbb{P}^{1},1)\). There exists a similar zig-zag of strict symmetric monoidal Quillen equivalences for these categories too. **Proposition 3.4** ([21], Proposition 2.30).: _There is a zig-zag of strict monoidal Quillen equivalences between \(C_{\geq 0}\mathbf{SSpt}_{\mathbb{G}_{m}}\) and \(C_{\geq 0}\mathbf{SSpt}_{\mathbb{P}^{1}}\)._ **Proposition 3.5** ([21], Theorem 2.33).: _The Dold-Kan equivalence induces a lax symmetric equivalence between \(\mathbf{SSpt}^{\mathrm{tr}}_{\mathbb{P}^{1},\mathrm{Mot}}\) and \(C_{\geq 0}\mathbf{SSpt}_{\mathbb{P}^{1}}\)._ ### The link to Voevodsky's triangulated category of motives Consider the motivic Eilenberg-MacLane spectrum \(M\mathbb{Z}\) introduced in Definition 2.25. As mentioned in Section 2, we expect motivic cohomology to be the analogue of singular cohomology in motivic homotopy theory. In our picture of classical homotopy theory, singular cohomology was key to connecting the stable homotopy category of symmetric spectra to the (unbounded) derived category of abelian groups. It turns out that motivic cohomology provides us with a similar connection in the motivic setting. The \(T\)-spectrum \(M\mathbb{Z}\) lifts to a commutative ring object in the category of symmetric \(T\)-spectra, so we can define \(M\mathbb{Z}\)-modules in this category. On the other hand, symmetric \(T\)-spectra satisfy the monoid axiom of [23]; therefore, we have a model structure for the category of \(M\mathbb{Z}\)-modules. **Theorem 3.6** ([21], Theorem 5.5).: _Suppose \(k\) is a field of characteristic zero. 
Then there is a strict symmetric monoidal Quillen equivalence between \(M\mathbb{Z}\)-modules and \(\mathbf{SSpt}^{\mathrm{tr}}_{\mathrm{Mot}}\)._ This theorem completes our picture of motivic homotopy theory, and in the last stage of this picture, the forgetful functor \(U\) becomes the functor that forgets the \(M\mathbb{Z}\)-module structure and \(\mathbb{Z}^{\mathrm{tr}}\) becomes its left adjoint, which is \(M\mathbb{Z}\wedge(-)\). Before drawing our completed picture, we need to mention one more result. **Theorem 3.7** ([21], Theorem 1.1).: _If \(k\) is a field of characteristic zero, the homotopy category of \(M\mathbb{Z}\)-modules is equivalent to Voevodsky's big category of motives \(D\mathcal{M}(k)\). This equivalence preserves the monoidal and triangulated structures._ Proof.: This follows by combining Theorem 2.9 and Theorem 5.5 from [21]. As a result, our final picture of motivic homotopy theory looks like the following: (24) ## 4. An introduction to the conjectural motivic \(t\)-structure Up until now, we have used the word "motivic" a lot, so a question that naturally arises is about the connection between the motivic setting we discussed and the conjectural category of mixed motives. In this section, we will introduce a conjectural bridge between Voevodsky's category of motives and the category of mixed motives. But before doing that, we need to introduce what a \(t\)-structure on a triangulated category is. ### Definition of \(t\)-structure and a fundamental example The idea of a \(t\)-structure arises in the derived categories of abelian categories. Let \(\mathcal{A}\) be an abelian category. In its (unbounded) derived category \(D(\mathcal{A})\), which is a triangulated category, we have a full subcategory \(D(\mathcal{A})_{\geq 0}\), consisting of chain complexes with non-negative support. We can shift this full subcategory by \(-n\) to get \(D(\mathcal{A})_{\geq n}\), which consists of chain complexes with support \(\geq n\). We can also define the full subcategory \(D(\mathcal{A})_{\leq 0}\), consisting of chain complexes with non-positive support, in a similar fashion, and shift it by \(-n\) to get \(D(\mathcal{A})_{\leq n}\). Having defined these subcategories, we have \[...\subseteq D(\mathcal{A})_{\leq-2}\subseteq D(\mathcal{A})_{\leq-1}\subseteq D(\mathcal{A})_{\leq 0}\subseteq D(\mathcal{A})_{\leq 1}\subseteq D(\mathcal{A})_{\leq 2}\subseteq...\] \[...\subseteq D(\mathcal{A})_{\geq 2}\subseteq D(\mathcal{A})_{\geq 1}\subseteq D(\mathcal{A})_{\geq 0}\subseteq D(\mathcal{A})_{\geq-1}\subseteq D(\mathcal{A})_{\geq-2}\subseteq...,\] and for each integer \(n\), each \(A\in D(\mathcal{A})_{\leq n}\), and each \(B\in D(\mathcal{A})_{\geq n+1}\), \[\operatorname{Hom}_{D(\mathcal{A})}(A,B)=0.\] Also, for each \(E\in D(\mathcal{A})\), there is a distinguished triangle \[E_{0}\to E\to E_{1}\to E_{0}[1]\] where \(E_{0}\in D(\mathcal{A})_{\leq 0}\) and \(E_{1}\in D(\mathcal{A})_{\geq 1}\). Such a triangle also exists for each pair of integers \((n,n+1)\) instead of \((0,1)\). Lastly, the intersection \[D(\mathcal{A})_{\leq 0}\cap D(\mathcal{A})_{\geq 0},\] which again is a full subcategory, is equivalent to the category \(\mathcal{A}\) itself. Generalizing the above data to arbitrary triangulated categories gives us the definition of a \(t\)-structure. **Definition 4.1**.: Let \(\mathcal{C}\) be a triangulated category. 
A \(t\)_-structure_ on \(\mathcal{C}\) consists of two full subcategories \(\mathcal{C}_{\geq 0}\) and \(\mathcal{C}_{\leq 0}\) satisfying the following conditions: 1. For \(\mathcal{C}_{\geq n}:=\mathcal{C}_{\geq 0}[n]\) and \(\mathcal{C}_{\leq n}:=\mathcal{C}_{\leq 0}[-n]\), we have \(\mathcal{C}_{\leq-1}\subseteq\mathcal{C}_{\leq 0}\) and \(\mathcal{C}_{\geq 1}\subseteq\mathcal{C}_{\geq 0}\). 2. For any \(X\in\mathcal{C}_{\leq 0}\) and any \(Y\in\mathcal{C}_{\geq 1}\), we have \(\operatorname{Hom}_{\mathcal{C}}(X,Y)=0\). 3. For any \(Z\in\mathcal{C}\), there is a distinguished triangle \[Z_{0}\to Z\to Z_{1}\to Z_{0}[1]\] where \(Z_{0}\in\mathcal{C}_{\leq 0}\) and \(Z_{1}\in\mathcal{C}_{\geq 1}\). The _heart_ of this \(t\)-structure is the full subcategory \(\mathcal{C}_{\leq 0}\cap\mathcal{C}_{\geq 0}\). As in the fundamental example mentioned earlier, the heart of the canonical \(t\)-structure of \(D(\mathcal{A})\) gives us the abelian category \(\mathcal{A}\) itself. In other words, a \(t\)-structure provides us with a way to go back from \(D(\mathcal{A})\) to \(\mathcal{A}\). But an important thing to notice is that the could be many different \(t\)-structures available on a triangulated category with different hearts, so working backward is not always unique! The connection between \(DM(k)\) in picture 24 and the category of mixed motives is based on the existence of a \(t\)-structure called the motivic \(t\)-structure. ### The conjectural motivic \(t\)-structure Voevodsky's triangulated category of motives satisfies the properties expected from the derived category of mixed motives. So a natural guess is that there should be a \(t\)-structure on this category, for which the heart is the category of mixed motives. Now, let's see a more precise statement of this conjecture based on [2]. **Conjecture 4.2**.: _Let \(D\mathcal{M}(k,A)\) be Voevodsky's category of motives with coefficients in \(A\), and let \(D(A)\) be the derived category of \(A\)-modules. Given an embedding \(\iota:k\hookrightarrow\mathbb{C}\), let \(\mathcal{T}_{\geq 0}^{\mathcal{M}}\) be the full subcategory of \(D\mathcal{M}(k,A)\) consisting of those motives for which the Betti realization lands in \(D(A)_{\geq 0}\). Recall that the Betti realization functor_ \[B_{\iota}:D\mathcal{M}(k,A)\to D(A)\] _is the realization functor that takes the motive of \(X\) to the singular chain complex of \(X^{\text{an}}\) with coefficients in \(A\), where \(X^{\text{an}}\) is the complex analytic space associated with \(X\). Define \(\mathcal{T}_{<0}^{\mathcal{M}}\) to be the full subcategory whose objects are \(N\in D\mathcal{M}(k,A)\) such that_ \[\text{Hom}_{D\mathcal{M}(k,A)}(M,N)=0\] _for every \(M\in\mathcal{T}_{\geq 0}^{\mathcal{M}}\). Then, the following properties should hold:_ 1. _The pair_ \((\mathcal{T}_{\geq 0}^{\mathcal{M}},\mathcal{T}_{<0}^{\mathcal{M}})\) _defines a_ \(t\)_-structure on_ \(D\mathcal{M}(k,A)\)_, which is independent of the choice of the complex embedding_ \(\iota\)_._ 2. _The Betti realization takes motives in_ \(\mathcal{T}_{<0}^{\mathcal{M}}\) _to complexes in_ \(D(A)_{<0}\)_._ 3. _Assuming that_ \(A\) _is a regular ring, this_ \(t\)_-structure can be restricted to the subcategory of geometric motives denoted by_ \(D\mathcal{M}_{g}m(k,A)\)_._ _This \(t\)-structure, if exists, is called the motivic \(t\)-structure._ Beilinson in [3] proved that over a field of characteristic zero, the existence of the motivic \(t\)-structure implies the standard conjectures on algebraic cycles. 
This result completes the connection between our motivic setting and the world of motives.
2308.03647
Maximum principle for the weak solutions of the Cauchy problem for the fourth-order hyperbolic equations
We investigate the maximum principle for the weak solutions to the Cauchy problem for the hyperbolic fourth-order linear equations with constant complex coefficients in the plane bounded domain
Kateryna Buryachenko
2023-08-07T14:57:24Z
http://arxiv.org/abs/2308.03647v2
Maximum principle for the weak solutions of the Cauchy problem for the fourth-order hyperbolic equations ###### Abstract We investigate the maximum principle for the weak solutions to the Cauchy problem for hyperbolic fourth-order linear equations with constant complex coefficients in a bounded plane domain. **MSC (2010)**: **Keywords:** the Cauchy problem, the maximum principle, hyperbolic fourth-order PDEs, weak solutions, L-traces. ## Introduction We are concerned here with the problem of proving an analog of the maximum principle for weak solutions of the Cauchy problem for fourth-order linear hyperbolic equations with constant complex coefficients and homogeneous non-degenerate symbol in a bounded plane domain \(\Omega\subset R^{2}\): \[L(\partial_{x})u=a_{0}\frac{\partial^{4}u}{\partial x_{1}^{4}}+a_{1}\frac{\partial^{4}u}{\partial x_{1}^{3}\partial x_{2}}+a_{2}\frac{\partial^{4}u}{\partial x_{1}^{2}\partial x_{2}^{2}}+a_{3}\frac{\partial^{4}u}{\partial x_{1}\partial x_{2}^{3}}+a_{4}\frac{\partial^{4}u}{\partial x_{2}^{4}}=f(x). \tag{0.1}\] Here the coefficients \(a_{j}\in C,\,j=0,\,1,...,\,4,\) are constant, \(f(x)\in L^{2}(\Omega),\) and \(\partial_{x}=\left(\frac{\partial}{\partial x_{1}},\,\frac{\partial}{\partial x_{2}}\right).\) We assume that Eq. (0.1) is hyperbolic, which means that all roots of the characteristic equation \[L(1,\,\lambda)=a_{0}\lambda^{4}+a_{1}\lambda^{3}+a_{2}\lambda^{2}+a_{3}\lambda+a_{4}=0\] are simple, real and not equal to \(\pm i\); this means that the symbol of Eq. (0.1) is non-degenerate, i.e. that Eq. (0.1) is an equation of principal type. Equations for which the roots of the corresponding characteristic equation are multiple and can take the values \(\pm i\) are called equations with degenerate symbol (see [6]). The main novelty of the paper is an analog of the maximum principle for weak solutions of the Cauchy problem for fourth-order hyperbolic equations. Many such equations serve as mathematical models of physical processes and attract the interest of researchers. The most famous of them are elasticity beam equations (Timoshenko beam equations with and without internal damping) [9], the short laser pulse equation [11], equations which describe structures subjected to moving loads, the equation of an Euler-Bernoulli beam resting on a two-parameter Pasternak foundation and subjected to a moving load or mass [22], and others. Due to their evident practical applications, these models require more precise tools of analysis and, as a consequence, attract fundamental study. As a rule, most of these models are studied by analytical-numerical methods (Galerkin-type methods). On the other hand, the maximum principle is an efficient tool in the qualitative theory of PDEs: the range of problems which the maximum principle allows one to study belongs to the class of rather topical problems on the well-posedness of so-called general boundary-value problems for higher-order differential equations, originating from the works of L. Hörmander and M. Vishik, who used the theory of extensions to prove the existence of well-posed boundary-value problems for linear differential equations of arbitrary order with constant complex coefficients in a bounded domain with smooth boundary. This theory was further developed in the works of G. Grubb [12], L. Hörmander [13], and A. Posilicano [20]. Later, the problem of well-posedness of boundary-value problems for various types of second order differential equations and systems was studied by V. Burskii and A. 
Zhedanov [3], who developed a method of traces associated with a differential operator and applied this method to study the Poncelet, Abel and Goursat problems, and also by I. Kmit [14]. In the author's previous works (see [7]--[8]), qualitative methods were developed for studying the Cauchy problem, as well as the Dirichlet and Neumann problems (which are nonstandard in the case of hyperbolic equations), for linear fourth-order equations (and, more generally, for equations of any even order \(2m\), \(m\geq 2\)) with the help of operator methods (L-traces, the theory of extensions, the moment problem, the duality method "equation-domain", and others). As concerns the maximum principle, at present there are no results for fourth-order equations even in the linear case. As mentioned above, even in the simple cases of the one-dimensional wave equation [1] and the second-order telegraph equation [16], the maximum principle is quite different from its elliptic and parabolic counterparts. Following R. Ortega and A. Robles-Pérez [19], we introduce the definition of the maximum principle for hyperbolic equations which will be used later. _Definition 1._[19]. Let \(\mathcal{L}\) be a linear differential operator acting on functions \(u:\,D\to R\) in some domain \(D\). These functions belong to a certain family \(\mathcal{B}\), which encodes boundary conditions or other requirements. It is said that \(\mathcal{L}\) satisfies the maximum principle if \[\mathcal{L}u\geq 0,\ u\in\mathcal{B},\] implies \(u\geq 0\) in \(D\). In further works of these authors (see [16], [17], [18]), the maximum principle was studied for weak bounded doubly periodic solutions in the space \(L^{\infty}\) of the telegraph equation with a parameter \(\lambda\) in the lower-order term, in one, two and three space dimensions, including the case of variable coefficients. The precise condition on \(\lambda\) under which the maximum principle remains valid was found. A method of upper and lower solutions associated with the nonlinear equation was also introduced, which allows one to obtain analogous results (uniqueness, existence and regularity theorems) for telegraph equations with an external nonlinear forcing by applying the maximum principle. The case when the external forcing belongs to a certain space of measures was considered as well. The maximum principle for general quasilinear hyperbolic systems with dissipation was proved in [15]. There, two estimates were given for the solution to the general quasilinear hyperbolic system, the concept of dissipation (strong dissipation and weak dissipation) was introduced, and some maximum principles for quasilinear hyperbolic systems with dissipation were stated. Using the maximum principle, the existence and uniqueness theorems for the global smooth solution to the Cauchy problem for the quasilinear hyperbolic system under consideration were reproved. Thus, the problem of proving the maximum principle for weak solutions remains more complicated, and at the same time becomes more interesting, in the case of fourth-order hyperbolic equations, especially for non-classical boundary value problems with data of weak regularity. There are no results on the maximum principle even for the model case of linear two-dimensional fourth-order hyperbolic equations with constant coefficients and without lower-order terms. 
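To fix ideas, here is one minimal concrete instance of such a model equation (our illustrative choice, not taken from the cited works): \[\frac{\partial^{4}u}{\partial x_{1}^{4}}-5\frac{\partial^{4}u}{\partial x_{1}^{2}\partial x_{2}^{2}}+4\frac{\partial^{4}u}{\partial x_{2}^{4}}=f(x),\] for which the characteristic equation \(\lambda^{4}-5\lambda^{2}+4=0\) has the simple real roots \(\lambda=\pm 1,\,\pm 2\), none equal to \(\pm i\); hence the symbol is non-degenerate, the equation is hyperbolic in the sense described above, and it possesses four distinct families of characteristic lines \(x_{1}\pm x_{2}=\mathrm{const}\) and \(2x_{1}\pm x_{2}=\mathrm{const}\).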
Moreover, we cannot use the usual notion of traces in the case of initial data of weak regularity, and we arrive at the notion of \(L\)-traces, i.e. traces associated with the differential operator (see Section 1). Let us recall (see, for example, [2]) that \(L\)-traces exist for weak solutions from the space \(L^{2}\) even in situations where the classical notion of trace does not apply to such solutions. ## 1 Statement of the problem and auxiliary definitions Let us start to establish the maximum principle for weak solutions to the Cauchy problem for Eq. (0.1) in some admissible planar domain. Naturally, in the hyperbolic case the characteristics of the equation play a crucial role. Let \(C_{j},\,j=1,\,2,\,3,\,4,\) be the characteristics. We call the slope angle of the \(j\)-th characteristic the solution \(\varphi_{j}\) of the equation \(-\tan\varphi_{j}=\lambda_{j};\) the angle between the \(j\)-th and \(k\)-th characteristics satisfies \(\varphi_{k}-\varphi_{j}\neq\pi l,\,l\in Z,\) where \(\lambda_{j}\neq\pm i\) are the real and simple roots of the characteristic equation, \(j,\,k=1,\,2,\,3,\,4.\) Let also \(\Gamma_{0}:=\{x_{1}\in[a,\,b],\,x_{2}=0\},\) and define the domain \(\Omega\) as the domain bounded by the characteristics \(C_{j},\,j=1,\,2,\,3,\,4,\) and \(\Gamma_{0}.\) Consider also the following Cauchy problem for Eq. (0.1) on \(\Gamma_{0}:\) \[u|_{\Gamma_{0}}=\varphi(x),\,u^{\prime}_{\nu}|_{\Gamma_{0}}=\psi(x),\,u^{\prime\prime}_{\nu\nu}|_{\Gamma_{0}}=\sigma(x),\,u^{\prime\prime\prime}_{\nu\nu\nu}|_{\Gamma_{0}}=\chi(x), \tag{1.1}\] where \(\varphi,\,\psi,\,\sigma\) and \(\chi\) are given functions of weak regularity on \(\Gamma_{0},\) in the general case \(\varphi,\,\psi,\,\sigma,\,\chi\in L^{2}(\Gamma_{0}),\) and \(\nu\) is the outer normal to \(\Gamma_{0}.\) _Definition 2._[21]. We call a domain \(D\subset\{(x_{1},\,x_{2}):\,x_{1}\in(-\infty,\,+\infty),\,x_{2}>0\}\) in the half-plane \(x_{2}>0\) an admissible domain if it has the property that for each point \(C\in D\) the corresponding characteristic domain \(\Omega\) is also contained in \(D\). More generally, \(D\) is admissible if it is a finite or countable union of characteristic pentagons (in the case of fourth-order equations with constant coefficients, i.e. when there exist 4 distinct real characteristic lines). Establishing the maximum principle in this situation allows us to obtain local properties of the solution to the Cauchy problem (0.1)--(1.1) at an arbitrary interior point \(C\in D.\) We will consider weak solutions to the problem (0.1)--(1.1) from \(D(L)\), the domain of definition of the maximal operator associated with the differential operation \(\mathcal{L}\) in Eq. (0.1). Following [2], [6], we recall the corresponding definitions. In the bounded domain \(\Omega\) we consider the linear differential operation \(\mathcal{L}\) of order \(m,\)\(m\geq 2,\) and the formally adjoint operation \(\mathcal{L}^{+}\): \[\mathcal{L}(D_{x})=\sum_{|\alpha|\leq m}a_{\alpha}D^{\alpha},\,\mathcal{L}^{+}(D_{x})=\sum_{|\alpha|\leq m}D^{\alpha}(a_{\alpha}\,\cdot), \tag{1.2}\] where \(\alpha=(\alpha_{1},\,\alpha_{2},...,\alpha_{n})\) is a multi-index and \(|\alpha|=\alpha_{1}+\alpha_{2}+...+\alpha_{n}\). Note that for Eq. (0.1) \(n=2,\)\(m=4.\) _Definition 3. Minimal operator._[2]. 
Let us consider the differential operation (1.2) on functions from the space \(C_{0}^{\infty}(\Omega).\) The minimal operator \(L_{0}\) is defined as the extension of the operation from \(C_{0}^{\infty}(\Omega)\) to the set \(D(L_{0}):=\overline{C_{0}^{\infty}(\Omega)}.\) The closure is taken in the graph norm of the operator \(L\): \(||u||_{L}^{2}:=||u||_{L_{2}(\Omega)}^{2}+||Lu||_{L_{2}(\Omega)}^{2}.\) _Definition 4. Maximal operator._[2]. The maximal operator \(L\) is defined as the restriction of the differential operation \(\mathcal{L}(D_{x})\) to the set \(D(L):=\{u\in L^{2}(\Omega):\,Lu\in L^{2}(\Omega)\}.\) _Definition 5._[2]. The operator \(\tilde{L}\) is defined as the extension of the minimal operator \(L_{0}\) to the set \(D(\tilde{L}):=\overline{C^{\infty}(\Omega)}.\) _Definition 6. Regular operator._[2]. The maximal operator is called regular if \(D(L)=D(\tilde{L}).\) It is easy to see that in the case of the fourth-order differential operation (0.1) the maximal operator is regular and \(D(L)=D(\tilde{L})=H^{4}(\Omega),\)\(D(L_{0})=H_{0}^{4}(\Omega).\) The definition of a weak solution to the problem (0.1)--(1.1) from the space \(D(L)\) is closely connected with the notion of \(L\)-traces, that is, traces associated with the differential operator \(L.\) _Definition 7. L-traces._[6]. Assume that for a function \(u\in D(\tilde{L})\) there exist linear continuous functionals \(L_{(p)}u\) over the space \(H^{m-p-1/2}(\partial\Omega),\)\(p=0,1,2,...,m-1,\) such that the following equality is satisfied: \[(Lu,v)_{L^{2}(\Omega)}-(u,L^{+}v)_{L^{2}(\Omega)}=\sum_{j=0}^{m-1}(L_{(m-1-j)}u,\,\partial_{\nu}^{(j)}v). \tag{1.3}\] The functionals \(L_{(p)}u\) are called the \(L_{(p)}\)-traces of the function \(u\in D(\tilde{L}).\) Here \((\cdot,\,\cdot)_{L^{2}(\Omega)}\) is the scalar product in the Hilbert space \(L^{2}(\Omega).\) Finally, we come to the definition of a weak solution to the problem (0.1)--(1.1): _Definition 8._ We will call the function \(u\in D(L)\) a weak solution to the Cauchy problem (0.1)--(1.1) if it satisfies the following integral identity \[(f,\,v)_{L^{2}(\Omega)}-(u,\,L^{+}v)_{L^{2}(\Omega)}=\sum_{j=0}^{3}(L_{(3-j)}u,\,\partial_{\nu}^{(j)}v), \tag{1.4}\] for any function \(v\in C_{0}^{\infty}(\Omega).\) The functionals \(L_{(p)}u\) are called the \(L_{(p)}\)-traces of the function \(u,\)\(p=0,\)\(1,\)\(2,\)\(3,\) and are completely determined by the initial functions \(\varphi,\,\psi,\,\sigma,\,\chi\) in the following way [6]: \[\begin{array}{c}L_{(0)}u=-L(x)u|_{\partial\Omega}=-L(\nu)\varphi;\\ \\ L_{(1)}u=L(\nu)\psi+\alpha_{1}\varphi^{\prime}_{\tau}+\alpha_{2}\varphi;\\ \\ L_{(2)}u=-L(\nu)\sigma+\beta_{1}\psi^{\prime}_{\tau}+\beta_{2}\psi+\beta_{3}\varphi^{\prime\prime}_{\tau\tau}+\beta_{4}\varphi^{\prime}_{\tau}+\beta_{5}\varphi;\\ \\ L_{(3)}u=L(\nu)\chi+\delta_{1}\varphi^{\prime\prime\prime}_{\tau\tau\tau}+\delta_{2}\sigma+\delta_{3}\psi^{\prime\prime}_{\tau\tau}+\delta_{4}\psi^{\prime}_{\tau}+\delta_{5}\psi+\delta_{6}\varphi^{\prime\prime}_{\tau\tau}+\delta_{7}\varphi^{\prime}_{\tau}+\delta_{8}\varphi.\end{array} \tag{1.5}\] Here \(\alpha_{i},\,i=1,\,2,\,\beta_{j},\,j=1,\,2,...,\,5,\) and \(\delta_{k},\,k=1,\,...,\,8\) are smooth functions, completely determined by the coefficients of Eq. (0.1). _Remark 1._ We can use a general form of the operators \(\gamma_{j}\) in the right-hand side of the identity (1.4) instead of the differentiation operators \(\partial_{\nu}^{(j)}v\). 
Indeed, we define \(\gamma_{j}=p_{j}\gamma\), where \(\gamma:\,u\in H^{m}(\Omega)\rightarrow(u|_{\partial\Omega},\,...,\,u_{\nu}^{(m-1)}|_{\partial\Omega})\in H^{(m)}=H^{m-1/2}(\partial\Omega)\times H^{m-3/2}(\partial\Omega)\times...\times H^{1/2}(\partial\Omega)\), and \(p_{j}:\,H^{(m)}\to H^{m-j-1/2}(\partial\Omega)\) is the projection. As mentioned above, examples show (see [2]) that in the general case ordinary traces of solutions \(u\in D(L)\) do not exist, even in the sense of distributions, already for the simplest hyperbolic equations. Indeed, for the wave equation \(Lu=\frac{\partial^{2}u}{\partial x_{1}\partial x_{2}}=0\) in the unit disk \(K:\,|x|<1\), the solution \(u(x)=(1-x_{1}^{2})^{-\frac{5}{8}}\) belongs to \(L^{2}(K)\), but \(<u|_{\partial K},1>_{\partial K}=\infty\) in the sense that \(\lim_{r\to 1-0}\int\limits_{|x|=r}u(x)ds_{x}=\infty\), so the trace \(u|_{\partial K}\) does not exist even as a distribution. However, for every solution \(u\in L^{2}(K)\) the \(L_{(0)}\)-trace \(L_{(0)}u:=-L(x)u(x)|_{|x|=1}=-x_{1}x_{2}u(x)|_{|x|=1}\in L^{2}(\partial K).\) Likewise, the \(L_{(1)}\)-trace \(L_{(1)}u\) exists for every \(u\in L^{2}(K)\): \[L_{(1)}u=\left(L(x)u^{\prime}_{\nu}+L^{\prime}_{\tau}u^{\prime}_{\tau}+\frac{1}{2}L^{\prime\prime}_{\tau\tau}u\right)|_{\partial K}\in H^{-\frac{3}{2}}(\partial K),\] where \(\tau\) is the angular coordinate, \(u^{\prime}_{\tau}\) is the tangential derivative, and \(L(x)=x_{1}x_{2}\) is the symbol of the operator \(L=\frac{\partial^{2}}{\partial x_{1}\partial x_{2}}\). ## 2 Maximum principle for the weak solutions of the Cauchy problem We prove here the first simple case of the maximum principle for the weak solution of the Cauchy problem (0.1)--(1.1) in an admissible plane domain \(\Omega\) bounded by the distinct, pairwise non-parallel characteristics \(C_{j}\), \(j=1,\,2,...,\,4,\) and the initial line \(\Gamma_{0}\). _Theorem 1. Maximum principle._ Let \(u\in D(L)\) satisfy the following inequalities: \[Lu=f\leq 0,\,\,\,x\in D, \tag{2.1}\] and \[L_{(0)}u\mid_{\Gamma_{0}}\geq 0,\,L_{(1)}u|_{\Gamma_{0}}\geq 0,\,L_{(2)}u|_{\Gamma_{0}}\geq 0,\,L_{(3)}u|_{\Gamma_{0}}\geq 0. \tag{2.2}\] Then \(u\leq 0\) in \(D\). _Proof._ Due to the homogeneity of the symbol, \(L(\xi)=a_{0}\xi_{1}^{4}+a_{1}\xi_{1}^{3}\xi_{2}+a_{2}\xi_{1}^{2}\xi_{2}^{2}+a_{3}\xi_{1}\xi_{2}^{3}+a_{4}\xi_{2}^{4}=<\xi,\,a^{1}><\xi,\,a^{2}><\xi,\,a^{3}><\xi,\,a^{4}>\), \(\xi=(\xi_{1},\,\xi_{2})\in R^{2}\), we can rewrite equation (0.1) in the following form: \[<\nabla,\,a^{1}><\nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u=f(x). \tag{2.3}\] The vectors \(a^{j}=(a_{1}^{j},\,a_{2}^{j})\), \(j=1,\,2,\,3,\,4,\) are determined by the coefficients \(a_{i}\), \(i=0,\,1,\,2,\,3,\,4\), and \(<a,\,b>=a_{1}\bar{b}_{1}+a_{2}\bar{b}_{2}\) is the scalar product in \(C^{2}\). It is easy to see that the vector \(a^{j}\) is a tangent vector of the \(j\)-th characteristic, whose slope angle \(\varphi_{j}\) is determined by \(-\tan\varphi_{j}=\lambda_{j}\), \(j=1,\,2,\,3,\,4\). In what follows, we also consider the vectors \(\tilde{a}^{j}=(-\bar{a}_{2}^{j},\,\bar{a}_{1}^{j})\), \(j=1,\,2,\,3,\,4\). It is obvious that \(<\tilde{a}^{j},\,a^{j}>=0\), so \(\tilde{a}^{j}\) is a normal vector of the \(j\)-th characteristic. 
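For the model operator \(L=\partial_{x_{1}}^{4}-5\,\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}+4\,\partial_{x_{2}}^{4}\) mentioned in the Introduction, this factorization can be written out explicitly (a routine verification, included only as an illustration): \[L(\partial_{x})=\big(\partial_{x_{1}}-\partial_{x_{2}}\big)\big(\partial_{x_{1}}+\partial_{x_{2}}\big)\big(\partial_{x_{1}}-2\partial_{x_{2}}\big)\big(\partial_{x_{1}}+2\partial_{x_{2}}\big),\] so one may take \(a^{1}=(1,-1),\,a^{2}=(1,1),\,a^{3}=(1,-2),\,a^{4}=(1,2)\) and, correspondingly, \(\tilde{a}^{1}=(1,1),\,\tilde{a}^{2}=(-1,1),\,\tilde{a}^{3}=(2,1),\,\tilde{a}^{4}=(-2,1)\). Each factor \(<\nabla,\,a^{j}>\) annihilates every function of the form \(g_{j}(<x,\,\tilde{a}^{j}>)\), so any sufficiently smooth \(u=\sum_{j=1}^{4}g_{j}(<x,\,\tilde{a}^{j}>)\) solves the homogeneous equation; moreover, since all terms of this \(L\) have order four and real coefficients, \(L^{+}=L\), and the same functions provide examples of elements of \(\mathrm{Ker}\,L^{+}\) used below.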
Use the definitions 7 and 8 for the case \(m=4\), that is fourth-order operator in Eq.(0.1), and domain \(\Omega\), which is restricted by the characteristics \(C_{j}\), \(j=1\), 2, 3, 4 and \(\Gamma_{0}:\) \[\int\limits_{\Omega}\{Lu\cdot\bar{v}-u\cdot\overline{L^{+}v}\}dx=\sum\limits_ {k=0}^{3}\int\limits_{\partial\Omega}L_{(3-k)}u\cdot\partial_{\nu}^{(k)}v\,ds=\] \[=\sum\limits_{k=0}^{3}\int\limits_{C_{1}}L_{(3-k)}u\cdot\partial_{\nu}^{(k)}v \,ds+\sum\limits_{k=0}^{3}\int\limits_{C_{2}}L_{(3-k)}u\cdot\partial_{\nu}^{( k)}v\,ds+\sum\limits_{k=0}^{3}\int\limits_{C_{3}}L_{(3-k)}u\cdot\partial_{\nu}^{( k)}v\,ds+\sum\limits_{k=0}^{3}\int\limits_{C_{4}}L_{(3-k)}u\cdot \partial_{\nu}^{(k)}v\,ds+\] \[+\sum\limits_{k=0}^{3}\int\limits_{\Gamma_{0}}L_{(3-k)}u\cdot\partial_{\nu}^{ (k)}v\,ds. \tag{2.4}\] Taking into account the representation (2.3), we arrive to \[\int\limits_{\Omega}Lu\cdot\bar{v}\,dx=\int\limits_{\Omega}<\nabla,\,a^{1}>< \nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u\cdot\bar{v}\,dx=\] \[\int\limits_{\partial\Omega}<\nu,\,a^{1}>\cdot<\nabla,\,a^{2}><\nabla,\,a^{3 }><\nabla,\,a^{4}>u\cdot\bar{v}\,ds-\] \[\int\limits_{\Omega}<\nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u\cdot \overline{<\nabla,\,a^{1}>v}\,dx,\] and further: \[\int\limits_{\Omega}Lu\cdot\bar{v}\,dx=\int\limits_{\partial\Omega}<\nu,\,a^{ 1}>\cdot<\nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u\cdot\bar{v}\,ds-\] \[\int\limits_{\partial\Omega}<\nu,\,a^{2}>\cdot<\nabla,\,a^{3}><\nabla,\,a^{4}> u\cdot\overline{<\nabla,\,a^{1}>v}\,ds+\] \[+\int\limits_{\partial\Omega}<\nu,\,a^{3}>\cdot<\nabla,\,a^{4}>u\cdot \overline{<\nabla,\,a^{2}><\nabla,\,a^{1}>v}\,ds-\] \[-\int\limits_{\partial\Omega}<\nu,\,a^{4}>\cdot u\cdot\overline{<\nabla,\,a^{3 }><\nabla,\,a^{2}><\nabla,\,a^{1}>v}\,ds+\] \[+\int\limits_{\Omega}u\cdot\overline{<\nabla,\,a^{4}><\nabla,\,a^{3}><\nabla,\,a^{2 }><\nabla,\,a^{1}>v}\,dx.\] Since \(<\nabla,\,a^{4}><\nabla,\,a^{3}><\nabla,\,a^{2}><\nabla,\,a^{1}>v=L^{+}v\), and determining \[\tilde{L}_{(0)}u:=<\nu,\,a^{4}>u,\,\,\tilde{L}_{(1)}u:=<\nu,\,a^{3}>\cdot< \nabla,\,a^{4}>u,\] \[\tilde{L}_{(2)}u:=<\nu,\,a^{2}>\cdot<\nabla,\,a^{3}><\nabla,\,a^{4}>u,\] \[\tilde{L}_{(3)}u=L_{(3)}u=<\nu,\,a^{1}>\cdot<\nabla,\,a^{2}><\nabla,\,a^{3}>< \nabla,\,a^{4}>u\] that are analogues of \(L-\)traces from the formula (2.4). Such a way we have \[\int\limits_{\Omega}\{Lu\cdot\bar{v}-u\cdot\overline{L^{+}v}\}\,dx=\int \limits_{\partial\Omega}L_{(3)}u\cdot\bar{v}\,ds-\int\limits_{\partial\Omega} \tilde{L}_{(2)}u\cdot\overline{<\nabla,\,a^{1}>v}\,ds+\] \[+\int\limits_{\partial\Omega}\tilde{L}_{(1)}u\cdot\overline{<\nabla,\,a^{2}>< \nabla,\,a^{1}>v}\,ds-\int\limits_{\partial\Omega}\tilde{L}_{(0)}u\cdot \overline{<\nabla,\,a^{3}><\nabla,\,a^{2}><\nabla,\,a^{1}>v}\,ds. \tag{2.5}\] Difference between formulas (2.4) and (2.5) is that natural \(L_{(3-k)}\) traces in (2.4) are multiplied by \(k-\) derivative by outer normal \(\nu\) of truncated function \(v\) : \(\partial_{\nu}^{(k)}v\), on the other hand, in (2.5) we determined by \(\tilde{L}_{(3-k)}\) some expressions which multiplied by differential operators \(L_{k}^{+}v\) of order \(k\) and which can serve as analogous of natural \(L_{(3-k)}\) traces, \(k=0,\,1,\,2,\,3.\) So, in the (2.5) \[L_{1}^{+}v:=<\nabla,\,a^{1}>v,\,L_{2}^{+}v:=<\nabla,\,a^{2}><\nabla,\,a^{1}>v,\] \[L_{0}^{+}v=v,\,L_{3}^{+}v:=<\nabla,\,a^{3}><\nabla,\,a^{2}><\nabla,\,a^{1}>v.\] Let \(v\in KerL^{+}\) in (2.5) and calculate \(L-\) traces on \(\partial\Omega=C_{1}\cup C_{2}\cup C_{3}\cup C_{4}\cup\Gamma_{0}\). 
For instance, for \(L_{(3)}u\) we obtain: \(L_{(3)}u=<\nu,\,a^{1}><\nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u\), and use that \(<\nabla,\,a^{j}>u=<\nu,\,a^{j}>u^{\prime}_{\nu}+<\tau,\,a^{j}>u^{\prime}_{ \tau},\,j=1,\,2,\,3,\,4\), where \(\nu-\) normal vector, \(\tau-\) tangent vector. Due to presence the product \(<\nu,\,a^{1}>,\,L_{(3)}u=0\) on characteristic \(C_{1}\), normal vector \(\tilde{a}^{1}\) of which is orthogonal to the vector \(a^{1}\). On the other parts of \(\partial\Omega\) there will vanish the terms containing \(<\nu,a^{j}>\) on \(C_{j}\). After that \[\int\limits_{\partial\Omega}<\nu,\,a^{1}><\nabla,\,a^{2}><\nabla,\,a^{3}>< \nabla,\,a^{4}>u=\int\limits_{\Gamma_{0}}L_{(3)}u\,ds+\] \[<\tilde{a}^{2},\,a^{1}><a^{2},\,a^{2}><\tilde{a}^{2},\,a^{3}><\tilde{a}^{2}, \,a^{4}>\int\limits_{C_{2}}u_{\nu\nu\tau}\,ds+\] \[<\tilde{a}^{3},\,a^{1}><\tilde{a}^{3},\,a^{2}><a^{3},\,a^{3}><\tilde{a}^{3}, \,a^{4}>\int\limits_{C_{3}}u_{\nu\tau}\,ds+\] \[<\tilde{a}^{4},\,a^{1}><\tilde{a}^{4},\,a^{2}><\tilde{a}^{4},\,a^{3}><a^{4}, \,a^{4}>\int\limits_{C_{4}}u_{\nu\tau}\,ds+\] \[\{<\tilde{a}^{2},\,a^{1}><a^{2},\,a^{2}><\tilde{a}^{2},\,a^{3}><a^{2},\,a^{4}> +<\tilde{a}^{2},\,a^{1}><a^{2},\,a^{2}><a^{2},\,a^{3}><\tilde{a}^{2},\,a^{4}> \}\int\limits_{C_{2}}u_{\tau\tau\nu}\,ds+\] \(\{<\tilde{a}^{3},\,a^{1}><\tilde{a}^{3},\,a^{2}><a^{3},\,a^{3}><a^{3},\,a^{4}>+< \tilde{a}^{3},\,a^{1}><a^{3},\,a^{2}><a^{3},\,a^{3}><\tilde{a}^{3},\,a^{4}>\}\int \limits_{C_{3}}u_{\tau\tau\nu}\,ds+\) \(\{<\tilde{a}^{4},\,a^{1}><\tilde{a}^{4},\,a^{2}><a^{4},\,a^{3}><a^{4},\,a^{4}>+< \tilde{a}^{4},\,a^{1}><a^{4},\,a^{2}><\tilde{a}^{4},\,a^{3}><a^{4},\,a^{4}>\} \int\limits_{C_{4}}u_{\tau\tau\nu}\,ds+\) \(<\tilde{a}^{2},\,a^{1}><a^{2},\,a^{2}><a^{2},\,a^{3}><a^{2},\,a^{4}>\int \limits_{C_{2}}u_{\tau\tau\tau}\,ds+\) \(<\tilde{a}^{3},\,a^{1}><a^{3},\,a^{2}><a^{3},\,a^{3}><a^{3},\,a^{4}>\int \limits_{C_{3}}u_{\tau\tau\tau}\,ds+\) \(<\tilde{a}^{4},\,a^{1}><a^{4},\,a^{2}><a^{4},\,a^{3}><a^{4},\,a^{4}>\int \limits_{C_{4}}u_{\tau\tau\tau}\,ds+\alpha_{4,1}\int\limits_{C_{2}}u_{\nu\nu} \,ds+\alpha_{4,2}\int\limits_{C_{3}}u_{\nu\nu}\,ds+\) \(\alpha_{5,1}\int\limits_{C_{2}}u_{\nu\tau}\,ds+\alpha_{5,2}\int\limits_{C_{3} }u_{\nu\tau}\,ds+\alpha_{5,3}\int\limits_{C_{4}}u_{\nu\tau}\,ds+\alpha_{6,1} \int\limits_{C_{2}}u_{\tau\tau}\,ds+\alpha_{6,2}\int\limits_{C_{3}}u_{\tau \tau}\,ds+\alpha_{6,3}\int\limits_{C_{4}}u_{\tau\tau}\,ds+\) \(\alpha_{7,1}\int\limits_{C_{2}}u_{\nu}\,ds+\alpha_{7,2}\int\limits_{C_{3}}u_{ \nu}\,ds+\alpha_{7,3}\int\limits_{C_{4}}u_{\nu}\,ds+\alpha_{8,1}\int\limits_{ C_{2}}u_{\tau}\,ds+\alpha_{8,2}\int\limits_{C_{3}}u_{\tau}\,ds+\alpha_{8,3} \int\limits_{C_{4}}u_{\tau}\,ds.\) Here correspondent coefficients \(\alpha_{i,j}\) were numerated as follows: first index \(i\) indicates the derivative of \(u\): 1) \(u_{\nu\nu\tau}\), 2) \(u_{\nu\tau\tau}\), 3) \(u_{\tau\tau\tau}\), 4) \(u_{\nu\nu}\), 5) \(u_{\nu\tau}\), 6) \(u_{\tau\tau}\), 7) \(u_{\nu}\,8)\,u_{\tau}\), the second index \(j\) indicates \(j+1-\)th characteristic, \(j=1,\,2,\,3\). 
So, now the formula (2.4) has the form: \[\int\limits_{\Omega}Lu\,dx=\int\limits_{\Gamma_{0}}L_{(3)}u\,ds+\alpha_{1,1} \int\limits_{C_{2}}u_{\nu\nu\tau}\,ds+\alpha_{1,2}\int\limits_{C_{3}}u_{\nu \nu\tau}\,ds+\alpha_{1,3}\int\limits_{C_{4}}u_{\nu\nu\tau}\,ds+\] \[\alpha_{2,1}\int\limits_{C_{2}}u_{\tau\tau\nu}\,ds+\alpha_{2,2}\int\limits_{C _{3}}u_{\tau\tau\nu}\,ds+\alpha_{2,3}\int\limits_{C_{4}}u_{\tau\tau\nu}\,ds+\] \[\alpha_{3,1}\int\limits_{C_{2}}u_{\tau\tau}\,ds+\alpha_{3,2}\int\limits_{C_{3} }u_{\tau\tau\tau}\,ds+\alpha_{3,3}\int\limits_{C_{4}}u_{\tau\tau\tau}\,ds+\] \[\alpha_{4,1}\int\limits_{C_{2}}u_{\nu\nu}\,ds+\alpha_{4,2}\int\limits_{C_{3}}u_{ \nu\nu}\,ds+\alpha_{4,3}\int\limits_{C_{4}}u_{\nu\nu}\,ds+\] \(\alpha_{5,1}\int\limits_{C_{2}}u_{\nu\tau}\,ds+\alpha_{5,2}\int\limits_{C_{3}}u_ {\nu\tau}\,ds+\alpha_{5,3}\int\limits_{C_{4}}u_{\nu\tau}\,ds+\alpha_{6,1}\int \limits_{C_{2}}u_{\tau\tau}\,ds+\alpha_{6,2}\int\limits_{C_{3}}u_{\tau\tau}\,ds+ \alpha_{6,3}\int\limits_{C_{4}}u_{\tau\tau}\,ds+\) \(\alpha_{7,1}\int\limits_{C_{2}}u_{\nu}\,ds+\alpha_{7,2}\int\limits_{C_{3}}u_{ \nu}\,ds+\alpha_{7,3}\int\limits_{C_{4}}u_{\nu}\,ds+\alpha_{8,1}\int\limits_{ C_{2}}u_{\tau}\,ds+\alpha_{8,2}\int\limits_{C_{3}}u_{\tau}\,ds+\alpha_{8,3}\int \limits_{C_{4}}u_{\tau}\,ds.\) Coefficients \(\alpha_{i,j}\) are constant and depend on only from coefficients \(a_{0},\,a_{1},\,a_{2},\,a_{3},\,a_{4}\). By analogous way we calculate others \(L-\) traces, \(L_{(0)}u\), \(L_{(1)}u\) and \(L_{(2)}u\). To obtain the statement of the Theorem 1, we choose some arbitrary point \(C\in D\) in admissible plane domain \(D\), draw through this point two arbitrary characteristics, \(C_{1}\) and \(C_{2}\). Another two characteristics (\(C_{3}\) and \(C_{4}\)) we draw through the ends \(a\) and \(b\) of initial line \(\Gamma_{0}\). We determine the points \(O_{1}\) and \(O_{2}\) as intersections of \(C_{1},\,C_{3}\) and \(C_{2},\,C_{4}\) correspondingly: \(O_{1}=C_{1}\cap C_{3},\,O_{2}=C_{2}\cap C_{4}.\) Such a way, domain \(\Omega\) is a pentagon \(aO_{1}CO_{2}b.\) The value of the function \(u\) at the point \(C\in D,\,u(C)\) we estimate from the last equality, integrating by the characteristics \(C_{1}\) and \(C_{2}\) and using conditions (1.1), (1.5)--(2.2). Since, the chosen point \(C\in D\) is arbitrary, we arrive at \(u\leq 0\) in \(D.\) _Remark 2._ In the case of classical solution of the Cauchy problem for the second order hyperbolic equations of the general form with the constant coefficients the statement of the Theorem 1 coincides with the result of [21]. In this case conditions (2.2) have usual form without using the notion of \(L-\)traces (see [21]): \[u|_{\Gamma_{0}}\leq 0,\ u^{\prime}_{\nu}|_{\Gamma_{0}}\leq 0.\] ## Acknowledgments The author thanks to Prof. Iryna Kmit for the drawing the author's attention to the problem of the maximum principle for high-order hyperbolic PDEs and systems, that allowed to apply developed earlier methods of investigations of the fourth-order PDEs. This work is partially supported by the Volkswagen Foundation (the project A131968 "From Modeling and Analysis to Approximation").